Almost periodic cellular neural networks with neutral-type proportional delays
NASA Astrophysics Data System (ADS)
Xiao, Songlin
2018-03-01
This paper presents a new result on the existence, uniqueness and generalised exponential stability of almost periodic solutions for cellular neural networks with neutral-type proportional delays and D operator. Based on some novel differential inequality techniques, a testable condition is derived to ensure that all the state trajectories of the system converge to an almost periodic solution with a positive exponential convergence rate. The effectiveness of the obtained result is illustrated by a numerical example.
Continental collision slowing due to viscous mantle lithosphere rather than topography.
Clark, Marin Kristen
2012-02-29
Because the inertia of tectonic plates is negligible, plate velocities result from the balance of forces acting at plate margins and along their base. Observations of past plate motion derived from marine magnetic anomalies provide evidence of how continental deformation may contribute to plate driving forces. A decrease in convergence rate at the inception of continental collision is expected because of the greater buoyancy of continental than oceanic lithosphere, but post-collisional rates are less well understood. Slowing of convergence has generally been attributed to the development of high topography that further resists convergent motion; however, the role of deforming continental mantle lithosphere on plate motions has not previously been considered. Here I show that the rate of India's penetration into Eurasia has decreased exponentially since their collision. The exponential decrease in convergence rate suggests that contractional strain across Tibet has been constant throughout the collision at a rate of 7.03 × 10⁻¹⁶ s⁻¹, which matches the current rate. A constant bulk strain rate of the orogen suggests that convergent motion is resisted by constant average stress (constant force) applied to a relatively uniform layer or interface at depth. This finding follows new evidence that the mantle lithosphere beneath Tibet is intact, which supports the interpretation that the long-term strain history of Tibet reflects deformation of the mantle lithosphere. Under conditions of constant stress and strength, the deforming continental lithosphere creates a type of viscous resistance that affects plate motion irrespective of how topography evolved.
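The constant-strain-rate argument above implies an exponentially decaying convergence rate: if the bulk strain rate across the orogen is fixed, the width of the deforming zone shrinks as W(t) = W₀ exp(-ε̇t), so the plate rate v = ε̇W decays at the same exponential rate. A minimal sketch; the initial rate v0 below is hypothetical, and only the strain rate 7.03 × 10⁻¹⁶ s⁻¹ comes from the abstract:

```python
import math

SEC_PER_YEAR = 3.156e7  # seconds per year

def convergence_rate(v0_mm_per_yr, strain_rate_per_s, t_myr):
    """Convergence rate v(t) = v0 * exp(-strain_rate * t) implied by a
    constant bulk strain rate acting on a shortening zone."""
    t_s = t_myr * 1e6 * SEC_PER_YEAR
    return v0_mm_per_yr * math.exp(-strain_rate_per_s * t_s)

eps = 7.03e-16   # 1/s, constant bulk strain rate across Tibet (from the abstract)
v0 = 80.0        # mm/yr, assumed (hypothetical) initial post-collision rate
for t in (0.0, 25.0, 50.0):  # Myr since collision
    print(f"t = {t:5.1f} Myr: v = {convergence_rate(v0, eps, t):6.2f} mm/yr")
```

The exponential form is what makes the fit diagnostic: equal time intervals multiply the rate by a fixed factor.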
Belkić, Dzevad
2006-12-21
This study deals with the most challenging numerical aspect of solving the quantification problem in magnetic resonance spectroscopy (MRS). The primary goal is to investigate whether it is feasible to carry out a rigorous computation in finite-precision arithmetic to reconstruct exactly all the machine-accurate input spectral parameters of every resonance from a synthesized noiseless time signal. We also consider simulated time signals embedded in random Gaussian distributed noise of a level comparable to the weakest resonances in the corresponding spectrum. The present choice for this high-resolution task in MRS is the fast Padé transform (FPT). All the sought spectral parameters (complex frequencies and amplitudes) can unequivocally be reconstructed from a given input time signal by using the FPT. Moreover, the present computations demonstrate that the FPT can achieve spectral convergence, which represents an exponential convergence rate as a function of the signal length for a fixed bandwidth. Such an extraordinary feature equips the FPT with exemplary high-resolution capabilities that are, in fact, theoretically unlimited. This is illustrated in the present study by the exact reconstruction (within machine accuracy) of all the spectral parameters from an input time signal composed of 25 harmonics, i.e. complex damped exponentials, including those for tightly overlapped and nearly degenerate resonances whose chemical shifts differ by an exceedingly small fraction of only 10⁻¹¹ ppm. Moreover, without exhausting even a quarter of the full signal length, the FPT is shown to retrieve exactly all the input spectral parameters defined with 12 digits of accuracy. Specifically, we demonstrate that when the FPT is close to the convergence region, an unprecedented phase transition occurs, since literally a few additional signal points are sufficient to reach the full 12-digit accuracy with an exponentially fast rate of convergence.
This is the critical proof-of-principle for the high-resolution power of the FPT for machine accurate input data. Furthermore, it is proven that the FPT is also a highly reliable method for quantifying noise-corrupted time signals reminiscent of those encoded via MRS in clinical neuro-diagnostics.
Nonlinear stability of the 1D Boltzmann equation in a periodic box
NASA Astrophysics Data System (ADS)
Wu, Kung-Chien
2018-05-01
We study the nonlinear stability of the Boltzmann equation in the 1D periodic box with size 1/ε, where ε is the Knudsen number. The convergence rate is algebraic in the small-time region and exponential in the large-time region. Moreover, the exponential rate depends on the size of the domain (the Knudsen number). This problem is highly nonlinear, and hence more careful analysis is needed to control the nonlinear term.
NASA Astrophysics Data System (ADS)
Liang, L. L.; Arcus, V. L.; Heskel, M.; O'Sullivan, O. S.; Weerasinghe, L. K.; Creek, D.; Egerton, J. J. G.; Tjoelker, M. G.; Atkin, O. K.; Schipper, L. A.
2017-12-01
Temperature is a crucial factor in determining the rates of ecosystem processes such as leaf respiration (R) - the flux of plant-respired carbon dioxide (CO2) from leaves to the atmosphere. Generally, respiration rate increases exponentially with temperature as modelled by the Arrhenius equation, but a recent study (Heskel et al., 2016) showed a universally convergent temperature response of R using an empirical exponential/polynomial model whereby the exponent in the Arrhenius model is replaced by a quadratic function of temperature. The exponential/polynomial model has been used elsewhere to describe shoot respiration and plant respiration. What are the principles that underlie these empirical observations? Here, we demonstrate that macromolecular rate theory (MMRT), based on transition state theory for chemical kinetics, is equivalent to the exponential/polynomial model. We re-analyse the data from Heskel et al. (2016) using MMRT to show this equivalence and thus provide an explanation, based on thermodynamics, for the convergent temperature response of R. Using statistical tools, we also show the equivalent explanatory power of MMRT when compared to the exponential/polynomial model, and the superiority of both of these models over the Arrhenius function. Three meaningful parameters emerge from MMRT analysis: the temperature at which the rate of respiration is maximum (the so-called optimum temperature, Topt), the temperature at which the respiration rate is most sensitive to changes in temperature (the inflection temperature, Tinf) and the overall curvature of the log(rate) versus temperature plot (the so-called change in heat capacity for the system, ΔCp‡). The latter term originates from the change in heat capacity between an enzyme-substrate complex and an enzyme transition-state complex in enzyme-catalysed metabolic reactions. From MMRT, we find the average Topt and Tinf of R are 67.0±1.2 °C and 41.4±0.7 °C across global sites.
The average curvature (average ΔCp‡) is -1.2±0.1 kJ mol⁻¹ K⁻¹. MMRT extends the classic transition state theory to enzyme-catalysed reactions and scales up to more complex processes including micro-organism growth rates and ecosystem processes.
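The exponential/polynomial model described above takes the form R(T) = exp(a + bT + cT²), so with c < 0 the optimum temperature falls at Topt = -b/(2c). A sketch fitting that quadratic exponent by ordinary least squares on synthetic data; the coefficients are hypothetical and this illustrates the model class, not a reproduction of the Heskel et al. analysis:

```python
def fit_quadratic(T, lnR):
    """Least-squares fit lnR ≈ a + b*T + c*T**2 via the 3x3 normal equations."""
    # Moment sums S[k] = sum(T**k) fill the normal-equation matrix.
    S = [sum(t**k for t in T) for k in range(5)]
    M = [[S[0], S[1], S[2]],
         [S[1], S[2], S[3]],
         [S[2], S[3], S[4]]]
    r = [sum(y * t**k for t, y in zip(T, lnR)) for k in range(3)]
    # Gaussian elimination with partial pivoting (fine for a 3x3 system).
    for i in range(3):
        p = max(range(i, 3), key=lambda k: abs(M[k][i]))
        M[i], M[p] = M[p], M[i]
        r[i], r[p] = r[p], r[i]
        for k in range(i + 1, 3):
            f = M[k][i] / M[i][i]
            for j in range(i, 3):
                M[k][j] -= f * M[i][j]
            r[k] -= f * r[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (r[i] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x  # a, b, c

# Synthetic respiration curve with a known optimum (hypothetical numbers).
a_true, b_true, c_true = -2.0, 0.24, -0.0018   # lnR peaks at T = -b/(2c)
T = [float(t) for t in range(5, 46, 5)]
lnR = [a_true + b_true * t + c_true * t * t for t in T]
a, b, c = fit_quadratic(T, lnR)
T_opt = -b / (2.0 * c)
print(f"fitted optimum temperature: {T_opt:.1f} °C")
```

With exact quadratic data the fit recovers the generating coefficients, so Topt comes straight out of the fitted exponent.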
NASA Astrophysics Data System (ADS)
Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric
2014-08-01
In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of the parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability and makes an online-learned model of the system available. Most MRAC methods, however, require persistent excitation (PE) of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and is often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing online recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.
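The core mechanism, learning from a recorded data stack rather than relying on persistent excitation, can be sketched as a batch-gradient parameter update driven by a finite set of recorded regressor/measurement pairs: once the recorded regressors span the parameter space, the parameter error contracts exponentially. Everything below (bases, gain, data points) is a toy assumption, not the paper's flight-tested algorithm:

```python
import math

# True uncertainty y = W·phi(x) with assumed known nonlinear bases (toy setup).
W_true = [1.5, -0.7, 0.3]

def phi(x):
    return [1.0, math.sin(x), math.cos(x)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Record a finite, sufficiently rich batch once; no persistent excitation needed.
recorded = [(phi(x), dot(W_true, phi(x))) for x in (0.3, 1.1, 2.5, 4.0)]

# Concurrent-learning-style batch gradient update driven by the recorded stack.
W_hat = [0.0, 0.0, 0.0]
gamma = 0.1   # learning gain; must be small enough for the recorded Gram matrix
errs = []
for _ in range(500):
    grad = [0.0, 0.0, 0.0]
    for p, y in recorded:
        e = y - dot(W_hat, p)
        for i in range(3):
            grad[i] += e * p[i]
    for i in range(3):
        W_hat[i] += gamma * grad[i]
    errs.append(max(abs(a - b) for a, b in zip(W_hat, W_true)))

print("final parameter error:", errs[-1])
```

The contraction factor per step is governed by the smallest eigenvalue of the recorded-data Gram matrix, mirroring the paper's statement that the convergence rate scales with the minimum singular value of the recorded-data matrix.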
Approximation of the exponential integral (well function) using sampling methods
NASA Astrophysics Data System (ADS)
Baalousha, Husam Musa
2015-04-01
Exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods: Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH) have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1 × 10⁻⁸. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
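For intuition, a one-dimensional Latin hypercube (stratified) estimate of the well function can be built from the identity E1(u) = exp(-u) ∫₀^∞ exp(-s)/(s + u) ds, drawing one exponential sample per stratum of the unit interval. This is a sketch of the idea, not the paper's exact scheme:

```python
import math, random

def well_function_lhs(u, n=100_000, seed=42):
    """Estimate the exponential integral E1(u) (the Theis well function) by
    Latin hypercube (stratified) sampling: one uniform draw per slice of
    (0, 1), mapped to Exp(1) by the inverse CDF."""
    rng = random.Random(seed)
    acc = 0.0
    for i in range(n):
        p = (i + rng.random()) / n      # one draw per stratum: LHS in 1D
        s = -math.log(1.0 - p)          # inverse-CDF transform to Exp(1)
        acc += 1.0 / (s + u)
    return math.exp(-u) * acc / n

est = well_function_lhs(1.0)
print("E1(1.0) ≈", est)   # tabulated value: E1(1) = 0.219384...
```

Stratification removes most of the Monte Carlo variance here, which is the same effect that makes the structured sampling designs in the paper converge faster than plain random sampling.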
Convergence and rate analysis of neural networks for sparse approximation.
Balavoine, Aurèle; Romberg, Justin; Rozell, Christopher J
2012-09-01
We present an analysis of the Locally Competitive Algorithm (LCA), which is a Hopfield-style neural network that efficiently solves sparse approximation problems (e.g., approximating a vector from a dictionary using just a few nonzero coefficients). This class of problems plays a significant role in both theories of neural coding and applications in signal processing. However, the LCA lacks analysis of its convergence properties, and previous results on neural networks for nonsmooth optimization do not apply to the specifics of the LCA architecture. We show that the LCA has desirable convergence properties, such as stability and global convergence to the optimum of the objective function when it is unique. Under some mild conditions, the support of the solution is also proven to be reached in finite time. Furthermore, some restrictions on the problem specifics allow us to characterize the convergence rate of the system by showing that the LCA converges exponentially fast with an analytically bounded convergence rate. We support our analysis with several illustrative simulations.
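A minimal sketch of the LCA dynamics for a small overcomplete dictionary, using the standard soft-threshold activation; the dictionary, signal, and parameter values below are illustrative assumptions, not the paper's experiments:

```python
import math

def soft(u, lam):
    """Soft-threshold activation a = T_lam(u)."""
    return math.copysign(max(abs(u) - lam, 0.0), u)

def lca(Phi, s, lam, tau=0.01, dt=0.001, steps=5000):
    """Locally Competitive Algorithm: integrate
    tau * du/dt = b - u - (Phi^T Phi - I) a,  a = soft(u, lam),  b = Phi^T s."""
    m, n = len(Phi), len(Phi[0])
    b = [sum(Phi[i][j] * s[i] for i in range(m)) for j in range(n)]
    G = [[sum(Phi[i][j] * Phi[i][k] for i in range(m)) - (1.0 if j == k else 0.0)
          for k in range(n)] for j in range(n)]
    u = [0.0] * n
    for _ in range(steps):
        a = [soft(x, lam) for x in u]
        for j in range(n):
            inhib = sum(G[j][k] * a[k] for k in range(n))
            u[j] += (dt / tau) * (b[j] - u[j] - inhib)
    return [soft(x, lam) for x in u]

# Overcomplete 2x3 dictionary (unit-norm columns); the signal is atom 0 exactly.
Phi = [[1.0, 0.0, math.cos(0.9)],
       [0.0, 1.0, math.sin(0.9)]]
s = [1.0, 0.0]
a = lca(Phi, s, lam=0.1)
print("sparse code:", a)
```

For this instance the unique l1 optimum is a = (0.9, 0, 0) (the soft-threshold bias shrinks the active coefficient by lam), and the network settles onto it, consistent with the global-convergence result the abstract describes.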
Finite-time containment control of perturbed multi-agent systems based on sliding-mode control
NASA Astrophysics Data System (ADS)
Yu, Di; Ji, Xiang Yang
2018-01-01
Aiming at a faster convergence rate, this paper investigates the finite-time containment control problem for second-order multi-agent systems with norm-bounded non-linear perturbation. When the topology among the followers is strongly connected, a nonsingular fast terminal sliding-mode error is defined, a corresponding discontinuous control protocol is designed and the appropriate value range of the control parameter is obtained by applying finite-time stability analysis, so that the followers converge to and move along the desired trajectories within the convex hull formed by the leaders in finite time. Furthermore, on the basis of the sliding-mode error defined, corresponding distributed continuous control protocols are investigated with a fast exponential reaching law and a double exponential reaching law, so as to make the followers move to small neighbourhoods of their desired locations and keep within the dynamic convex hull formed by the leaders in finite time to achieve practical finite-time containment control. Meanwhile, we develop the faster control scheme according to a comparison of the convergence rates of these two reaching laws. Simulation examples are given to verify the correctness of the theoretical results.
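The effect of the reaching-law choice on convergence speed can be illustrated by integrating two assumed scalar reaching laws: a constant-rate law ds/dt = -eps*sgn(s) and a fast exponential law ds/dt = -k*s - eps*sgn(s). The forms and gains below are generic textbook choices, not the protocols of this paper:

```python
import math

def time_to_reach(s0, ds, dt=1e-3, tol=1e-2, t_max=100.0):
    """Integrate a reaching law ds/dt = ds(s) until |s| < tol; return the time."""
    s, t = s0, 0.0
    while abs(s) >= tol and t < t_max:
        s += dt * ds(s)
        t += dt
    return t

eps, k = 0.1, 2.0
t_const = time_to_reach(5.0, lambda s: -eps * math.copysign(1.0, s))
t_fast  = time_to_reach(5.0, lambda s: -k * s - eps * math.copysign(1.0, s))
print(f"constant rate: {t_const:.2f} s, fast exponential: {t_fast:.2f} s")
```

The proportional term makes the sliding variable decay exponentially while far from the surface, while the sign term still forces finite-time arrival, which is the qualitative trade-off behind the reaching-law comparison in the paper.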
Basis convergence of range-separated density-functional theory.
Franck, Odile; Mussard, Bastien; Luppi, Eleonora; Toulouse, Julien
2015-02-21
Range-separated density-functional theory (DFT) is an alternative approach to Kohn-Sham density-functional theory. The strategy of range-separated density-functional theory consists in separating the Coulomb electron-electron interaction into long-range and short-range components and treating the long-range part by an explicit many-body wave-function method and the short-range part by a density-functional approximation. Among the advantages of using many-body methods for the long-range part of the electron-electron interaction is that they are much less sensitive to the one-electron atomic basis compared to the case of the standard Coulomb interaction. Here, we provide a detailed study of the basis convergence of range-separated density-functional theory. We study the convergence of the partial-wave expansion of the long-range wave function near the electron-electron coalescence. We show that the rate of convergence is exponential with respect to the maximal angular momentum L for the long-range wave function, whereas it is polynomial for the case of the Coulomb interaction. We also study the convergence of the long-range second-order Møller-Plesset correlation energy of four systems (He, Ne, N2, and H2O) with cardinal number X of the Dunning basis sets cc-p(C)VXZ and find that the error in the correlation energy is best fitted by an exponential in X. This leads us to propose a three-point complete-basis-set extrapolation scheme for range-separated density-functional theory based on an exponential formula.
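Assuming the exponential form E(X) = E_CBS + A·exp(-B·X), three consecutive cardinal numbers determine the extrapolated limit in closed form. The sketch below is the closed-form solution of that ansatz applied to synthetic (hypothetical) energies, not necessarily the paper's exact extrapolation scheme:

```python
import math

def cbs_extrapolate(e2, e3, e4):
    """Three-point exponential extrapolation E(X) = E_CBS + A*exp(-B*X):
    with three consecutive cardinal numbers the geometric-decay ansatz gives
    E_CBS = (E2*E4 - E3^2) / (E2 + E4 - 2*E3)."""
    return (e2 * e4 - e3 * e3) / (e2 + e4 - 2.0 * e3)

# Synthetic check with a known limit (hypothetical correlation energies, hartree).
E_inf, A, B = -0.3150, 0.0900, 1.1
energies = {X: E_inf + A * math.exp(-B * X) for X in (2, 3, 4)}
print("extrapolated:", cbs_extrapolate(energies[2], energies[3], energies[4]))
```

Because consecutive errors form a geometric sequence under the exponential ansatz, the three-point formula eliminates both A and B and recovers E_CBS exactly on synthetic data.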
Zhang, Ling
2017-01-01
The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method to semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation solution converges to the analytic solution with the strong order [Formula: see text] to SLSDDEs. On the one hand, the classical stability theorem to SLSDDEs is given by the Lyapunov functions. However, in this paper we study the exponential stability in mean square of the exact solution to SLSDDEs by using the definition of logarithmic norm. On the other hand, the implicit Euler scheme to SLSDDEs is known to be exponentially stable in mean square for any step size. However, in this article we propose an explicit method to show that the exponential Euler method to SLSDDEs is proved to share the same stability for any step size by the property of logarithmic norm.
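For a scalar semi-linear SDDE, one common form of the exponential Euler scheme integrates the linear drift exactly over each step via exp(a*dt) while freezing the delayed and noise terms; the precise scheme analysed in the paper may differ, so treat this as a sketch under that assumption:

```python
import math, random

def exp_euler_sdde(a, b, sigma, tau, T, dt, x0=1.0, seed=1):
    """Exponential Euler sketch for the scalar semi-linear SDDE
        dX = (a X(t) + b X(t - tau)) dt + sigma X(t - tau) dW.
    The linear drift a*X is propagated exactly by exp(a*dt); the delayed and
    noise contributions are held constant over the step."""
    rng = random.Random(seed)
    m = round(tau / dt)
    xs = [x0] * (m + 1)            # constant initial history on [-tau, 0]
    for _ in range(round(T / dt)):
        x, x_del = xs[-1], xs[-1 - m]
        dW = rng.gauss(0.0, math.sqrt(dt))
        xs.append(math.exp(a * dt) * (x + dt * b * x_del + sigma * x_del * dW))
    return xs

# Deterministic check (sigma = 0) in an exponentially stable regime, |b| < -a.
path = exp_euler_sdde(a=-2.0, b=0.5, sigma=0.0, tau=1.0, T=20.0, dt=0.01)
print("X(20) ≈", path[-1])
```

With the noise switched off, the scheme reduces to a delay equation whose solution decays exponentially, which is the kind of mean-square stability behaviour the abstract claims the explicit exponential Euler method preserves for any step size.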
Exponential convergence through linear finite element discretization of stratified subdomains
NASA Astrophysics Data System (ADS)
Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali
2016-10-01
Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.
NASA Astrophysics Data System (ADS)
Fellner, Klemens; Tang, Bao Quoc
2018-06-01
The convergence to equilibrium for renormalised solutions to nonlinear reaction-diffusion systems is studied. The considered reaction-diffusion systems arise from chemical reaction networks with mass action kinetics and satisfy the complex balanced condition. By applying the so-called entropy method, we show that if the system does not have boundary equilibria, i.e. equilibrium states lying on the boundary of R_+^N, then any renormalised solution converges exponentially to the complex balanced equilibrium with a rate which can be computed explicitly up to a finite-dimensional inequality. This inequality is proven via a contradiction argument and is thus not explicit. An explicit method of proof, however, is provided for a specific application modelling a reversible enzyme reaction, by exploiting the specific structure of the conservation laws. Our approach is also useful for studying the trend to equilibrium for systems possessing boundary equilibria. More precisely, to show the convergence to equilibrium for systems with boundary equilibria, we establish a sufficient condition in terms of a modified finite-dimensional inequality along trajectories of the system. By assuming this condition, which roughly means that the system produces too much entropy to stay close to a boundary equilibrium for infinite time, the entropy method shows exponential convergence to equilibrium for renormalised solutions to complex balanced systems with boundary equilibria.
NASA Astrophysics Data System (ADS)
Cao, Jinde; Song, Qiankun
2006-07-01
In this paper, the exponential stability problem is investigated for a class of Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays. By using analytical methods, inequality techniques and the properties of an M-matrix, several novel sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point are derived. Moreover, the exponential convergence rate is estimated. The obtained results are less restrictive than those given in the earlier literature, and the requirements of boundedness and differentiability of the activation functions and differentiability of the time-varying delays are removed. Two examples with their simulations are given to show the effectiveness of the obtained results.
NASA Astrophysics Data System (ADS)
Li, Kelin
2010-02-01
In this article, a class of impulsive bidirectional associative memory (BAM) fuzzy cellular neural networks (FCNNs) with time-varying delays is formulated and investigated. By employing delay differential inequality and M-matrix theory, some sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point for impulsive BAM FCNNs with time-varying delays are obtained. In particular, a precise estimate of the exponential convergence rate is also provided, which depends on the system parameters and the impulsive perturbation intensity. It is believed that these results are significant and useful for the design and applications of BAM FCNNs. An example is given to show the effectiveness of the results obtained here.
Necessary conditions for weighted mean convergence of Lagrange interpolation for exponential weights
NASA Astrophysics Data System (ADS)
Damelin, S. B.; Jung, H. S.; Kwon, K. H.
2001-07-01
Given a continuous real-valued function f which vanishes outside a fixed finite interval, we establish necessary conditions for weighted mean convergence of Lagrange interpolation for a general class of even weights w which are of exponential decay on the real line or at the endpoints of (-1,1).
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang; Solomonoff, Alex; Vandeven, Herve
1992-01-01
It is well known that the Fourier series of an analytic and periodic function, truncated after 2N+1 terms, converges exponentially with N, even in the maximum norm. It is also known that if the function is not periodic, the rate of convergence deteriorates; in particular, there is no convergence in the maximum norm, although the function is still analytic. This is known as the Gibbs phenomenon. Here, we show that the first 2N+1 Fourier coefficients contain enough information about the function, so that an exponentially convergent approximation (in the maximum norm) can be constructed.
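The exponential convergence in the smooth periodic case is easy to observe numerically: truncating the Fourier series of an analytic, 2π-periodic function shows the maximum error collapsing as N grows. A self-contained sketch (trapezoidal-rule coefficients, which are themselves spectrally accurate for periodic integrands; the test function is an arbitrary smooth choice):

```python
import math

def fourier_partial_sum_error(f, N, M=256, n_test=100):
    """Max error of the (2N+1)-term Fourier partial sum of a smooth
    2*pi-periodic function, with coefficients from the trapezoidal rule."""
    xs = [2.0 * math.pi * j / M for j in range(M)]
    fv = [f(x) for x in xs]
    coeffs = {}
    for k in range(-N, N + 1):
        re = sum(v * math.cos(k * x) for v, x in zip(fv, xs)) / M
        im = -sum(v * math.sin(k * x) for v, x in zip(fv, xs)) / M
        coeffs[k] = (re, im)
    err = 0.0
    for t in range(n_test):
        x = 2.0 * math.pi * t / n_test
        s = sum(re * math.cos(k * x) - im * math.sin(k * x)
                for k, (re, im) in coeffs.items())
        err = max(err, abs(f(x) - s))
    return err

f = lambda x: math.exp(math.cos(x))   # analytic and 2*pi-periodic
for N in (2, 4, 8):
    print(f"N = {N}: max error = {fourier_partial_sum_error(f, N):.2e}")
```

Doubling N multiplies the error by an ever-smaller factor, the signature of exponential (spectral) convergence that the Gibbs phenomenon destroys for non-periodic functions.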
Exponentially convergent state estimation for delayed switched recurrent neural networks.
Ahn, Choon Ki
2011-11-01
This paper deals with the delay-dependent exponentially convergent state estimation problem for delayed switched neural networks. A set of delay-dependent criteria is derived under which the resulting estimation error system is exponentially stable. It is shown that the gain matrix of the proposed state estimator is characterised in terms of the solution to a set of linear matrix inequalities (LMIs), which can be checked readily by using some standard numerical packages. An illustrative example is given to demonstrate the effectiveness of the proposed state estimator.
Pointwise convergence of derivatives of Lagrange interpolation polynomials for exponential weights
NASA Astrophysics Data System (ADS)
Damelin, S. B.; Jung, H. S.
2005-01-01
For a general class of exponential weights on the line and on (-1,1), we study pointwise convergence of the derivatives of Lagrange interpolation. Our weights include even weights of smooth polynomial decay near ±∞ (Freud weights), even weights of faster than smooth polynomial decay near ±∞ (Erdős weights) and even weights which vanish strongly near ±1, for example Pollaczek-type weights.
Exponential stability preservation in semi-discretisations of BAM networks with nonlinear impulses
NASA Astrophysics Data System (ADS)
Mohamad, Sannay; Gopalsamy, K.
2009-01-01
This paper demonstrates the reliability of a discrete-time analogue in preserving the exponential convergence of a bidirectional associative memory (BAM) network that is subject to nonlinear impulses. The analogue derived from a semi-discretisation technique with the value of the time-step fixed is treated as a discrete-time dynamical system while its exponential convergence towards an equilibrium state is studied. Thereby, a family of sufficiency conditions governing the network parameters and the impulse magnitude and frequency is obtained for the convergence. As special cases, one can obtain from our results, those corresponding to the non-impulsive discrete-time BAM networks and also those corresponding to continuous-time (impulsive and non-impulsive) systems. A relation between the Lyapunov exponent of the non-impulsive system and that of the impulsive system involving the size of the impulses and the inter-impulse intervals is obtained.
Locality of the Thomas-Fermi-von Weizsäcker Equations
NASA Astrophysics Data System (ADS)
Nazar, F. Q.; Ortner, C.
2017-06-01
We establish a pointwise stability estimate for the Thomas-Fermi-von Weizsäcker (TFW) model, which demonstrates that a local perturbation of a nuclear arrangement results also in a local response in the electron density and electrostatic potential. The proof adapts the arguments for existence and uniqueness of solutions to the TFW equations in the thermodynamic limit by Catto et al. (The mathematical theory of thermodynamic limits: Thomas-Fermi type models. Oxford mathematical monographs. The Clarendon Press, Oxford University Press, New York, 1998). To demonstrate the utility of this combined locality and stability result we derive several consequences, including an exponential convergence rate for the thermodynamic limit, partition of total energy into exponentially localised site energies (and consequently, exponential locality of forces), and generalised and strengthened results on the charge neutrality of local defects.
Optimal sparse approximation with integrate and fire neurons.
Shapero, Samuel; Zhu, Mengchen; Hasler, Jennifer; Rozell, Christopher
2014-08-01
Sparse approximation is a hypothesized coding strategy where a population of sensory neurons (e.g. V1) encodes a stimulus using as few active neurons as possible. We present the Spiking LCA (locally competitive algorithm), a rate-encoded Spiking Neural Network (SNN) of integrate and fire neurons that calculates sparse approximations. The Spiking LCA is designed to be equivalent to the nonspiking LCA, an analog dynamical system that converges exponentially to an ℓ1-norm sparse approximation. We show that the firing rate of the Spiking LCA converges on the same solution as the analog LCA, with an error inversely proportional to the sampling time. We simulate in NEURON a network of 128 neuron pairs that encode 8 × 8 pixel image patches, demonstrating that the network converges to nearly optimal encodings within 20 ms of biological time. We also show that when using more biophysically realistic parameters in the neurons, the gain function encourages additional ℓ0-norm sparsity in the encoding, relative both to ideal neurons and digital solvers.
Deformed exponentials and portfolio selection
NASA Astrophysics Data System (ADS)
Rodrigues, Ana Flávia P.; Guerreiro, Igor M.; Cavalcante, Charles Casimiro
In this paper, we present a method for portfolio selection based on deformed exponentials, generalising methods that assume Gaussianity of the returns, such as the Markowitz model. The proposed method generalizes the idea of optimizing mean-variance and mean-divergence models and allows more accurate behaviour in situations where heavy-tailed distributions are needed to describe the returns at a given time instant, such as those observed in economic crises. Numerical results show the proposed method outperforms the Markowitz portfolio for the cumulated returns, with a good convergence rate of the weights for the assets, which are found by means of a natural gradient algorithm.
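The deformed exponential usually meant in this setting is the Tsallis q-exponential, which recovers e^x as q → 1 and produces power-law (heavy) tails for q > 1; whether this is the paper's exact deformation is an assumption. A minimal sketch:

```python
import math

def exp_q(x, q):
    """Tsallis q-deformed exponential: exp_q(x) = [1 + (1-q)x]^(1/(1-q)) where
    1 + (1-q)x > 0, and 0 otherwise; recovers exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

# q > 1 decays like a power law, q = 1 like the usual exponential: heavier tails.
for x in (0.0, 2.0, 5.0):
    print(x, exp_q(-x, 1.0), exp_q(-x, 1.5))
```

Replacing the Gaussian (exponential-family) kernel by its q-deformed counterpart is what lets return models of this kind assign realistic probability to crisis-sized moves.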
NASA Astrophysics Data System (ADS)
Costa, João L.; Girão, Pedro M.; Natário, José; Silva, Jorge Drumond
2018-03-01
In this paper we study the spherically symmetric characteristic initial data problem for the Einstein-Maxwell-scalar field system with a positive cosmological constant in the interior of a black hole, assuming an exponential Price law along the event horizon. More precisely, we construct open sets of characteristic data which, on the outgoing initial null hypersurface (taken to be the event horizon), converge exponentially to a reference Reissner-Nordström black hole at infinity. We prove the stability of the radius function at the Cauchy horizon, and show that, depending on the decay rate of the initial data, mass inflation may or may not occur. In the latter case, we find that the solution can be extended across the Cauchy horizon with continuous metric and Christoffel symbols in {L^2_{loc}}, thus violating the Christodoulou-Chruściel version of strong cosmic censorship.
NASA Astrophysics Data System (ADS)
Rana, B. M. Jewel; Ahmed, Rubel; Ahmmed, S. F.
2017-06-01
An analysis is carried out to investigate the effects of variable viscosity, thermal radiation, absorption of radiation and cross diffusion past an inclined exponentially accelerated plate under the influence of variable heat and mass transfer. A set of suitable transformations has been used to obtain the non-dimensional coupled governing equations. An explicit finite difference technique has been used to obtain numerical solutions of the present problem. Stability and convergence of the finite difference scheme have been established for this problem. Compaq Visual Fortran 6.6a has been used to calculate the numerical results. The effects of various physical parameters on the fluid velocity, temperature, concentration, coefficient of skin friction, rate of heat transfer, rate of mass transfer, streamlines and isotherms on the flow field have been presented graphically and discussed in detail.
ERIC Educational Resources Information Center
Aleman, Enrique, Jr.; Aleman, Sonya M.
2010-01-01
The interest-convergence principle proposes that change benefitting people and communities of color only occurs when those interests also benefit Whites. As newly transplanted Chicano/a residents of a state facing exponential growth of its Latino immigrant population, we have attempted to counter the efforts criminalizing members of our Latino/a…
NASA Astrophysics Data System (ADS)
Ma, Shuo; Kang, Yanmei
2018-04-01
In this paper, the exponential synchronization of stochastic neutral-type neural networks with time-varying delay and Lévy noise under a non-Lipschitz condition is investigated for the first time. Using the general Itô formula and the nonnegative semi-martingale convergence theorem, we derive general sufficient conditions for two kinds of exponential synchronization of the drive system and the response system under adaptive control. Numerical examples are presented to verify the effectiveness of the proposed criteria.
Multidimensional Extension of the Generalized Chowla-Selberg Formula
NASA Astrophysics Data System (ADS)
Elizalde, E.
After recalling the precise existence conditions of the zeta function of a pseudodifferential operator, and the concept of reflection formula, an exponentially convergent expression for the analytic continuation of a multidimensional inhomogeneous Epstein-type zeta function of the general form
NASA Astrophysics Data System (ADS)
Douthett, Elwood (Jack) Moser, Jr.
1999-10-01
Cyclic configurations of white and black sites, together with convex (concave) functions used to weight path length, are investigated. The weights of the white set and black set are the sums of the weights of the paths connecting the white sites and black sites, respectively, and the weight between sets is the sum of the weights of the paths that connect sites opposite in color. It is shown that when the weights of all configurations of a fixed number of white and a fixed number of black sites are compared, minimum (maximum) weight of the white set, minimum (maximum) weight of the black set, and maximum (minimum) weight between sets occur simultaneously. Such configurations are called maximally even configurations. Similarly, the configurations whose weights are the opposite extremes occur simultaneously and are called minimally even configurations. Algorithms that generate these configurations are constructed and applied to the one-dimensional antiferromagnetic spin-1/2 Ising model. Next the goodness of continued fractions as applied to musical intervals (frequency ratios and their base 2 logarithms) is explored. It is shown that, for the intermediate convergents between two consecutive principal convergents of an irrational number, the first half of the intermediate convergents are poorer approximations than the preceding principal convergent while the second half are better approximations; the goodness of a middle intermediate convergent can only be determined by calculation. These convergents are used to determine which equal-tempered systems have intervals that most closely approximate the musical fifth (p_n/q_n ≈ log2(3/2)). The goodness of exponentiated convergents (2^(p_n/q_n) ≈ 3/2) is also investigated.
It is shown that, with the exception of a middle convergent, the goodness of the exponential form agrees with that of its logarithmic counterpart. As in the case of the logarithmic form, the goodness of a middle intermediate convergent in the exponential form can only be determined by calculation. A desirability function is constructed that simultaneously measures how well multiple intervals fit in a given equal-tempered system. These measurements are made for octave (base 2) and tritave (base 3) systems. Combinatorial properties important to music modulation are considered. These considerations lead to the construction of maximally even scales as partitions of an equal-tempered system.
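The role of these convergents can be illustrated numerically. The following sketch (not code from the dissertation) computes the principal convergents of log2(3/2) by the standard continued-fraction recurrence; the denominators that appear, such as 12 and 53, are exactly the equal-tempered systems whose fifths best approximate the ratio 3/2:

```python
import math

def principal_convergents(x, n_terms=8):
    """Principal convergents p/q of the continued fraction expansion of x."""
    p_prev2, q_prev2 = 0, 1   # p_{-2} / q_{-2}
    p_prev, q_prev = 1, 0     # p_{-1} / q_{-1}
    out = []
    for _ in range(n_terms):
        a = math.floor(x)
        p, q = a * p_prev + p_prev2, a * q_prev + q_prev2
        out.append((p, q))
        p_prev2, q_prev2, p_prev, q_prev = p_prev, q_prev, p, q
        frac = x - a
        if frac < 1e-12:
            break
        x = 1.0 / frac
    return out

# The convergents of log2(3/2) include 7/12 and 31/53: in 12-tone and 53-tone
# equal temperament the fifths 2**(7/12) and 2**(31/53) closely approximate 3/2.
convs = principal_convergents(math.log2(1.5))
```

The intermediate convergents discussed above arise from the same recurrence when the last partial quotient a is replaced by smaller positive integers.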
NASA Astrophysics Data System (ADS)
Balint, Stefan; Balint, Agneta M.
2017-01-01
Different types of stabilities (global, local) and instabilities (global absolute, local convective) of the constant spatially developing 1-D gas flow are analyzed in the phase space of continuously differentiable functions, endowed with the usual algebraic operations and the topology generated by the uniform convergence on the real axis. For this purpose the Euler equations linearized at the constant flow are used. The Lyapunov stability analysis was presented in [1] and this paper is a continuation of [1].
Ryde, Ulf
2017-11-14
Combined quantum mechanical and molecular mechanical (QM/MM) calculations are a popular approach to study enzymatic reactions. They are often based on a set of minimized structures obtained from snapshots of a molecular dynamics simulation to include some dynamics of the enzyme. It has been much discussed how the individual energies should be combined to obtain a final estimate of the energy, but the current consensus seems to be to use an exponential average. Then, the question is how many snapshots are needed to reach a reliable estimate of the energy. In this paper, I show that the question can easily be answered if it is assumed that the energies follow a Gaussian distribution. Then, the outcome can be simulated based on a single parameter, σ, the standard deviation of the QM/MM energies from the various snapshots, and the number of required snapshots can be estimated once the desired accuracy and confidence of the result have been specified. Results for various parameters are presented, and it is shown that many more snapshots are required than is normally assumed. The number can be reduced by employing a cumulant approximation to second order. It is shown that most convergence criteria work poorly, owing to the very bad conditioning of the exponential average when σ is large (more than ∼7 kJ/mol), because the energies that contribute most to the exponential average have a very low probability. On the other hand, σ serves as an excellent convergence criterion.
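The setup is easy to reproduce under the paper's Gaussian assumption. In the sketch below, the temperature factor kT and the value of σ are illustrative choices (not taken from the paper); for Gaussian energies with mean zero the exact exponential average is -σ²/(2kT), which the simulated estimates can be compared against:

```python
import math
import random
import statistics

kT = 2.479      # kJ/mol at ~298 K (illustrative)
sigma = 10.0    # assumed std of the QM/MM energies, above the ~7 kJ/mol threshold

def exp_average(energies):
    """Exponential (Zwanzig-type) average: -kT * ln <exp(-E/kT)>."""
    m = min(energies)  # shift energies for numerical stability
    s = sum(math.exp(-(e - m) / kT) for e in energies)
    return m - kT * math.log(s / len(energies))

def cumulant2(energies):
    """Second-order cumulant approximation: <E> - Var(E) / (2 kT)."""
    return statistics.fmean(energies) - statistics.pvariance(energies) / (2 * kT)

# Exact result for Gaussian energies with mean 0.
exact = -sigma**2 / (2 * kT)
random.seed(1)
estimates = {}
for n in (10, 100, 10000):
    e = [random.gauss(0.0, sigma) for _ in range(n)]
    estimates[n] = (exp_average(e), cumulant2(e))
# With few snapshots the exponential average is badly biased toward zero,
# because the rare low-energy tail that dominates it is seldom sampled;
# the cumulant estimate converges much faster.
```

Re-running with different σ shows the conditioning problem: the larger σ/kT, the worse the small-sample bias of the exponential average, while the second-order cumulant estimate remains well behaved.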
Theoretical analysis of exponential transversal method of lines for the diffusion equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salazar, A.; Raydan, M.; Campo, A.
1996-12-31
Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. This new method is inspired by the Method of Lines (MOL), with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method presents a very reduced truncation error in time, and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.
Convergence of neural networks for programming problems via a nonsmooth Lojasiewicz inequality.
Forti, Mauro; Nistri, Paolo; Quincampoix, Marc
2006-11-01
This paper considers a class of neural networks (NNs) for solving linear programming (LP) problems, convex quadratic programming (QP) problems, and nonconvex QP problems where an indefinite quadratic objective function is subject to a set of affine constraints. The NNs are characterized by constraint neurons modeled by ideal diodes with vertical segments in their characteristic, which enable the implementation of an exact penalty method. A new method is exploited to address convergence of trajectories, based on a nonsmooth Lojasiewicz inequality for the generalized gradient vector field describing the NN dynamics. The method makes it possible to prove that each forward trajectory of the NN has finite length, and as a consequence converges toward a singleton. Furthermore, by means of a quantitative evaluation of the Lojasiewicz exponent at the equilibrium points, the following results on the convergence rate of trajectories are established: (1) for nonconvex QP problems, each trajectory is either exponentially convergent, or convergent in finite time, toward a singleton belonging to the set of constrained critical points; (2) for convex QP problems, the same result as in (1) holds; moreover, the singleton belongs to the set of global minimizers; and (3) for LP problems, each trajectory converges in finite time to a singleton belonging to the set of global minimizers. These results, which improve previous results obtained via the Lyapunov approach, are true independently of the nature of the set of equilibrium points, and in particular they hold even when the NN possesses infinitely many nonisolated equilibrium points.
NASA Technical Reports Server (NTRS)
Fadel, G. M.
1991-01-01
The two point exponential approximation method was introduced by Fadel et al. (Fadel, 1990) and tested on structural optimization problems with stress and displacement constraints. Results reported in earlier papers were promising, and the method, which consists of correcting Taylor series approximations using previous design history, is tested in this paper on optimization problems with frequency constraints. The aim of the research is to verify the robustness and speed of convergence of the two point exponential approximation method when highly non-linear constraints are used.
Tang, Ze; Park, Ju H; Feng, Jianwen
2018-04-01
This paper is concerned with the exponential synchronization issue of nonidentically coupled neural networks with time-varying delay. Due to the parameter mismatch phenomena existing in neural networks, the problem of quasi-synchronization is discussed by applying impulsive control strategies. Based on the definition of the average impulsive interval and the extended comparison principle for impulsive systems, some criteria for achieving quasi-synchronization of neural networks are derived. More extensive ranges of impulsive effects are discussed, so that an impulse can play either an effective or an adverse role in the final network synchronization. In addition, according to the extended formula for the variation of parameters with time-varying delay, precise exponential convergence rates and quasi-synchronization errors are obtained for the different types of impulsive effects. Finally, some numerical simulations with different types of impulsive effects are presented to illustrate the effectiveness of the theoretical analysis.
NASA Astrophysics Data System (ADS)
Wan, Li; Zhou, Qinghua
2007-10-01
The stability property of stochastic hybrid bidirectional associative memory (BAM) neural networks with discrete delays is considered. Without assuming the symmetry of synaptic connection weights or the monotonicity and differentiability of activation functions, delay-independent sufficient conditions guaranteeing the exponential stability of the equilibrium solution for such networks are given by using the nonnegative semimartingale convergence theorem.
NASA Astrophysics Data System (ADS)
Lee, Bum Han; Lee, Sung Keun
2017-10-01
The effect of the structural heterogeneity of porous networks on the water distribution in porous media, initially saturated with an immiscible fluid and then subjected to increasing durations of water injection, remains one of the important problems in hydrology. A relationship among the convergence rates (i.e., the rate of fluid saturation with varying injection time) and the macroscopic properties and structural parameters of porous media has been anticipated. Here, we used nuclear magnetic resonance (NMR) micro-imaging to obtain images (down to ∼50 μm resolution) of the distribution of water injected for varying durations into porous networks that were initially saturated with silicone oil. We then established the relationships among the convergence rates, structural parameters, and transport properties of porous networks. The volume fraction of the water phase increases as the water injection duration increases. The 3D images of the water distributions for the silica gel samples are similar to those of the glass bead samples. The changes in water saturation (and the accompanying removal of silicone oil) and the variations in the volume fraction, specific surface area, and cube-counting fractal dimension of the water phase fit well with the single-exponential recovery function f(t) = a[1 - exp(-λt)]. The asymptotic values a (i.e., the saturated values) of the volume fraction, specific surface area, and cube-counting fractal dimension of the glass bead samples were greater than those of the silica gel samples, primarily because of the intrinsic differences in the porous networks and the local distribution of pore size and connectivity. The convergence rates of all of the properties are inversely proportional to the entropy length and permeability.
Despite limitations of the current study, such as insufficient resolution and uncertainty in the estimated parameters due to sparsely selected short injection times, the observed trends constitute the first analyses of the cube-counting fractal dimension (and other structural properties) and convergence rates in porous networks consisting of two fluid components. These results indicate that the convergence rates correlate with the geometric factors that characterize the porous networks and with their transport properties.
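As a sketch of the fitting step, the rate λ can be recovered from saturation data by a log-linear fit once the asymptote a is known. The asymptote, rate, and sample times below are hypothetical illustrative values, not data from the study:

```python
import math

def recovery(t, a, lam):
    """Single-exponential recovery: f(t) = a * (1 - exp(-lam * t))."""
    return a * (1.0 - math.exp(-lam * t))

# Hypothetical saturation data (asymptote a_true and rate lam_true are illustrative).
a_true, lam_true = 0.8, 0.05            # volume fraction; rate in 1/min
times = [10.0, 20.0, 40.0, 80.0, 160.0]
data = [recovery(t, a_true, lam_true) for t in times]

# Given the asymptote a, the convergence rate follows from a log-linear fit,
# since ln(1 - f(t)/a) = -lam * t.
ys = [math.log(1.0 - f / a_true) for f in data]
n = len(times)
sx, sy = sum(times), sum(ys)
sxx = sum(t * t for t in times)
sxy = sum(t * y for t, y in zip(times, ys))
lam_fit = -(n * sxy - sx * sy) / (n * sxx - sx * sx)   # minus the regression slope
```

With noisy measurements one would instead fit a and λ jointly by nonlinear least squares, but the log-linear form shows where the convergence rate comes from.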
Sailamul, Pachaya; Jang, Jaeson; Paik, Se-Bum
2017-12-01
Correlated neural activities such as synchronizations can significantly alter the characteristics of spike transfer between neural layers. However, it is not clear how this synchronization-dependent spike transfer is affected by the structure of convergent feedforward wiring. To address this question, we implemented computer simulations of model neural networks: a source and a target layer connected with different types of convergent wiring rules. In the Gaussian-Gaussian (GG) model, both the connection probability and the strength are given as Gaussian distributions as a function of spatial distance. In the Uniform-Constant (UC) and Uniform-Exponential (UE) models, the connection probability density is a uniform constant within a certain range, but the connection strength is set as a constant value or an exponentially decaying function, respectively. Then we examined how the spike transfer function is modulated under these conditions, while static or synchronized input patterns were introduced to simulate different levels of feedforward spike synchronization. We observed that the synchronization-dependent modulation of the transfer function appeared noticeably different for each convergence condition. The modulation of the spike transfer function was largest in the UC model and smallest in the UE model. Our analysis showed that this difference was induced by the different spike weight distributions that were generated from convergent synapses in each model. Our results suggest that the structure of the feedforward convergence is a crucial factor for correlation-dependent spike control, and thus must be considered to understand the mechanism of information transfer in the brain.
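The three wiring rules can be written down schematically. The functional forms below follow the descriptions above; the width and cutoff parameters are assumptions for illustration, not values from the paper:

```python
import math

def connection(dist, model, sigma=1.0, cutoff=2.0):
    """Return (connection probability, connection strength) at distance dist.

    GG: Gaussian probability and Gaussian strength;
    UC: uniform probability within a range, constant strength;
    UE: uniform probability within a range, exponentially decaying strength.
    """
    gauss = math.exp(-dist**2 / (2.0 * sigma**2))
    inside = 1.0 if dist <= cutoff else 0.0
    if model == "GG":
        return gauss, gauss
    if model == "UC":
        return inside, 1.0
    if model == "UE":
        return inside, math.exp(-dist / sigma)
    raise ValueError(model)
```

Sampling a synapse would then mean drawing a Bernoulli variable with the returned probability and, on success, assigning the returned weight; the resulting weight distributions differ across the three models, which is the effect the study analyzes.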
Exponentially Stabilizing Robot Control Laws
NASA Technical Reports Server (NTRS)
Wen, John T.; Bayard, David S.
1990-01-01
New class of exponentially stabilizing laws for joint-level control of robotic manipulators introduced. In case of set-point control, approach offers simplicity of proportional/derivative control architecture. In case of tracking control, approach provides several important alternatives to computed-torque method, as far as computational requirements and convergence. New control laws modified in simple fashion to obtain asymptotically stable adaptive control, when robot model and/or payload mass properties unknown.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.
A viscoplastic shear-zone model for deep (15-50 km) slow-slip events at plate convergent margins
NASA Astrophysics Data System (ADS)
Yin, An; Xie, Zhoumin; Meng, Lingsen
2018-06-01
A key issue in understanding the physics of deep (15-50 km) slow-slip events (D-SSE) at plate convergent margins is how their initially unstable motion becomes stabilized. Here we address this issue by quantifying a rate-strengthening mechanism using a viscoplastic shear-zone model inspired by recent advances in field observations and laboratory experiments. The well-established segmentation of slip modes in the downdip direction of a subduction shear zone allows discretization of an interseismic forearc system into the (1) frontal segment bounded by an interseismically locked megathrust, (2) middle segment bounded by an episodically locked and unlocked viscoplastic shear zone, and (3) interior segment that slips freely. The three segments are assumed to be linked laterally by two springs that tighten with time, and the increasing elastic stress due to spring tightening eventually leads to plastic failure and initial viscous shear. This simplification leads to seven key model parameters that dictate a wide range of mechanical behaviors of an idealized convergent margin. Specifically, the viscoplastic rheology requires the initially unstable sliding to be terminated nearly instantaneously at a characteristic velocity, which is followed by stable sliding (i.e., slow slip). The characteristic velocity, which is on the order of <10^-7 m/s for the convergent margins examined in this study, depends on the (1) effective coefficient of friction, (2) thickness, (3) depth, and (4) viscosity of the viscoplastic shear zone. As viscosity decreases exponentially with temperature, our model predicts faster slow-slip rates, shorter slow-slip durations, more frequent slow-slip occurrences, and larger slow-slip magnitudes at warmer convergent margins.
Markov chains at the interface of combinatorics, computing, and statistical physics
NASA Astrophysics Data System (ADS)
Streib, Amanda Pascoe
The fields of statistical physics, discrete probability, combinatorics, and theoretical computer science have converged around efforts to understand random structures and algorithms. Recent activity in the interface of these fields has enabled tremendous breakthroughs in each domain and has supplied a new set of techniques for researchers approaching related problems. This thesis makes progress on several problems in this interface whose solutions all build on insights from multiple disciplinary perspectives. First, we consider a dynamic growth process arising in the context of DNA-based self-assembly. The assembly process can be modeled as a simple Markov chain. We prove that the chain is rapidly mixing for large enough bias in regions of Z^d. The proof uses a geometric distance function and a variant of path coupling in order to handle distances that can be exponentially large. We also provide the first results in the case of fluctuating bias, where the bias can vary depending on the location of the tile, which arises in the nanotechnology application. Moreover, we use intuition from statistical physics to construct a choice of the biases for which the Markov chain M_mon requires exponential time to converge. Second, we consider a related problem regarding the convergence rate of biased permutations that arises in the context of self-organizing lists. The Markov chain M_nn in this case is a nearest-neighbor chain that allows adjacent transpositions, and the rate of these exchanges is governed by various input parameters. It was conjectured that the chain is always rapidly mixing when the inversion probabilities are positively biased, i.e., we put nearest-neighbor pair x < y in order with bias 1/2 ≤ p_xy ≤ 1 and out of order with bias 1 - p_xy. The Markov chain M_mon was known to have connections to a simplified version of this biased card-shuffling.
We provide new connections between M_nn and M_mon by using simple combinatorial bijections, and we prove that M_nn is always rapidly mixing for two general classes of positively biased {p_xy}. More significantly, we also prove that the general conjecture is false by exhibiting values for the p_xy, with 1/2 ≤ p_xy ≤ 1 for all x < y, for which the transposition chain requires exponential time to converge. Finally, we consider a model of colloids, which are binary mixtures of molecules with one type of molecule suspended in another. It is believed that at low density typical configurations will be well-mixed throughout, while at high density they will separate into clusters. This clustering has proved elusive to verify, since all local sampling algorithms are known to be inefficient at high density, and in fact a new nonlocal algorithm was recently shown to require exponential time in some cases. We characterize the high and low density phases for a general family of discrete interfering binary mixtures by showing that they exhibit a "clustering property" at high density and not at low density. The clustering property states that there will be a region that has very high area, very small perimeter, and high density of one type of molecule. Special cases of interfering binary mixtures include the Ising model at fixed magnetization and independent sets.
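A minimal simulation illustrates the chain. The sketch below runs a heat-bath variant of M_nn with a uniform bias p (a special case of the general {p_xy}); its stationary distribution weights a permutation by ((1 - p) / p) raised to its number of inversions, so for p > 1/2 the sorted order is the most likely state:

```python
import random

def step(perm, p, rng):
    """One heat-bath move of a nearest-neighbor transposition chain:
    pick an adjacent pair, put it in increasing order with probability p,
    out of order with probability 1 - p."""
    i = rng.randrange(len(perm) - 1)
    lo, hi = min(perm[i], perm[i + 1]), max(perm[i], perm[i + 1])
    if rng.random() < p:
        perm[i], perm[i + 1] = lo, hi
    else:
        perm[i], perm[i + 1] = hi, lo
    return perm

rng = random.Random(0)
p = 0.8
perm = [3, 1, 2]
counts = {}
for _ in range(200000):
    perm = step(perm, p, rng)
    key = tuple(perm)
    counts[key] = counts.get(key, 0) + 1
# Detailed balance gives pi(sigma) proportional to ((1 - p)/p)**inversions(sigma),
# so the identity permutation dominates the empirical distribution.
```

For n = 3 this tiny chain mixes almost instantly; the thesis's results concern how the mixing time scales with n for general {p_xy}, where the answer can be exponential.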
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
2016-10-21
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1993-01-01
The investigation of overcoming Gibbs phenomenon was continued, i.e., obtaining exponential accuracy at all points including at the discontinuities themselves, from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. It was shown that if we are given the first N expansion coefficients of an L(sub 2) function f(x) in terms of either the trigonometrical polynomials or the Chebyshev or Legendre polynomials, an exponentially convergent approximation to the point values of f(x) in any sub-interval in which it is analytic can be constructed.
Existence and exponential stability of traveling waves for delayed reaction-diffusion systems
NASA Astrophysics Data System (ADS)
Hsu, Cheng-Hsiung; Yang, Tzi-Sheng; Yu, Zhixian
2018-03-01
The purpose of this work is to investigate the existence and exponential stability of traveling wave solutions for general delayed multi-component reaction-diffusion systems. Following the monotone iteration scheme via an explicit construction of a pair of upper and lower solutions, we first obtain the existence of monostable traveling wave solutions connecting two different equilibria. Then, applying the techniques of weighted energy method and comparison principle, we show that all solutions of the Cauchy problem for the considered systems converge exponentially to traveling wave solutions provided that the initial perturbations around the traveling wave fronts belong to a suitable weighted Sobolev space.
NASA Technical Reports Server (NTRS)
Ito, K.
1984-01-01
The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation, the uniform exponential stability of the solution semigroup is preserved under approximation. This is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.
A switched systems approach to image-based estimation
NASA Astrophysics Data System (ADS)
Parikh, Anup
With the advent of technological improvements in imaging systems and computational resources, as well as the development of image-based reconstruction techniques, it is necessary to understand algorithm performance when subject to real world conditions. Specifically, this dissertation focuses on the stability and performance of a class of image-based observers in the presence of intermittent measurements, caused by e.g., occlusions, limited FOV, feature tracking losses, communication losses, or finite frame rates. Observers or filters that are exponentially stable under persistent observability may have unbounded error growth during intermittent sensing, even while providing seemingly accurate state estimates. In Chapter 3, dwell time conditions are developed to guarantee state estimation error convergence to an ultimate bound for a class of observers while undergoing measurement loss. Bounds are developed on the unstable growth of the estimation errors during the periods when the object being tracked is not visible. A Lyapunov-based analysis for the switched system is performed to develop an inequality in terms of the duration of time the observer can view the moving object and the duration of time the object is out of the field of view. In Chapter 4, a motion model is used to predict the evolution of the states of the system while the object is not visible. This reduces the growth rate of the bounding function to an exponential and enables the use of traditional switched systems Lyapunov analysis techniques. The stability analysis results in an average dwell time condition to guarantee state error convergence with a known decay rate. In comparison with the results in Chapter 3, the estimation errors converge to zero rather than a ball, with relaxed switching conditions, at the cost of requiring additional information about the motion of the feature. In some applications, a motion model of the object may not be available. 
Numerous adaptive techniques have been developed to compensate for unknown parameters or functions in system dynamics; however, persistent excitation (PE) conditions are typically required to ensure parameter convergence, i.e., learning. Since the motion model is needed in the predictor, model learning is desired; however, PE is difficult to ensure a priori and infeasible to check online for nonlinear systems. Concurrent learning (CL) techniques have been developed to use recorded data and a relaxed excitation condition to ensure convergence. In CL, excitation is only required for a finite period of time, and the recorded data can be checked to determine if it is sufficiently rich. However, traditional CL requires knowledge of state derivatives, which are typically not measured and require extensive filter design and tuning to develop satisfactory estimates. In Chapter 5 of this dissertation, a novel formulation of CL is developed in terms of an integral (ICL), removing the need to estimate state derivatives while preserving parameter convergence properties. Using ICL, an estimator is developed in Chapter 6 for simultaneously estimating the pose of an object as well as learning a model of its motion for use in a predictor when the object is not visible. A switched systems analysis is provided to demonstrate the stability of the estimation and prediction with learning scheme. Dwell time conditions as well as excitation conditions are developed to ensure estimation errors converge to an arbitrarily small bound. Experimental results are provided to illustrate the performance of each of the developed estimation schemes. The dissertation concludes with a discussion of the contributions and limitations of the developed techniques, as well as avenues for future extensions.
CREKID: A computer code for transient, gas-phase combustion kinetics
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1984-01-01
A new algorithm was developed for fast, automatic integration of chemical kinetic rate equations describing homogeneous, gas-phase combustion at constant pressure. Particular attention is paid to the distinguishing physical and computational characteristics of the induction, heat-release and equilibration regimes. The two-part predictor-corrector algorithm, based on an exponentially fitted trapezoidal rule, includes filtering of ill-posed initial conditions and automatic selection of Newton-Jacobi or Newton iteration for convergence, to achieve maximum computational efficiency while observing a prescribed error tolerance. The new algorithm was found to compare favorably with LSODE on two representative test problems drawn from combustion kinetics.
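The payoff of exponential fitting is easiest to see on a single stiff linear relaxation equation. The sketch below is an illustration of the idea, not the CREKID algorithm itself:

```python
import math

def trapezoidal_step(y, h, k, y_eq):
    """Ordinary trapezoidal rule applied to y' = -k * (y - y_eq)."""
    return y_eq + (y - y_eq) * (1.0 - k * h / 2.0) / (1.0 + k * h / 2.0)

def exp_fitted_step(y, h, k, y_eq):
    """Exponentially fitted step: exact for the linear relaxation problem."""
    return y_eq + (y - y_eq) * math.exp(-k * h)

k, y_eq, h = 1.0e3, 1.0, 0.1    # stiff case: k * h = 100
y_trap = trapezoidal_step(2.0, h, k, y_eq)  # overshoots past equilibrium
y_expf = exp_fitted_step(2.0, h, k, y_eq)   # relaxes onto equilibrium
```

For k*h = 100 the trapezoidal amplification factor (1 - kh/2)/(1 + kh/2) is close to -1, so the numerical solution overshoots and oscillates about equilibrium, while the exponentially fitted step reproduces the exact decay at any step size; this is the behavior that matters in the fast equilibration regime of combustion kinetics.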
Simple robust control laws for robot manipulators. Part 1: Non-adaptive case
NASA Technical Reports Server (NTRS)
Wen, J. T.; Bayard, D. S.
1987-01-01
A new class of exponentially stabilizing control laws for joint-level control of robot arms is introduced. It has recently been recognized that the nonlinear dynamics associated with robotic manipulators have certain inherent passivity properties. More specifically, the derivation of the robotic dynamic equations from Hamilton's principle gives rise to natural Lyapunov functions for control design based on total energy considerations. Through a slight modification of the energy Lyapunov function and the use of a convenient lemma to handle third-order terms in the Lyapunov function derivatives, closed-loop exponential stability for both the set-point and tracking control problems is demonstrated. The exponential convergence property also leads to robustness with respect to friction, bounded modeling errors and instrument noise. In one new design, the nonlinear terms are decoupled from real-time measurements, which completely removes the requirement for on-line computation of nonlinear terms in the controller implementation. In general, the new class of control laws offers alternatives to the more conventional computed-torque method, providing tradeoffs between robustness, computation and convergence properties. Furthermore, these control laws have the unique feature that they can be adapted in a very simple fashion to achieve asymptotically stable adaptive control.
NASA Astrophysics Data System (ADS)
Safouhi, Hassan; Hoggan, Philip
2003-01-01
This review on molecular integrals for large electronic systems (MILES) places the problem of analytical integration over exponential-type orbitals (ETOs) in a historical context. After reference to the pioneering work, particularly by Barnett, Shavitt and Yoshimine, it focuses on recent progress towards rapid and accurate analytic solutions of MILES over ETOs. Software such as the hydrogenlike wavefunction package Alchemy by Yoshimine and collaborators is described. The review focuses on convergence acceleration of these highly oscillatory integrals, and in particular it highlights suitable nonlinear transformations. Work by Levin and Sidi is described and applied to MILES. A step-by-step description of progress in the use of nonlinear transformation methods to obtain efficient codes is provided. The recent approach developed by Safouhi is also presented. The current state of the art in this field is summarized to show that ab initio analytical work over ETOs is now a viable option.
On the convergence of local approximations to pseudodifferential operators with applications
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1994-01-01
We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L^1 error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite-time convergence analysis of the Engquist-Majda Padé approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long-time behavior. These are based on Laguerre and exponential series.
A new approach to importance sampling for the simulation of false alarms [in radar systems]
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1987-01-01
In this paper a modified importance sampling technique for improving the convergence of importance sampling is given. By using this approach to estimate low false alarm rates in radar simulations, the number of Monte Carlo runs can be reduced significantly. For one-dimensional exponential, Weibull, and Rayleigh distributions, a uniformly minimum variance unbiased estimator is obtained. For the Gaussian distribution, the estimator in this approach is uniformly better than that of the previously known importance sampling approach. For a cell-averaging system, by combining this technique and group sampling, the reduction in Monte Carlo runs for a reference cell of 20 and a false alarm rate of 1E-6 is on the order of 170 as compared to the previously known importance sampling approach.
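The reweighting idea behind importance sampling for rare false alarms can be sketched in a few lines. The sketch below is a generic illustration, not the paper's modified estimator: it estimates a roughly 1E-6 tail probability of a unit-rate exponential by drawing from a heavier-tailed proposal (rate 1/threshold, an arbitrary choice here) and weighting each hit by the likelihood ratio.

```python
import math, random

def is_tail_estimate(threshold, n=20000, seed=0):
    """Importance-sampling estimate of P(X > threshold) for X ~ Exp(1).

    The proposal g is Exp(rate = 1/threshold), which places a large
    fraction of the samples in the rare-event region; each sample that
    lands there is reweighted by the likelihood ratio f/g.
    """
    rng = random.Random(seed)
    rate_g = 1.0 / threshold
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(rate_g)        # draw from the proposal g
        if x > threshold:                  # false-alarm indicator
            total += math.exp(-x) / (rate_g * math.exp(-rate_g * x))
    return total / n

p_hat = is_tail_estimate(13.8)             # true value is exp(-13.8), about 1e-6
```

A naive Monte Carlo run would need on the order of 10^8 samples to observe even a handful of such events; here 2×10^4 weighted samples suffice.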
NASA Astrophysics Data System (ADS)
Datta, Nilanjana; Pautrat, Yan; Rouzé, Cambyse
2016-06-01
Quantum Stein's lemma is a cornerstone of quantum statistics and concerns the problem of correctly identifying a quantum state, given the knowledge that it is one of two specific states (ρ or σ). It was originally derived in the asymptotic i.i.d. setting, in which arbitrarily many (say, n) identical copies of the state (ρ⊗n or σ⊗n) are considered to be available. In this setting, the lemma states that, for any given upper bound on the probability αn of erroneously inferring the state to be σ, the probability βn of erroneously inferring the state to be ρ decays exponentially in n, with the rate of decay converging to the relative entropy of the two states. The second order asymptotics for quantum hypothesis testing, which establishes the speed of convergence of this rate of decay to its limiting value, was derived in the i.i.d. setting independently by Tomamichel and Hayashi, and Li. We extend this result to settings beyond i.i.d. Examples of these include Gibbs states of quantum spin systems (with finite-range, translation-invariant interactions) at high temperatures, and quasi-free states of fermionic lattice gases.
Numerical Solution of Dyson Brownian Motion and a Sampling Scheme for Invariant Matrix Ensembles
NASA Astrophysics Data System (ADS)
Li, Xingjie Helen; Menon, Govind
2013-12-01
The Dyson Brownian Motion (DBM) describes the stochastic evolution of N points on the line driven by an applied potential, a Coulombic repulsion and identical, independent Brownian forcing at each point. We use an explicit tamed Euler scheme to numerically solve the Dyson Brownian motion and sample the equilibrium measure for non-quadratic potentials. The Coulomb repulsion is too singular for the SDE to satisfy the hypotheses of rigorous convergence proofs for tamed Euler schemes (Hutzenthaler et al. in Ann. Appl. Probab. 22(4):1611-1641, 2012). Nevertheless, in practice the scheme is observed to be stable for time steps of O(1/N^2) and to relax exponentially fast to the equilibrium measure with a rate constant of O(1) independent of N. Further, this convergence rate appears to improve with N, in accordance with O(1/N) relaxation of local statistics of the Dyson Brownian motion. This allows us to use the Dyson Brownian motion to sample N×N Hermitian matrices from the invariant ensembles. The computational cost of generating M independent samples is O(MN^4) with a naive scheme, and O(MN^3 log N) when a fast multipole method is used to evaluate the Coulomb interaction.
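The tamed update itself is simple to state. The sketch below fixes illustrative conventions (quartic potential V(x) = x^4/4, 1/N-scaled Coulomb drift, beta-dependent noise) that may differ from the paper's normalisation:

```python
import math, random

def tamed_euler_dbm(n=10, steps=2000, beta=2.0, seed=1):
    """Tamed Euler-Maruyama sketch for Dyson Brownian motion with a
    quartic confining potential V(x) = x^4/4 (so V'(x) = x^3).

    The tamed increment dt*b/(1 + dt*|b|) keeps the singular Coulomb
    drift from blowing up when two points approach each other.
    """
    rng = random.Random(seed)
    dt = 1.0 / n**2                      # O(1/N^2) time steps, as observed stable
    x = sorted(rng.gauss(0.0, 0.5) for _ in range(n))
    for _ in range(steps):
        new = []
        for i, xi in enumerate(x):
            coulomb = sum(1.0 / (xi - xj) for j, xj in enumerate(x) if j != i)
            b = -xi**3 + coulomb / n     # confinement + pairwise repulsion
            drift = dt * b / (1.0 + dt * abs(b))        # tamed increment
            noise = math.sqrt(2.0 * dt / (beta * n)) * rng.gauss(0.0, 1.0)
            new.append(xi + drift + noise)
        x = sorted(new)
    return x
```

Taming caps each drift increment at unit size, which is what keeps the scheme stable even though the Coulomb term violates the usual Lipschitz hypotheses.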
An investigation of the convergence to the stationary state in the Hassell mapping
NASA Astrophysics Data System (ADS)
de Mendonça, Hans M. J.; Leonel, Edson D.; de Oliveira, Juliano A.
2017-01-01
We investigate the convergence to the fixed point, and near it, in a transcritical bifurcation observed in a Hassell mapping. We considered a phenomenological description which was reinforced by a theoretical description. At the bifurcation, we confirm that convergence to the fixed point is characterized by a homogeneous function with three exponents. Near the bifurcation, the decay to the fixed point is exponential, with a relaxation time given by a power law. Although the expression of the mapping is different from the traditional logistic mapping, at the bifurcation and near it the local dynamics is essentially the same for both mappings.
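The power-law growth of the relaxation time on approach to the bifurcation is easy to probe numerically. The sketch below uses the common Hassell form x -> r*x/(1+x)^b with illustrative parameters (not taken from the paper) and counts the iterations the orbit needs to shrink by a factor of e:

```python
import math

def hassell(x, r, b=2.0):
    """One step of the Hassell map x -> r*x / (1 + x)**b."""
    return r * x / (1.0 + x) ** b

def relaxation_time(r, x0=0.01, b=2.0):
    """Iterations needed for the orbit to fall below x0/e, a proxy for
    the exponential relaxation time toward the fixed point x* = 0
    (stable for r < 1, i.e., below the transcritical bifurcation)."""
    x, n = x0, 0
    while x > x0 / math.e:
        x = hassell(x, r, b)
        n += 1
    return n

taus = {r: relaxation_time(r) for r in (0.99, 0.995, 0.999)}
```

Closer to the bifurcation (r -> 1) the relaxation time grows, consistent with a power-law divergence of the form tau ~ (1 - r)^(-1).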
Observers for a class of systems with nonlinearities satisfying an incremental quadratic inequality
NASA Technical Reports Server (NTRS)
Acikmese, Ahmet Behcet; Martin, Corless
2004-01-01
We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. Observers are presented which guarantee that the state estimation error converges to zero exponentially.
Semenov, Mikhail A; Terkel, Dmitri A
2003-01-01
This paper analyses the convergence of evolutionary algorithms using a technique which is based on a stochastic Lyapunov function and developed within martingale theory. This technique is used to investigate the convergence of a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function; and control parameters, responsible for the variation of fitness parameters. Although both parameters mutate randomly and independently, they converge to the "optimum" due to the direct (for fitness parameters) and indirect (for control parameters) selection. We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, similar to the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some martingale inequalities have not been proved analytically, they have been numerically validated with 0.999 confidence using Monte Carlo simulations.
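As a concrete toy instance of such self-adaptation, here is a (1+1) evolution strategy on a sphere objective; the objective, the log-normal mutation of the step size, and all constants are illustrative choices, not the authors' exact model:

```python
import math, random

def sphere(x):
    """Unimodal test objective: squared distance to the optimum at 0."""
    return sum(v * v for v in x)

def one_plus_one_es(dim=3, iters=3000, seed=2):
    """(1+1) evolution strategy with log-normal self-adaptation.

    Both the fitness parameters x and the control parameter sigma
    mutate randomly and independently; sigma is selected only
    indirectly, through the success of the offspring it produced.
    """
    rng = random.Random(seed)
    x = [1.0] * dim
    sigma, tau = 0.3, 1.0 / math.sqrt(dim)
    fx = sphere(x)
    for _ in range(iters):
        s = sigma * math.exp(tau * rng.gauss(0.0, 1.0))   # mutate control parameter
        y = [v + s * rng.gauss(0.0, 1.0) for v in x]      # mutate fitness parameters
        fy = sphere(y)
        if fy < fx:                                       # direct selection on x,
            x, fx, sigma = y, fy, s                       # indirect selection on sigma
    return fx

final = one_plus_one_es()
```

Note that sigma is never judged on its own value; it survives only when the offspring it generated wins, which is exactly the indirect selection of control parameters described above.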
NASA Astrophysics Data System (ADS)
Song, Qiankun; Cao, Jinde
2007-05-01
A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional, employing the homeomorphism theory, M-matrix theory and an elementary inequality (a ≥ 0, b_k ≥ 0, q_k > 0, r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential convergence velocity index is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.
A revised dislocation model of interseismic deformation of the Cascadia subduction zone
Wang, Kelin; Wells, Ray E.; Mazzotti, Stephane; Hyndman, Roy D.; Sagiya, Takeshi
2003-01-01
CAS3D‐2, a new three‐dimensional (3‐D) dislocation model, is developed to model interseismic deformation rates at the Cascadia subduction zone. The model is considered a snapshot description of the deformation field that changes with time. The effect of northward secular motion of the central and southern Cascadia forearc sliver is subtracted to obtain the effective convergence between the subducting plate and the forearc. Horizontal deformation data, including strain rates and surface velocities from Global Positioning System (GPS) measurements, provide primary geodetic constraints, but uplift rate data from tide gauges and leveling also provide important validations for the model. A locked zone, based on the results of previous thermal models constrained by heat flow observations, is located entirely offshore beneath the continental slope. Similar to previous dislocation models, an effective zone of downdip transition from locking to full slip is used, but the slip deficit rate is assumed to decrease exponentially with downdip distance. The exponential function resolves the problem of overpredicting coastal GPS velocities and underpredicting inland velocities by previous models that used a linear downdip transition. A wide effective transition zone (ETZ) partially accounts for stress relaxation in the mantle wedge that cannot be simulated by the elastic model. The pattern of coseismic deformation is expected to be different from that of interseismic deformation at present, 300 years after the last great subduction earthquake. The downdip transition from full rupture to no slip should take place over a much narrower zone.
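In schematic form, the exponential downdip transition replaces a linear ramp with an e-folding decay of the slip deficit rate. The parameterisation below (full locking updip of a lock limit, a single decay width for the effective transition zone) is a cartoon of the idea, not the CAS3D-2 implementation:

```python
import math

def slip_deficit_rate(x, v_plate, x_lock, width):
    """Slip deficit rate at downdip distance x (schematic).

    Fully locked (rate == v_plate) updip of x_lock; the deficit then
    decays exponentially over an effective transition zone with the
    given e-folding width, instead of tapering linearly.
    """
    if x <= x_lock:
        return v_plate
    return v_plate * math.exp(-(x - x_lock) / width)

# e.g. 40 mm/yr convergence, locked zone ending 50 km downdip, 60 km e-fold
rates = [slip_deficit_rate(x, 40.0, 50.0, 60.0) for x in (0.0, 80.0, 110.0)]
```

Relative to a linear taper, the exponential tail keeps some slip deficit far inland, which is what reduces the predicted coastal velocities and raises the inland ones.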
NASA Astrophysics Data System (ADS)
Doha, E.; Bhrawy, A.
2006-06-01
It is well known that spectral methods (tau, Galerkin, collocation) lead to linear systems whose condition number grows like O(N^4), where N is the number of retained modes of the polynomial approximation. This paper presents some efficient spectral algorithms with much better conditioning, based on the Jacobi–Galerkin methods for second-order elliptic equations in one and two space variables. The key to the efficiency of these algorithms is to construct appropriate base functions, which lead to systems with specially structured matrices that can be efficiently inverted. The complexities of the algorithms are a small multiple of N^(d+1) operations for a d-dimensional domain with O(N^d) unknowns, while the convergence rates of the algorithms are exponential for smooth solutions.
NASA Astrophysics Data System (ADS)
Gu, Zhou; Fei, Shumin; Yue, Dong; Tian, Engang
2014-07-01
This paper deals with the problem of H∞ filtering for discrete-time systems with stochastic missing measurements. A new missing-measurement model is developed by decomposing the interval of the missing rate into several segments. The probability of the missing rate in each subsegment is governed by its corresponding random variables. We aim to design a linear full-order filter such that the estimation error converges to zero exponentially in the mean square with less conservatism, while the disturbance rejection attenuation is constrained to a given level by means of an H∞ performance index. Based on Lyapunov theory, the reliable filter parameters are characterised in terms of the feasibility of a set of linear matrix inequalities. Finally, a numerical example is provided to demonstrate the effectiveness and applicability of the proposed design approach.
Rapid Genetic Adaptation during the First Four Months of Survival under Resource Exhaustion.
Avrani, Sarit; Bolotin, Evgeni; Katz, Sophia; Hershberg, Ruth
2017-07-01
Many bacteria, including the model bacterium Escherichia coli, can survive for years within spent media, following resource exhaustion. We carried out evolutionary experiments, followed by whole-genome sequencing of hundreds of evolved clones, to study the dynamics by which E. coli adapts during the first 4 months of survival under resource exhaustion. Our results reveal that bacteria evolving under resource exhaustion are subject to intense selection, manifesting in rapid mutation accumulation, enrichment in functional mutation categories and extremely convergent adaptation. In the most striking example of convergent adaptation, we found that across five independent populations, adaptation to conditions of resource exhaustion occurs through mutations to the same three specific positions of the RNA polymerase core enzyme. Mutations to these three sites are strongly antagonistically pleiotropic, in that they sharply reduce exponential growth rates in fresh media. Such antagonistically pleiotropic mutations, combined with the accumulation of additional mutations, severely reduce the ability of bacteria surviving under resource exhaustion to grow exponentially in fresh media. We further demonstrate that the three positions at which these resource-exhaustion mutations occur are conserved for the ancestral E. coli allele across bacterial phyla, with the exception of nonculturable bacteria, which carry the resource-exhaustion allele at one of these positions at very high frequencies. Finally, our results demonstrate that adaptation to resource exhaustion is not limited by mutational input and that bacteria are able to rapidly adapt under resource exhaustion in a temporally precise manner through allele frequency fluctuations. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Fully automatic hp-adaptivity for acoustic and electromagnetic scattering in three dimensions
NASA Astrophysics Data System (ADS)
Kurtz, Jason Patrick
We present an algorithm for fully automatic hp-adaptivity for finite element approximations of elliptic and Maxwell boundary value problems in three dimensions. The algorithm automatically generates a sequence of coarse grids, and a corresponding sequence of fine grids, such that the energy norm of the error decreases exponentially with respect to the number of degrees of freedom in either sequence. At each step, we employ a discrete optimization algorithm to determine the refinements for the current coarse grid such that the projection-based interpolation error for the current fine grid solution decreases with an optimal rate with respect to the number of degrees of freedom added by the refinement. The refinements are restricted only by the requirement that the resulting mesh is at most 1-irregular, but they may be anisotropic in both element size h and order of approximation p. While we cannot prove that our method converges at all, we present numerical evidence of exponential convergence for a diverse suite of model problems from acoustic and electromagnetic scattering. In particular we show that our method is well suited to the automatic resolution of exterior problems truncated by the introduction of a perfectly matched layer. To enable and accelerate the solution of these problems on commodity hardware, we include a detailed account of three critical aspects of our implementation, namely an efficient implementation of sum factorization, several efficient interfaces to the direct multi-frontal solver MUMPS, and some fast direct solvers for the computation of a sequence of nested projections.
On High-Order Radiation Boundary Conditions
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1995-01-01
In this paper we develop the theory of high-order radiation boundary conditions for wave propagation problems. In particular, we study the convergence of sequences of time-local approximate conditions to the exact boundary condition, and subsequently estimate the error in the solutions obtained using these approximations. We show that for finite times the Padé approximants proposed by Engquist and Majda lead to exponential convergence if the solution is smooth, but that good long-time error estimates cannot hold for spatially local conditions. Applications in fluid dynamics are also discussed.
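The flavour of the Padé construction can be seen on the scalar model problem of approximating a square root by rationals, which is what these boundary conditions do at the symbol level. The continued-fraction recursion below is a generic construction of such a rational family, not Engquist and Majda's exact coefficients:

```python
import math

def sqrt_cf_approx(s, n):
    """n-th rational approximant to sqrt(1 - s) from the recursion
    r_{k+1} = 1 - s/(1 + r_k), whose fixed point is sqrt(1 - s).

    Each level yields a rational function of s, a Pade-type family
    like those used for absorbing boundary conditions.
    """
    r = 1.0
    for _ in range(n):
        r = 1.0 - s / (1.0 + r)
    return r

errs = [abs(sqrt_cf_approx(0.5, n) - math.sqrt(0.5)) for n in (1, 3, 5, 10)]
```

For |s| < 1 each extra level contracts the error by roughly a constant factor, i.e., geometric (exponential-in-n) convergence; the contraction degrades as s approaches the symbol's singularity at s = 1, a scalar hint of why uniform-in-time accuracy fails for local conditions.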
Exponentially accurate approximations to piece-wise smooth periodic functions
NASA Technical Reports Server (NTRS)
Greer, James; Banerjee, Saheb
1995-01-01
A family of simple, periodic basis functions with 'built-in' discontinuities are introduced, and their properties are analyzed and discussed. Some of their potential usefulness is illustrated in conjunction with the Fourier series representations of functions with discontinuities. In particular, it is demonstrated how they can be used to construct a sequence of approximations which converges exponentially in the maximum norm to a piece-wise smooth function. The theory is illustrated with several examples and the results are discussed in the context of other sequences of functions which can be used to approximate discontinuous functions.
Computing Rydberg Electron Transport Rates Using Periodic Orbits
NASA Astrophysics Data System (ADS)
Sattari, Sulimon; Mitchel, Kevin
2017-04-01
Electron transport rates in chaotic atomic systems are computable from classical periodic orbits. This technique allows replacing a Monte Carlo simulation launching millions of orbits with a sum over tens or hundreds of properly chosen periodic orbits, using a formula called the spectral determinant. A firm grasp of the structure of the periodic orbits is required to obtain accurate transport rates. We apply a technique called homotopic lobe dynamics (HLD) to understand the structure of periodic orbits and to compute the ionization rate in a classically chaotic atomic system, namely the hydrogen atom in strong parallel electric and magnetic fields. HLD uses information encoded in the intersections of stable and unstable manifolds of a few orbits to compute relevant periodic orbits in the system. All unstable periodic orbits are computed up to a given period, and the ionization rate computed from periodic orbits converges exponentially to the true value as a function of the period used. Using periodic orbit continuation, the ionization rate is computed over a range of electron energy and magnetic field values. The future goal of this work is to semiclassically compute quantum resonances using periodic orbits.
Zhang, Honghu
2006-04-01
The acoustical radiosity method is a computationally expensive acoustical simulation algorithm that assumes an enclosure with ideal diffuse reflecting boundaries. Miles observed that for such an enclosure, the sound energy decay of every point on the boundaries will gradually converge to an exponential decay with a uniform decay rate. Therefore, the ratio of radiosity between every pair of points on the boundaries will converge to a constant, and the radiosity across the boundaries will approach a fixed distribution during the sound decay process, where radiosity is defined as the acoustic power per unit area leaving (or being received by) a point on a boundary. We call this phenomenon the "relaxation" of the sound field. In this paper, we study the relaxation in rooms of different shapes with different boundary absorptions. Criteria based on the relaxation of the sound field are proposed to terminate the costly and unnecessary radiosity computation in the later phase, which can then be replaced by a fast regression step to speed up the acoustical radiosity simulation.
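The relaxation is essentially the power-iteration phenomenon: once the dominant eigenmode of the interreflection operator takes over, every patch decays at the same exponential rate and the inter-patch ratios freeze. A toy three-patch discrete-time version, with form factors and reflectances invented for illustration:

```python
def relaxation_demo(steps=60):
    """Power-iteration sketch of radiosity decay in a 3-patch enclosure.

    F is a row-stochastic form-factor matrix and rho the reflectances;
    repeated application of rho*F makes every patch decay exponentially
    at a common rate (the dominant eigenvalue), so the radiosity ratios
    between patches "relax" to constants.
    """
    F = [[0.0, 0.6, 0.4], [0.5, 0.0, 0.5], [0.4, 0.6, 0.0]]
    rho = [0.8, 0.7, 0.9]
    B = [1.0, 0.2, 0.5]                 # arbitrary initial radiosity
    ratios = []
    for _ in range(steps):
        B = [rho[i] * sum(F[i][j] * B[j] for j in range(3)) for i in range(3)]
        ratios.append(B[0] / B[1])
    return ratios

ratios = relaxation_demo()
```

A termination criterion of the kind proposed in the paper would monitor a quantity like ratios[-1] - ratios[-2] and hand over to the cheap regression step once it falls below a tolerance.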
NASA Technical Reports Server (NTRS)
Childs, A. G.
1971-01-01
A discrete steepest ascent method which allows controls which are not piecewise constant (for example, it allows all continuous piecewise linear controls) was derived for the solution of optimal programming problems. This method is based on the continuous steepest ascent method of Bryson and Denham and new concepts introduced by Kelley and Denham in their development of compatible adjoints for taking into account the effects of numerical integration. The method is a generalization of the algorithm suggested by Canon, Cullum, and Polak, with the details of the gradient computation given. The discrete method was compared with the continuous method for an aerodynamics problem for which an analytic solution is given by Pontryagin's maximum principle, and numerical results are presented. The discrete method converges more rapidly than the continuous method at first, but then, for some undetermined reason, loses its exponential convergence rate. A comparison was also made for the algorithm of Canon, Cullum, and Polak using piecewise constant controls. This algorithm is very competitive with the continuous algorithm.
NASA Technical Reports Server (NTRS)
Watson, Robert A.
1991-01-01
Approximate solutions of static and dynamic beam problems by the p-version of the finite element method are investigated. Within a hierarchy of engineering beam idealizations, rigorous formulations of the strain and kinetic energies for straight and circular beam elements are presented. These formulations include rotating coordinate system effects and geometric nonlinearities to allow for the evaluation of vertical axis wind turbines, the motivating problem for this research. Hierarchic finite element spaces, based on extensions of the polynomial orders used to approximate the displacement variables, are constructed. The developed models are implemented into a general purpose computer program for evaluation. Quality control procedures are examined for a diverse set of sample problems. These procedures include estimating discretization errors in energy norm and natural frequencies, performing static and dynamic equilibrium checks, observing convergence for quantities of interest, and comparison with more exacting theories and experimental data. It is demonstrated that p-extensions produce exponential rates of convergence in the approximation of strain energy and natural frequencies for the class of problems investigated.
Sliding Mode Fault Tolerant Control with Adaptive Diagnosis for Aircraft Engines
NASA Astrophysics Data System (ADS)
Xiao, Lingfei; Du, Yanbin; Hu, Jixiang; Jiang, Bin
2018-03-01
In this paper, a novel sliding mode fault tolerant control method is presented for aircraft engine systems with uncertainties and disturbances on the basis of an adaptive diagnostic observer. By taking both sensor faults and actuator faults into account, the general model of aircraft engine control systems, which is subjected to uncertainties and disturbances, is considered. Then, the corresponding augmented dynamic model is established in order to facilitate the fault diagnosis and fault tolerant controller design. Next, a suitable detection observer is designed to detect the faults effectively. Through creating an adaptive diagnostic observer and based on a sliding mode strategy, the sliding mode fault tolerant controller is constructed. Robust stabilization is discussed and the closed-loop system can be stabilized robustly. It is also proven that the adaptive diagnostic observer output errors and the estimations of faults converge exponentially to a set, with a convergence rate greater than some value that can be adjusted by choosing designable parameters properly. The simulation on a twin-shaft aircraft engine verifies the applicability of the proposed fault tolerant control method.
A Numerical, Literal, and Converged Perturbation Algorithm
NASA Astrophysics Data System (ADS)
Wiesel, William E.
2017-09-01
The KAM theorem and von Zeipel's method are applied to a perturbed harmonic oscillator, and it is noted that the KAM methodology does not allow for necessary frequency or angle corrections, while von Zeipel's does. The KAM methodology can be carried out with purely numerical methods, since its generating function does not contain momentum dependence. The KAM iteration is extended to allow for frequency and angle changes, and in the process apparently can be successfully applied to degenerate systems normally ruled out by the classical KAM theorem. Convergence is observed to be geometric, not exponential, but it does proceed smoothly to machine precision. The algorithm produces a converged perturbation solution by numerical methods, while still retaining literal variable dependence, at least in the vicinity of a given trajectory.
SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, X; Duan, J; Popple, R
2014-06-01
Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of radial dose functions of the new iodine brachytherapy source: Iodine-125 Seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in less than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for searching coefficients of bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
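A bare-bones version of the fitting procedure can be sketched as follows, with synthetic data standing in for the published radial dose values, a [0, 2]^4 search box, and illustrative PSO constants (20 particles, inertia 0.7, acceleration coefficients 1.5); none of these settings are from the abstract:

```python
import math, random

def biexp(r, p):
    """Bi-exponential model A1*exp(-m1*r) + A2*exp(-m2*r)."""
    a1, m1, a2, m2 = p
    return a1 * math.exp(-m1 * r) + a2 * math.exp(-m2 * r)

def pso_fit(radii, data, n_particles=20, iters=200, seed=4):
    """Particle swarm search for the 4 coefficients minimising the
    sum of squared deviations from the tabulated data."""
    rng = random.Random(seed)
    lo, hi = 0.0, 2.0                                  # search box, all 4 coefficients
    cost = lambda p: sum((biexp(r, p) - d) ** 2 for r, d in zip(radii, data))
    xs = [[rng.uniform(lo, hi) for _ in range(4)] for _ in range(n_particles)]
    vs = [[0.0] * 4 for _ in range(n_particles)]
    pbest = [list(p) for p in xs]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(4):                         # inertia + cognitive + social pulls
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * rng.random() * (pbest[i][d] - xs[i][d])
                            + 1.5 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = list(xs[i])
        gbest = min(pbest, key=cost)
    return gbest, cost(gbest)

truth = (1.0, 0.5, 0.5, 1.5)                           # synthetic "published" curve
radii = [0.5 * k for k in range(1, 11)]
data = [biexp(r, truth) for r in radii]
best, best_cost = pso_fit(radii, data)
```

The tri-exponential case is identical in structure, with a 6-coefficient particle instead of 4.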
On convergence of the unscented Kalman-Bucy filter using contraction theory
NASA Astrophysics Data System (ADS)
Maree, J. P.; Imsland, L.; Jouffroy, J.
2016-06-01
Contraction theory provides a theoretical framework in which convergence of a nonlinear system can be analysed differentially in an appropriate contraction metric. This paper is concerned with utilising stochastic contraction theory to conclude on exponential convergence of the unscented Kalman-Bucy filter. The underlying process and measurement models of interest are Itô-type stochastic differential equations. In particular, statistical linearisation techniques are employed in a virtual-actual systems framework to establish deterministic contraction of the estimated expected mean of process values. Under mild conditions of bounded process noise, we extend the results on deterministic contraction to stochastic contraction of the estimated expected mean of the process state. It follows that for the regions of contraction, a result on convergence, and thereby incremental stability, is concluded for the unscented Kalman-Bucy filter. The theoretical concepts are illustrated in two case studies.
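The filter's prediction step rests on the unscented transform, which propagates a small set of sigma points through the nonlinearity instead of linearising it. A scalar sketch with the standard weights (kappa chosen so n + kappa = 3, a common heuristic; unrelated to the contraction analysis itself):

```python
import math

def unscented_transform(m, P, f, kappa=2.0):
    """Scalar unscented transform: propagate mean m and variance P
    through a nonlinearity f using 3 sigma points."""
    n = 1
    spread = math.sqrt((n + kappa) * P)
    pts = [m, m + spread, m - spread]
    w = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    ys = [f(p) for p in pts]
    mean = sum(wi * yi for wi, yi in zip(w, ys))
    var = sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, ys))
    return mean, var

mean, var = unscented_transform(1.0, 0.25, lambda x: x * x)
```

For the quadratic f(x) = x^2 with a Gaussian prior, these sigma-point moments are exact: mean m^2 + P and variance 4*m^2*P + 2*P^2.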
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2017-05-01
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise.
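The harmonic Poisson process is straightforward to simulate because it is the image of a homogeneous Poisson process under exponentiation, which is also the source of its scale invariance. A sketch with arbitrary interval and rate:

```python
import math, random

def harmonic_poisson_sample(a, b, c=1.0, seed=3):
    """Sample the points of a Poisson process with intensity c/x on [a, b).

    A homogeneous rate-c process in t = log(x) maps to intensity c/x
    after exponentiation; this is what makes the law invariant under
    rescalings x -> s*x.
    """
    rng = random.Random(seed)
    pts, t, end = [], math.log(a), math.log(b)
    while True:
        t += rng.expovariate(c)       # homogeneous increments in log-space
        if t >= end:
            break
        pts.append(math.exp(t))       # back to the positive half-line
    return pts

pts = harmonic_poisson_sample(1.0, 100.0, c=2.0)
```

The expected number of points on [a, b) is c*log(b/a), so any two intervals with the same ratio b/a carry, on average, the same number of points.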
Makri, Nancy
2014-10-07
The real-time path integral representation of the reduced density matrix for a discrete system in contact with a dissipative medium is rewritten in terms of the number of blips, i.e., elementary time intervals over which the forward and backward paths are not identical. For a given set of blips, it is shown that the path sum with respect to the coordinates of all remaining time points is isomorphic to that for the wavefunction of a system subject to an external driving term and thus can be summed by an inexpensive iterative procedure. This exact decomposition reduces the number of terms by a factor that increases exponentially with propagation time. Further, under conditions (moderately high temperature and/or dissipation strength) that lead primarily to incoherent dynamics, the "fully incoherent limit" zero-blip term of the series provides a reasonable approximation to the dynamics, and the blip series converges rapidly to the exact result. Retention of only the blips required for satisfactory convergence leads to speedup of full-memory path integral calculations by many orders of magnitude.
Statistical steady states in turbulent droplet condensation
NASA Astrophysics Data System (ADS)
Bec, Jeremie; Krstulovic, Giorgio; Siewert, Christoph
2017-11-01
We investigate the general problem of turbulent condensation. Using direct numerical simulations, we show that the fluctuations of the supersaturation field offer different conditions for the growth of droplets, which evolve in time due to turbulent transport and mixing. This leads us to propose a Lagrangian stochastic model consisting of a set of integro-differential equations for the joint evolution of the squared radius and the supersaturation along droplet trajectories. The model has two parameters fixed by the total amount of water and the thermodynamic properties, as well as the Lagrangian integral timescale of the turbulent supersaturation. The model reproduces very well the droplet size distributions obtained from direct numerical simulations and their time evolution. A noticeable result is that, after a stage where the squared radius simply diffuses, the system converges exponentially fast to a statistical steady state independent of the initial conditions. The main mechanism involved in this convergence is a loss of memory induced by a significant number of droplets undergoing a complete evaporation before growing again. The statistical steady state is characterised by an exponential tail in the droplet mass distribution.
Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang; Zhu, Wei; Gao, Lan
2016-12-01
In this paper, we consider the event-triggered distributed average-consensus of discrete-time first-order multiagent systems with limited communication data rate and general directed network topology. In the framework of a digital communication network, each agent has a real-valued state but can only exchange a finite-bit binary symbolic data sequence with its neighborhood agents at each time step, due to the digital communication channels with energy constraints. Novel event-triggered dynamic encoders and decoders for each agent are designed, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization levels (number of bits) at each time step is developed, under which all the quantizers in the network are never saturated. The convergence rate of consensus is explicitly characterized, which is related to the scale of the network, the maximum degree of nodes, the network structure, the scaling function, the quantization interval, the initial states of agents, the control gain and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, for any directed digital network containing a spanning tree, distributed average consensus can always be achieved with an exponential convergence rate based on merely one-bit information exchange between each pair of adjacent agents at each time step. Two simulation examples are provided to illustrate the feasibility of the presented protocol and the correctness of the theoretical results.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
We continue our investigation of overcoming the Gibbs phenomenon, i.e., obtaining exponential accuracy at all points (including at the discontinuities themselves) from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. We show that if we are given the first N Gegenbauer expansion coefficients, based on the Gegenbauer polynomials C_k^mu(x) with the weight function (1 - x^2)^(mu - 1/2) for any constant mu >= 0, of an L_1 function f(x), we can construct an exponentially convergent approximation to the point values of f(x) in any subinterval in which the function is analytic. The proof covers the cases of Chebyshev or Legendre partial sums, which are most common in applications.
Hamed, Kaveh Akbari; Gregg, Robert D
2016-07-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.
Nie, Xiaobing; Cao, Jinde
2011-11-01
In this paper, second-order interactions are introduced into competitive neural networks (NNs) and multistability is discussed for second-order competitive NNs (SOCNNs) with nondecreasing saturated activation functions. Firstly, based on decomposition of the state space, the Cauchy convergence principle, and an inequality technique, some sufficient conditions ensuring the local exponential stability of 2^N equilibrium points are derived. Secondly, some conditions are obtained for ascertaining equilibrium points to be locally exponentially stable and to be located in any designated region. Thirdly, the theory is extended to more general saturated activation functions with 2r corner points, and a sufficient criterion is given under which the SOCNNs can have (r+1)^N locally exponentially stable equilibrium points. Even if there are no second-order interactions, the obtained results are less restrictive than those in some recent works. Finally, three examples with their simulations are presented to verify the theoretical analysis.
Hamed, Kaveh Akbari; Gregg, Robert D.
2016-01-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg. PMID:27990059
Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity
Englehardt, James D.
2015-01-01
Many complex systems produce outcomes having recurring, power-law-like distributions over wide ranges. However, that form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation of the incremental compounding rates and by entropy maximization subject to a finite positive mean. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued, theoretically and by simulation, to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are exponentially distributed. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study. PMID:26061263
One hundred and fifty years of sprint and distance running – Past trends and future prospects
Weiss, Martin; Newman, Alexandra; Whitmore, Ceri; Weiss, Stephan
2016-01-01
Sprint and distance running have experienced remarkable performance improvements over the past century. Attempts to forecast running performances share an almost equally long history but have so far relied on relatively short data series. Here, we compile a comprehensive set of season-best performances for eight Olympically contested running events. With this data set, we conduct (1) an exponential time series analysis and (2) a power-law experience curve analysis to quantify the rate of past performance improvements and to forecast future performances until the year 2100. We find that the sprint and distance running performances of women and men improve exponentially with time and converge at yearly rates of 4% ± 3% and 2% ± 2%, respectively, towards their asymptotic limits. Running performances can also be modelled with the experience curve approach, yielding learning rates of 3% ± 1% and 6% ± 2% for the women's and men's events, respectively. Long-term trends suggest that: (1) women will continue to run 10–20% slower than men, (2) 9.50 s over the 100 m dash may only be broken at the end of this century and (3) several middle- and long-distance records may be broken within the next two to three decades. The prospects of witnessing a sub-2 hour marathon before 2100 remain inconclusive. Our results should be interpreted cautiously, as forecasting human behaviour is intrinsically uncertain. Future season-best sprint and distance running performances will continue to scatter around the trends identified here and may yield unexpected improvements of standing world records. PMID:26088705
Viète's Formula and an Error Bound without Taylor's Theorem
ERIC Educational Resources Information Center
Boucher, Chris
2018-01-01
This note presents a derivation of Viète's classic product approximation of pi that relies only on the Pythagorean Theorem. We also give a simple error bound for the approximation that, while not optimal, still reveals the exponential convergence of the approximation, and whose derivation does not require Taylor's Theorem.
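Viète's product is straightforward to evaluate numerically; the sketch below (the standard nested-radical recursion, not the note's own derivation) illustrates the exponential convergence, with the error shrinking by roughly a factor of four per extra factor:

```python
import math

# Nested-radical evaluation of Viète's product:
#   pi ≈ 2 * prod_{k=1..n} (2 / a_k),  a_1 = sqrt(2), a_{k+1} = sqrt(2 + a_k).
# The error shrinks by about a factor of 4 per term, i.e. exponentially
# in n, consistent with the kind of bound discussed in the note.
def viete(n):
    a, approx = 0.0, 2.0
    for _ in range(n):
        a = math.sqrt(2.0 + a)
        approx *= 2.0 / a
    return approx

errs = [abs(viete(n) - math.pi) for n in (5, 10, 15)]
```

Each additional factor corresponds to doubling the number of sides of the inscribed polygon, which is why a handful of terms already gives several correct digits.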
Lambert, Amaury
2011-07-01
We consider a general, neutral, dynamical model of biodiversity. Individuals have i.i.d. lifetime durations, which are not necessarily exponentially distributed, and each individual gives birth independently at constant rate λ. Thus, the population size is a homogeneous, binary Crump-Mode-Jagers process (which is not necessarily a Markov process). We assume that types are clonally inherited. We consider two classes of speciation models in this setting. In the immigration model, new individuals of an entirely new species singly enter the population at constant rate μ (e.g., from the mainland into the island). In the mutation model, each individual independently experiences point mutations in its germ line, at constant rate θ. We are interested in the species abundance distribution, i.e., in the numbers, denoted I_n(k) in the immigration model and A_n(k) in the mutation model, of species represented by k individuals, k = 1, 2, ..., n, when there are n individuals in the total population. In the immigration model, we prove that the numbers (I_t(k); k ≥ 1) of species represented by k individuals at time t are independent Poisson variables with parameters as in Fisher's log-series. When conditioning on the total size of the population to equal n, this results in species abundance distributions given by Ewens' sampling formula. In particular, I_n(k) converges as n → ∞ to a Poisson r.v. with mean γ/k, where γ := μ/λ. In the mutation model, as n → ∞, we obtain the almost sure convergence of n^(-1) A_n(k) to a nonrandom explicit constant. In the case of a critical, linear birth-death process, this constant is given by Fisher's log-series, namely n^(-1) A_n(k) converges to α^k/k, where α := λ/(λ + θ). In both models, the abundances of the most abundant species are briefly discussed.
Application of the optimal homotopy asymptotic method to nonlinear Bingham fluid dampers
NASA Astrophysics Data System (ADS)
Marinca, Vasile; Ene, Remus-Daniel; Bereteu, Liviu
2017-10-01
Dynamic response time is an important feature for determining the performance of magnetorheological (MR) dampers in practical civil engineering applications. The objective of this paper is to show how to use the Optimal Homotopy Asymptotic Method (OHAM) to give approximate analytical solutions of the nonlinear differential equation of a modified Bingham model with non-viscous exponential damping. Our procedure does not depend upon small parameters and provides us with a convenient way to optimally control the convergence of the approximate solutions. OHAM is very efficient in practice for ensuring very rapid convergence of the solution after only one iteration and with a small number of steps.
Effects of Light and Temperature on Fatty Acid Production in Nannochloropsis Salina
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Wagenen, Jonathan M.; Miller, Tyler W.; Hobbs, Samuel J.
2012-03-12
Accurate prediction of algal biofuel yield will require empirical determination of physiological responses to the climate, particularly light and temperature. One strain of interest, Nannochloropsis salina, was subjected to ranges of light intensity (5-850 µmol m^-2 s^-1) and temperature (13-40 °C); exponential growth rate, total fatty acids (TFA) and fatty acid composition were measured. The maximum acclimated growth rate was 1.3 day^-1 at 23 °C and 250 µmol m^-2 s^-1. Fatty acids were detected by gas chromatography with flame ionization detection (GC-FID) after transesterification to the corresponding fatty acid methyl esters (FAME). A sharp increase in TFA containing elevated palmitic acid (C16:0) and palmitoleic acid (C16:1) during exponential growth at high light was observed, indicating likely triacylglycerol accumulation due to photo-oxidative stress. Lower light resulted in increases in the relative abundance of unsaturated fatty acids; in thin cultures, increases were observed in palmitoleic and eicosapentaenoic acids (C20:5ω3). As cultures aged and the effective light intensity per cell converged to very low levels, fatty acid profiles became more similar and there was a notable increase of oleic acid (C18:1ω9). The amount of unsaturated fatty acids was inversely proportional to temperature, demonstrating physiological adaptations to increase membrane fluidity. These data will improve prediction of fatty acid characteristics and yields relevant to biofuel production.
Rigorous Proof of the Boltzmann-Gibbs Distribution of Money on Connected Graphs
NASA Astrophysics Data System (ADS)
Lanchier, Nicolas
2017-04-01
Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.
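The simplest version of the model is easy to simulate; the following Monte Carlo sketch (parameters are illustrative, not from the paper) shows the empirical fraction of "broke" agents approaching the value 1/(m + 1) predicted by the Boltzmann-Gibbs law (exponential, i.e. geometric for integer money) with mean money m:

```python
import random

# Monte Carlo sketch of the simplest exchange model (illustrative
# parameters): at each step a randomly chosen solvent agent gives one
# dollar to another randomly chosen agent. The money distribution relaxes
# to the Boltzmann-Gibbs law, for which P(X = 0) = 1/(m + 1) ≈ 0.167
# when the mean money per agent is m = 5.
random.seed(0)
n_agents, mean_money = 500, 5
money = [mean_money] * n_agents

for _ in range(200_000):
    i = random.randrange(n_agents)
    j = random.randrange(n_agents)
    if i != j and money[i] > 0:
        money[i] -= 1          # donor must be solvent
        money[j] += 1

frac_broke = money.count(0) / n_agents   # expected near 1/6
```

Total money is conserved exactly, mirroring the closed-system assumption under which the paper proves convergence to the exponential distribution.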
A snapshot attractor view of the advection of inertial particles in the presence of history force
NASA Astrophysics Data System (ADS)
Guseva, Ksenia; Daitche, Anton; Tél, Tamás
2017-06-01
We analyse the effect of the Basset history force on the sedimentation or rising of inertial particles in a two-dimensional convection flow. We find that the concept of snapshot attractors is useful to understand the extraordinary slow convergence due to long-term memory: an ensemble of particles converges exponentially fast towards a snapshot attractor, and this attractor undergoes a slow drift for long times. We demonstrate for the case of a periodic attractor that the drift of the snapshot attractor can be well characterized both in the space of the fluid and in the velocity space. For the case of quasiperiodic and chaotic dynamics we propose the use of the average settling velocity of the ensemble as a distinctive measure to characterize the snapshot attractor and the time scale separation corresponding to the convergence towards the snapshot attractor and its own slow dynamics.
A reward optimization method based on action subrewards in hierarchical reinforcement learning.
Fu, Yuchen; Liu, Quan; Ling, Xionghong; Cui, Zhiming
2014-01-01
Reinforcement learning (RL) is a kind of interactive learning method whose main characteristics are "trial and error" and "related reward." A hierarchical reinforcement learning method based on action subrewards is proposed to address the "curse of dimensionality", in which the state space grows exponentially with the number of features, leading to low convergence speed. The method can greatly reduce the state space and choose actions with favorable purpose and efficiency, so as to optimize the reward function and enhance the convergence speed. Applied to online learning in the game of Tetris, the experimental results show that the convergence speed of the algorithm is evidently enhanced by the new method, which combines a hierarchical reinforcement learning algorithm with action subrewards. The "curse of dimensionality" problem is also solved to a certain extent by the hierarchical method. The performance with different parameters is compared and analyzed as well.
Modelling and finite-time stability analysis of psoriasis pathogenesis
NASA Astrophysics Data System (ADS)
Oza, Harshal B.; Pandey, Rakesh; Roper, Daniel; Al-Nuaimi, Yusur; Spurgeon, Sarah K.; Goodfellow, Marc
2017-08-01
A new systems model of psoriasis is presented and analysed from the perspective of control theory. Cytokines are treated as actuators to the plant model that govern the cell population under the reasonable assumption that cytokine dynamics are faster than the cell population dynamics. The analysis of various equilibria is undertaken based on singular perturbation theory. Finite-time stability and stabilisation have been studied in various engineering applications where the principal paradigm uses non-Lipschitz functions of the states. A comprehensive study of the finite-time stability properties of the proposed psoriasis dynamics is carried out. It is demonstrated that the dynamics are finite-time convergent to certain equilibrium points rather than asymptotically or exponentially convergent. This feature of finite-time convergence motivates the development of a modified version of the Michaelis-Menten function, frequently used in biology. This framework is used to model cytokines as fast finite-time actuators.
Quantum-Inspired Multidirectional Associative Memory With a Self-Convergent Iterative Learning.
Masuyama, Naoki; Loo, Chu Kiong; Seera, Manjeevan; Kubota, Naoyuki
2018-04-01
Quantum-inspired computing is an emerging research area, which has significantly improved the capabilities of conventional algorithms. In general, the quantum-inspired Hopfield associative memory (QHAM) has demonstrated quantum information processing in neural structures. This has resulted in an exponential increase in storage capacity while explaining the extensive memory, and it has the potential to illustrate the dynamics of neurons in the human brain when viewed from a quantum-mechanics perspective, although the application of QHAM is limited to autoassociation. In this paper we introduce a quantum-inspired multidirectional associative memory (QMAM) with a one-shot learning model, and a QMAM with a self-convergent iterative learning model (IQMAM), both based on QHAM. The self-convergent iterative learning enables the network to progressively develop a resonance state from inputs to outputs. Simulation experiments demonstrate the advantages of QMAM and IQMAM, especially in the stability of recall reliability.
Sinc-Galerkin estimation of diffusivity in parabolic problems
NASA Technical Reports Server (NTRS)
Smith, Ralph C.; Bowers, Kenneth L.
1991-01-01
A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.
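The exponential convergence rate typical of sinc-based discretizations can be illustrated by a closely related fact: the trapezoidal rule on the real line, which underlies sinc quadrature, is exponentially accurate for analytic integrands. This toy computation (not the paper's Sinc-Galerkin scheme) integrates a Gaussian:

```python
import math

# Trapezoidal (sinc-type) quadrature of exp(-x^2) over the real line,
# whose exact value is sqrt(pi). For this analytic integrand the error
# behaves like exp(-pi**2 / h**2), so modest reductions of the step h
# gain many digits at once -- an exponential convergence rate.
def trap(h, terms=80):
    return h * math.fsum(math.exp(-(k * h) ** 2)
                         for k in range(-terms, terms + 1))

errs = {h: abs(trap(h) - math.sqrt(math.pi)) for h in (1.0, 0.75, 0.5)}
```

Going from h = 1.0 to h = 0.5 takes the error from roughly 1e-4 down to near machine precision, in contrast with the algebraic (power-of-h) convergence of standard finite-interval quadrature.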
Matrix-valued Boltzmann equation for the nonintegrable Hubbard chain.
Fürst, Martin L R; Mendl, Christian B; Spohn, Herbert
2013-07-01
The standard Fermi-Hubbard chain becomes nonintegrable by adding to the nearest neighbor hopping additional longer range hopping amplitudes. We assume that the quartic interaction is weak and investigate numerically the dynamics of the chain on the level of the Boltzmann type kinetic equation. Only the spatially homogeneous case is considered. We observe that the huge degeneracy of stationary states in the case of nearest neighbor hopping is lost and the convergence to the thermal Fermi-Dirac distribution is restored. The convergence to equilibrium is exponentially fast. However for small next-nearest neighbor hopping amplitudes one has a rapid relaxation towards the manifold of quasistationary states and slow relaxation to the final equilibrium state.
Aggarwal, Ankush
2017-08-01
Motivated by the well-known result that the stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that, as a consequence of the exponential function, there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space which significantly improve the convergence of parameter estimation and robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minimum. Based on this new insight, we also propose a transformed parameter space which will allow for rational parameter comparison and avoid misleading conclusions regarding soft tissue mechanics.
Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.
Jin, Ick Hoon; Yuan, Ying; Liang, Faming
2013-10-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint because of their intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as an MCMC extension of the exchange algorithm; it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, a molecule synthetic network, and the dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed the harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
Almost periodic solutions to difference equations
NASA Technical Reports Server (NTRS)
Bayliss, A.
1975-01-01
The theory of Massera and Schaeffer relating the existence of unique almost periodic solutions of an inhomogeneous linear equation to an exponential dichotomy for the homogeneous equation was completely extended to discretizations by a strongly stable difference scheme. In addition it is shown that the almost periodic sequence solution will converge to the differential equation solution. The preceding theory was applied to a class of exponentially stable partial differential equations to which one can apply the Hille-Yoshida theorem. It is possible to prove the existence of unique almost periodic solutions of the inhomogeneous equation (which can be approximated by almost periodic sequences) which are the solutions to appropriate discretizations. Two methods of discretizations are discussed: the strongly stable scheme and the Lax-Wendroff scheme.
Convergence of the Light-Front Coupled-Cluster Method in Scalar Yukawa Theory
NASA Astrophysics Data System (ADS)
Usselman, Austin
We use Fock-state expansions and the Light-Front Coupled-Cluster (LFCC) method to study mass eigenvalue problems in quantum field theory. Specifically, we study convergence of the method in scalar Yukawa theory. In this theory, a single charged particle is surrounded by a cloud of neutral particles. The charged particle can create or annihilate neutral particles, causing the n-particle state to depend on the (n + 1) and (n - 1)-particle states. The Fock-state expansion leads to an infinite set of coupled equations, so truncation is required. The wave functions for the particle states are expanded in a basis of symmetric polynomials, and a generalized eigenvalue problem is solved for the mass eigenvalue. The mass eigenvalue problem is solved for multiple values of the coupling strength while the number of particle states and the polynomial basis order are increased, and convergence of the mass eigenvalue solutions is obtained. Three mass ratios between the charged particle and the neutral particles were studied: a massive charged particle, equal masses, and massive neutral particles. Relative probabilities between states can also be explored for a more detailed understanding of the convergence with respect to the number of Fock sectors. The reliance on higher-order particle states depended on the mass of the charged particle: the larger that mass, the more the system depended on higher-order particle states. The LFCC method solves the same mass eigenvalue problem using an exponential operator, which can instead be truncated to form a finite system of equations solvable with the built-in solvers of most computational environments, such as MATLAB and Mathematica. The first approximation in the LFCC method allows only one particle to be created by the new operator and proved not powerful enough to match the Fock-state expansion. The second-order approximation allows one or two particles to be created by the new operator and converged to the Fock-state expansion results. This shows the LFCC method to be a reliable replacement for the Fock-state expansion in solving quantum field theory problems.
NASA Astrophysics Data System (ADS)
Pradas, Marc; Pumir, Alain; Huber, Greg; Wilkinson, Michael
2017-07-01
Chaos is widely understood as being a consequence of sensitive dependence upon initial conditions. This is the result of an instability in phase space, which separates trajectories exponentially. Here, we demonstrate that this criterion should be refined. Despite their overall intrinsic instability, trajectories may be very strongly convergent in phase space over extremely long periods, as revealed by our investigation of a simple chaotic system (a realistic model for small bodies in a turbulent flow). We establish that this strong convergence is a multi-facetted phenomenon, in which the clustering is intense, widespread and balanced by lacunarity of other regions. Power laws, indicative of scale-free features, characterize the distribution of particles in the system. We use large-deviation and extreme-value statistics to explain the effect. Our results show that the interpretation of the ‘butterfly effect’ needs to be carefully qualified. We argue that the combination of mixing and clustering processes makes our specific model relevant to understanding the evolution of simple organisms. Lastly, this notion of convergent chaos, which implies the existence of conditions for which uncertainties are unexpectedly small, may also be relevant to the valuation of insurance and futures contracts.
A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.
1981-06-01
Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on...these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when...assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that
NASA Technical Reports Server (NTRS)
Hu, Fang Q.
1994-01-01
It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in closed form but in infinite series which converge slowly for high frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves the spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using the FFT are discussed. Moreover, the boundary integral equations of combined single- and double-layer representation are used in the present paper. This ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for the Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.
NASA Astrophysics Data System (ADS)
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.
2017-11-01
There are two types of analytical solutions of temperature/concentration in, and heat/mass transfer through the boundaries of, regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1-D isotropic blocks (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
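The complementary convergence of the two series families has a classical prototype in the Jacobi theta identity, where the exponential-series side converges rapidly for large dimensionless time and the transformed (image/error-function) side for small time. A minimal numerical check, purely illustrative and not the paper's combined block solutions:

```python
import math

# Jacobi theta identity: S(t) = S(1/t) / sqrt(t), where
# S(t) = sum_n exp(-pi * n**2 * t). The left-hand series is the
# "exponential" branch (few terms needed for large t); the right-hand
# side plays the role of the image/error-function branch (few terms
# needed for small t). Illustrative only.
def S(t, N=60):
    return math.fsum(math.exp(-math.pi * n * n * t)
                     for n in range(-N, N + 1))

t = 0.05
lhs = S(t)                       # needs a few dozen terms at small t
rhs = S(1.0 / t) / math.sqrt(t)  # a single term already dominates
```

Switching between the two representations at an intermediate time, as the paper does with t_d0, keeps the number of retained terms uniformly small across all dimensionless times.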
Global Well-posedness of the Spatially Homogeneous Kolmogorov-Vicsek Model as a Gradient Flow
NASA Astrophysics Data System (ADS)
Figalli, Alessio; Kang, Moon-Jin; Morales, Javier
2018-03-01
We consider the so-called spatially homogeneous Kolmogorov-Vicsek model, a nonlinear Fokker-Planck equation for self-driven stochastic particles with orientation interaction, under the assumption of spatial homogeneity. We prove the global existence and uniqueness of weak solutions to the equation. We also show that weak solutions converge exponentially to a steady state, which has the form of the Fisher-von Mises distribution.
NASA Astrophysics Data System (ADS)
Dong, Siqun; Zhao, Dianli
2018-01-01
This paper studies the subcritical, near-critical and supercritical asymptotic behavior of a reversible random coagulation-fragmentation polymerization process as N → ∞, with the number of distinct ways to form a k-cluster from k units satisfying f(k) = (1 + o(1)) c r^{-k} e^{-k^α} k^{-β}, where 0 < α < 1 and β > 0. When the cluster size is small, its distribution is proved to converge to a Gaussian distribution. For medium clusters, the distribution converges to a Poisson distribution in the supercritical stage, and no large clusters exist in this stage. Furthermore, in the subcritical stage the largest polymer in a system of size N has length of order ln N when α ⩽ 1/2.
NASA Technical Reports Server (NTRS)
Isaacson, D.; Isaacson, E. L.; Paes-Leme, P. J.; Marchesin, D.
1981-01-01
Several methods for computing many eigenvalues and eigenfunctions of a single anharmonic oscillator Schroedinger operator whose potential may have one or two minima are described. One of the methods requires the solution of an ill-conditioned generalized eigenvalue problem. This method has the virtue of using a bounded amount of work to achieve a given accuracy in both the single and double well regions. Rigorous bounds are given, and it is proved that the approximations converge faster than any inverse power of the size of the matrices needed to compute them. The results of computations for the g:phi(4):1 theory are presented. These results indicate that the methods actually converge exponentially fast.
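A much simpler approach than the paper's basis-set methods still conveys the computation: discretize the anharmonic oscillator Hamiltonian on a grid and extract eigenvalues of the resulting symmetric tridiagonal matrix by Sturm-sequence bisection. This is a hedged, textbook sketch; the grid size, domain, and potential H = −d²/dx² + x² + g·x⁴ are my choices, not the paper's formulation.

```python
def tridiag_eigenvalue(diag, off, k, lo, hi, tol=1e-10):
    # k-th smallest eigenvalue (k = 0, 1, ...) of a symmetric tridiagonal
    # matrix, by bisection on the Sturm-sequence negative-pivot count.
    def count_below(x):
        cnt, d = 0, 1.0
        for i in range(len(diag)):
            d = diag[i] - x - (off[i - 1] ** 2 / d if i > 0 else 0.0)
            if d == 0.0:
                d = -1e-300  # guard against an exact zero pivot
            if d < 0.0:
                cnt += 1
        return cnt
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(mid) > k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def anharmonic_levels(g, n_levels, n_grid=1200, x_max=8.0):
    # Central-difference discretization of H = -d^2/dx^2 + x^2 + g*x^4 with
    # Dirichlet boundaries: a symmetric tridiagonal matrix.
    h = 2.0 * x_max / (n_grid + 1)
    xs = [-x_max + (i + 1) * h for i in range(n_grid)]
    diag = [2.0 / h ** 2 + x * x + g * x ** 4 for x in xs]
    off = [-1.0 / h ** 2] * (n_grid - 1)
    return [tridiag_eigenvalue(diag, off, k, 0.0, 50.0) for k in range(n_levels)]

levels_h = anharmonic_levels(0.0, 2)  # harmonic limit: levels near 1 and 3
```

For g = 0 the exact levels are 2n + 1; for g = 1 the known ground-state energy of the quartic oscillator is about 1.3924, which the grid method reproduces to a few digits.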
Reformulation of Possio's kernel with application to unsteady wind tunnel interference
NASA Technical Reports Server (NTRS)
Fromme, J. A.; Golberg, M. A.
1980-01-01
An efficient method for computing the Possio kernel has remained elusive up to the present time. In this paper the Possio kernel is reformulated so that it can be computed accurately using existing high-precision numerical quadrature techniques. Convergence to the correct values is demonstrated and optimization of the integration procedures is discussed. Since more general kernels, such as those associated with unsteady flows in ventilated wind tunnels, are analytic perturbations of the Possio free-air kernel, a more accurate evaluation of their collocation matrices results, with an exponential improvement in convergence. An application to predicting the frequency response of an airfoil-trailing-edge control system in a wind tunnel, compared with that in free air, is given, showing strong interference effects.
Network architecture in a converged optical + IP network
NASA Astrophysics Data System (ADS)
Wakim, Walid; Zottmann, Harald
2012-01-01
As demands on provider networks continue to grow at exponential rates, providers must evaluate how to continue growing the network while increasing service velocity and enhancing resiliency, all while decreasing the total cost of ownership (TCO). The bandwidth growth that networks are experiencing takes the form of packet-based multimedia services such as video, video conferencing, and gaming, mixed with over-the-top (OTT) content providers such as Netflix. Combined with customers' expectation that best effort is no longer enough, this forces providers to analyze how to get more out of the network at less cost. In this paper we discuss changes in the network that are driving a tighter integration between the packet and optical layers, and how to improve on today's multi-layer inefficiencies to drive down network TCO and provide a fully integrated, dynamic network that will decrease time to revenue.
NASA Astrophysics Data System (ADS)
Farhat, Aseel; Lunasin, Evelyn; Titi, Edriss S.
2017-06-01
In this paper we propose a continuous data assimilation (downscaling) algorithm for a two-dimensional Bénard convection problem. Specifically we consider the two-dimensional Boussinesq system of a layer of incompressible fluid between two solid horizontal walls, with no-normal flow and stress-free boundary conditions on the walls, and the fluid is heated from the bottom and cooled from the top. In this algorithm, we incorporate the observables as a feedback (nudging) term in the evolution equation of the horizontal velocity. We show that under an appropriate choice of the nudging parameter and the size of the spatial coarse mesh observables, and under the assumption that the observed data are error free, the solution of the proposed algorithm converges at an exponential rate, asymptotically in time, to the unique exact unknown reference solution of the original system, associated with the observed data on the horizontal component of the velocity.
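The nudging mechanism, and its exponential convergence, can be sketched on a far simpler system than the 2-D Boussinesq equations: a scalar linear ODE whose observer is fed the observed state through a feedback term. This is a hedged toy model; the coefficients a and μ are my choices, and the claim is only the generic one that the error contracts when the nudging parameter exceeds the instability rate.

```python
import math

def nudged_error(a=1.0, mu=5.0, e0=1.0, t_end=2.0, dt=1e-4):
    # Reference: dx/dt = a*x (unstable for a > 0).
    # Observer:  dy/dt = a*y - mu*(y - x), nudged toward the observed x.
    # The error e = y - x obeys de/dt = (a - mu)*e, so it decays like
    # exp((a - mu)*t) whenever mu > a. Forward Euler integration:
    e = e0
    for _ in range(int(t_end / dt)):
        e += dt * (a - mu) * e
    return e

err = nudged_error()  # with a = 1, mu = 5, t = 2: about exp(-8)
```

The same structure appears in the algorithm above: the feedback term penalizes disagreement with the coarse-mesh observations, and a sufficiently large nudging parameter yields exponential convergence in time.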
Game Design and Analysis for Price-Based Demand Response: An Aggregate Game Approach.
Ye, Maojiao; Hu, Guoqiang
2016-02-18
In this paper, an aggregate game is adopted for the modeling and analysis of energy consumption control in the smart grid. Since the electricity users' cost functions depend on the aggregate energy consumption, which is unknown to the end users, an average consensus protocol is employed to estimate it. Through neighboring communication among the users about their estimates of the aggregate energy consumption, Nash seeking strategies are developed. Convergence properties are explored for the proposed Nash seeking strategies. For energy consumption games that may have multiple isolated Nash equilibria, a local convergence result is derived. The convergence is established by utilizing singular perturbation analysis and Lyapunov stability analysis. Energy consumption control for a network of heating, ventilation, and air conditioning systems is investigated. Based on the uniqueness of the Nash equilibrium, it is shown that the players' actions converge non-locally to a neighborhood of the unique Nash equilibrium. More specifically, if the unique Nash equilibrium is an inner Nash equilibrium, an exponential convergence result is obtained. An energy consumption game with stubborn players is also studied. In this case, the actions of the rational players can be driven to a neighborhood of their best response strategies by using the proposed method. Numerical examples are presented to verify the effectiveness of the proposed methods.
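The average consensus step can be sketched independently of the game: each user repeatedly mixes its estimate with its neighbors', and on a connected undirected graph every estimate converges to the network-wide average. This is a hedged generic sketch, not the authors' exact protocol; the ring topology, step size, and demand values are my choices.

```python
def consensus(values, neighbors, eps=0.1, iters=2000):
    # Each node repeatedly moves toward its neighbors' values; with
    # symmetric weights on a connected undirected graph the sum (hence the
    # average) is preserved and every estimate converges to that average.
    x = list(values)
    for _ in range(iters):
        x = [x[i] + eps * sum(x[j] - x[i] for j in neighbors[i])
             for i in range(len(x))]
    return x

# 5 users on a ring; each ends up holding the network-wide average demand,
# from which the aggregate consumption follows by multiplying by n.
ring = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
estimates = consensus([3.0, 7.0, 1.0, 9.0, 5.0], ring)
```

With demands summing to 25 over 5 users, all estimates approach 5.0, so every player can reconstruct the unknown aggregate it needs for its cost function.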
A Stochastic Super-Exponential Growth Model for Population Dynamics
NASA Astrophysics Data System (ADS)
Avila, P.; Rekker, A.
2010-11-01
A super-exponential growth model with environmental noise has been studied analytically. A super-exponential growth rate is a property of dynamical systems exhibiting endogenous nonlinear positive feedback, i.e., of self-reinforcing systems. Environmental noise acts on the growth rate multiplicatively and is assumed to be Gaussian white noise in the Stratonovich interpretation. An analysis of the stochastic super-exponential growth model, with derivations of exact analytical formulae for the conditional probability density and the mean value of the population abundance, is presented. Interpretations and various applications of the results are discussed.
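The deterministic skeleton of such a model makes the "self-reinforcing" feedback concrete: a growth exponent above one produces a finite-time singularity. This hedged sketch omits the multiplicative noise entirely and uses a standard power-law feedback form of my choosing, not necessarily the authors' exact model.

```python
def super_exponential(n0=1.0, r=1.0, delta=1.0, t_end=0.5, dt=1e-5):
    # Deterministic skeleton dN/dt = r * N**(1 + delta): positive feedback
    # makes the growth rate itself grow, giving a finite-time singularity at
    # t* = 1 / (delta * r * n0**delta); the closed form is
    # N(t) = n0 * (1 - delta * r * n0**delta * t)**(-1/delta).
    n = n0
    for _ in range(int(t_end / dt)):  # forward Euler integration
        n += dt * r * n ** (1 + delta)
    return n

n_half = super_exponential()  # with n0 = r = delta = 1: N(t) = 1/(1-t)
```

With n0 = r = delta = 1 the closed form gives N(0.5) = 2, which the Euler integration reproduces closely; the stochastic analysis in the abstract dresses this skeleton with Stratonovich noise on r.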
NASA Astrophysics Data System (ADS)
Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen
2017-05-01
Actual field pumping tests often involve variable pumping rates, which cannot be handled by the classical constant-rate or constant-head test models and often require a convolution process to interpret the test data. In this study, we propose a semi-analytical model for a pumping rate that decreases exponentially, starting at a certain (higher) rate and eventually stabilizing at a certain (lower) rate, for cases with or without wellbore storage. A striking new feature of the pumping test with an exponentially decayed rate is that the drawdowns will decrease over a certain period of time during the intermediate pumping stage, which has never been seen in constant-rate or constant-head pumping tests. It was found that the drawdown-time curve associated with an exponentially decayed pumping rate function is bounded by the two asymptotic curves of constant-rate tests with rates equal to the starting and stabilizing rates, respectively. The wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on such characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters by using a genetic algorithm.
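The bounding property follows from superposition alone: if drawdown is a convolution of the rate history with a positive response kernel, a rate that stays between its starting and stabilizing values yields a drawdown between the two constant-rate curves. The sketch below is hedged: the kernel 1/(1+x) is a hypothetical stand-in for the aquifer response, not the paper's semi-analytical model, and all parameter values are illustrative.

```python
import math

def pumping_rate(t, q0=2.0, qs=1.0, lam=0.5):
    # Exponentially decaying rate: starts at q0, stabilizes at qs.
    return qs + (q0 - qs) * math.exp(-lam * t)

def drawdown(t, rate, kernel, n=4000):
    # Discrete convolution s(t) = integral of rate(tau) * kernel(t - tau):
    # superposition of past pumping through a positive response kernel
    # (a generic stand-in, NOT the paper's aquifer solution).
    dt = t / n
    return sum(rate(i * dt) * kernel(t - i * dt) * dt for i in range(n))

kernel = lambda x: 1.0 / (1.0 + x)  # hypothetical positive response kernel
t = 10.0
s_var = drawdown(t, pumping_rate, kernel)
s_hi = drawdown(t, lambda u: 2.0, kernel)  # constant rate q0
s_lo = drawdown(t, lambda u: 1.0, kernel)  # constant rate qs
```

Because qs ≤ rate(τ) ≤ q0 everywhere and the kernel is positive, s_var must lie strictly between the two constant-rate curves, which is the bounding behavior described in the abstract.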
A pheromone-rate-based analysis on the convergence time of ACO algorithm.
Huang, Han; Wu, Chun-Guo; Hao, Zhi-Feng
2009-08-01
Ant colony optimization (ACO) has been widely applied to solve combinatorial optimization problems in recent years. There are few studies, however, on its convergence time, which reflects how many iterations an ACO algorithm needs to converge to the optimal solution. Based on the absorbing Markov chain model, we analyze the ACO convergence time in this paper. First, we present a general result for the estimation of convergence time to reveal the relationship between convergence time and pheromone rate. This general result is then extended to a two-step analysis of the convergence time, which includes the following: 1) the iteration time that the pheromone rate spends on reaching the objective value and 2) the convergence time that is calculated with the objective pheromone rate in expectation. Furthermore, four brief ACO algorithms are investigated by using the proposed theoretical results as case studies. Finally, the conclusions of the case studies, that the pheromone rate and its deviation determine the expected convergence time, are numerically verified with the experimental results of four one-ant ACO algorithms and four ten-ant ACO algorithms.
Recurrence time statistics for finite size intervals
NASA Astrophysics Data System (ADS)
Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.
2004-12-01
We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times, it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to Poissonian statistics when the width of the interval goes to zero. However, we caution that special attention to the size of the interval is required in order to guarantee that the short-time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
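Such recurrence statistics are easy to reproduce numerically. The hedged sketch below uses the fully chaotic logistic map, which is my choice of system (not one studied in the paper), and checks Kac's lemma: the mean recurrence time to an interval equals the reciprocal of its invariant measure.

```python
def recurrence_times(interval=(0.30, 0.31), n_iter=1_000_000, x0=0.1234):
    # Fully chaotic logistic map x -> 4x(1-x); collect the times between
    # successive visits of the orbit to a small interval.
    lo, hi = interval
    x, last_hit, times = x0, None, []
    for t in range(n_iter):
        x = 4.0 * x * (1.0 - x)
        if lo <= x < hi:
            if last_hit is not None:
                times.append(t - last_hit)
            last_hit = t
    return times

times = recurrence_times()
mean_rt = sum(times) / len(times)
# Kac's lemma: mean recurrence time = 1/mu(I), where the invariant measure
# is dx / (pi * sqrt(x(1-x))); for I = [0.30, 0.31] that gives about 144.
```

A histogram of `times` shows the exponential tail described in the abstract, with deviations at the shortest times where periodic-orbit memory effects act.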
Gradient-based stochastic estimation of the density matrix
NASA Astrophysics Data System (ADS)
Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton
2018-03-01
Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density matrix elements f(H)_{ij} decay rapidly with the distance r_{ij} between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^{-(d+2)/(2d)}, where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.
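The basic probing idea can be shown with the classic Hutchinson-style estimator (a hedged stand-in; this is not the paper's gradient-based scheme): random ±1 vectors recover a matrix's diagonal because off-diagonal contributions cancel in expectation.

```python
import random

def probe_diagonal(A, n_samples=20000, seed=7):
    # Stochastic probing: for random Rademacher vectors v, the average of
    # v[i] * (A v)[i] converges to A[i][i]; off-diagonal terms cancel in
    # expectation and their residual shrinks like 1/sqrt(n_samples).
    rng = random.Random(seed)
    n = len(A)
    est = [0.0] * n
    for _ in range(n_samples):
        v = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        for i in range(n):
            av = sum(A[i][j] * v[j] for j in range(n))
            est[i] += v[i] * av
    return [e / n_samples for e in est]

# Toy matrix: known diagonal 1..8 with small off-diagonal couplings,
# mimicking the rapid decay of density matrix elements with distance.
n = 8
A = [[(i + 1.0) if i == j else 0.1 for j in range(n)] for i in range(n)]
est = probe_diagonal(A)
```

The statistical error is set by the off-diagonal magnitudes, which is why rapidly decaying (insulating or finite-temperature) density matrices are the favorable case noted in the abstract.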
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, Sam R.; Barack, Leor
2011-01-15
To model the radiative evolution of extreme mass-ratio binary inspirals (a key target of the LISA mission), the community needs efficient methods for computation of the gravitational self-force (SF) on the Kerr spacetime. Here we further develop a practical 'm-mode regularization' scheme for SF calculations, and give the details of a first implementation. The key steps in the method are (i) removal of a singular part of the perturbation field with a suitable 'puncture' to leave a sufficiently regular residual within a finite worldtube surrounding the particle's worldline, (ii) decomposition in azimuthal (m) modes, (iii) numerical evolution of the m modes in 2+1D with a finite-difference scheme, and (iv) reconstruction of the SF from the mode sum. The method relies on a judicious choice of puncture, based on the Detweiler-Whiting decomposition. We give a working definition for the 'order' of the puncture, and show how it determines the convergence rate of the m-mode sum. The dissipative piece of the SF displays an exponentially convergent mode sum, while the m-mode sum for the conservative piece converges with a power law. In the latter case, the individual modal contributions fall off at large m as m^{-n} for even n and as m^{-n+1} for odd n, where n is the puncture order. We describe an m-mode implementation with a 4th-order puncture to compute the scalar-field SF along circular geodesics on Schwarzschild. In a forthcoming companion paper we extend the calculation to the Kerr spacetime.
Beyer, Hans-Georg
2014-01-01
The convergence behaviors of so-called natural evolution strategies (NES) and of the information-geometric optimization (IGO) approach are considered. After a review of the NES/IGO ideas, which are based on information geometry, the implications of this philosophy w.r.t. optimization dynamics are investigated considering the optimization performance on the class of positive quadratic objective functions (the ellipsoid model). Exact differential equations describing the approach to the optimizer are derived and solved. It is rigorously shown that the original NES philosophy of optimizing the expected value of the objective functions leads to very slow (i.e., sublinear) convergence toward the optimizer. This is the real reason why state-of-the-art implementations of IGO algorithms optimize the expected value of transformed objective functions, for example, by utility functions based on ranking. It is shown that these utility functions are localized fitness functions that change during the IGO flow. The governing differential equations describing this flow are derived. In the case of convergence, the solutions to these equations exhibit an exponentially fast approach to the optimizer (i.e., linear convergence order). Furthermore, it is proven that the IGO philosophy leads to an adaptation of the covariance matrix that equals, in the asymptotic limit, up to a scalar factor, the inverse of the Hessian of the objective function considered.
NASA Technical Reports Server (NTRS)
Cogley, A. C.; Borucki, W. J.
1976-01-01
When incorporating formulations of instantaneous solar heating or photolytic rates as functions of altitude and sun angle into long range forecasting models, it may be desirable to replace the time integrals by daily average rates that are simple functions of latitude and season. This replacement is accomplished by approximating the integral over the solar day by a pure exponential. This gives a daily average rate as a multiplication factor times the instantaneous rate evaluated at an appropriate sun angle. The accuracy of the exponential approximation is investigated by a sample calculation using an instantaneous ozone heating formulation available in the literature.
A Hybrid Algorithm for Non-negative Matrix Factorization Based on Symmetric Information Divergence
Devarajan, Karthik; Ebrahimi, Nader; Soofi, Ehsan
2017-01-01
The objective of this paper is to provide a hybrid algorithm for non-negative matrix factorization based on a symmetric version of Kullback-Leibler divergence, known as intrinsic information. The convergence of the proposed algorithm is shown for several members of the exponential family such as the Gaussian, Poisson, gamma and inverse Gaussian models. The speed of this algorithm is examined and its usefulness is illustrated through some applied problems. PMID:28868206
Sparse Recovery via Differential Inclusions
2014-07-01
In a one-dimensional example the dynamics admit the closed-form solution (1.11): β_t = 0 if t < 1/y, and β_t = y(1 − e^{−κ(t−1/y)}) otherwise, which converges to the unbiased Bregman ISS estimator exponentially fast. Since the support set S is not given, the following two properties are used to evaluate the performance of an estimator β̂: 1. model selection.
Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality
NASA Technical Reports Server (NTRS)
Acikmese, Ahmet Behcet; Corless, Martin
2004-01-01
We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These results unify earlier observer results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error converges exponentially to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. The results are illustrated by application to a simple model of an underwater vehicle.
A Nonequilibrium Rate Formula for Collective Motions of Complex Molecular Systems
NASA Astrophysics Data System (ADS)
Yanao, Tomohiro; Koon, Wang Sang; Marsden, Jerrold E.
2010-09-01
We propose a compact reaction rate formula that accounts for a non-equilibrium distribution of residence times of complex molecules, based on a detailed study of the coarse-grained phase space of a reaction coordinate. We take the structural transition dynamics of a six-atom Morse cluster between two isomers as a prototype of multi-dimensional molecular reactions. The residence-time distribution of one of the isomers shows an exponential decay, while that of the other isomer deviates markedly from the exponential form and has multiple peaks. Our rate formula explains such equilibrium and non-equilibrium distributions of residence times in terms of the rates of diffusion of energy and of the phase of the oscillations of the reaction coordinate. Rapid diffusion of energy and phase generally gives rise to an exponential decay of the residence time distribution, while slow diffusion gives rise to a non-exponential decay with multiple peaks. We finally make a conjecture about a general relationship between the rates of the diffusions and the symmetry of molecular mass distributions.
NASA Astrophysics Data System (ADS)
Fan, Tian-E.; Shao, Gui-Fang; Ji, Qing-Shuang; Zheng, Ji-Wen; Liu, Tun-dong; Wen, Yu-Hua
2016-11-01
Theoretically, determining the structure of a cluster amounts to searching for the global minimum on its potential energy surface. This global minimization problem is often nondeterministic-polynomial-time (NP) hard, and the number of local minima grows exponentially with the cluster size. In this article, a multi-population multi-strategy differential evolution algorithm is proposed to search for the globally stable structures of Fe and Cr nanoclusters. The algorithm combines a multi-population differential evolution with an elite pool scheme to keep the diversity of the solutions and avoid premature trapping in local optima. Moreover, multiple strategies, such as a growing method in the initialization and three differential strategies in the mutation, are introduced to improve the convergence speed and lower the computational cost. The accuracy and effectiveness of our algorithm have been verified by comparing the results for Fe clusters with the Cambridge Cluster Database. Meanwhile, the performance of our algorithm has been analyzed by comparing its convergence rate and energy evaluations with those of the classical DE algorithm, considering in turn the multi-population scheme, the multi-strategy mutation, and the growing method in the initialization. Furthermore, the structural growth pattern of Cr clusters has been predicted by this algorithm. The results show that the lowest-energy structures of Cr clusters contain many icosahedra, and the number of icosahedral rings rises with increasing size.
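The classical DE baseline mentioned above can be sketched compactly. This is a hedged illustration of the standard DE/rand/1/bin scheme on the Rastrigin test surface, a stand-in for a cluster potential-energy surface; it is not the authors' multi-population multi-strategy variant, and all parameters are conventional defaults of my choosing.

```python
import random, math

def differential_evolution(f, bounds, np_=30, f_w=0.5, cr=0.9,
                           gens=300, seed=1):
    # Classic DE/rand/1/bin: mutate with a scaled difference of two random
    # members, binomial crossover with a forced coordinate, greedy selection.
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([k for k in range(np_) if k != i], 3)
            jr = rng.randrange(dim)
            trial = [pop[a][j] + f_w * (pop[b][j] - pop[c][j])
                     if (rng.random() < cr or j == jr) else pop[i][j]
                     for j in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]

def rastrigin(x):
    # Highly multimodal test surface whose number of local minima grows
    # rapidly with dimension; global minimum 0 at the origin.
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

x_best, f_best = differential_evolution(rastrigin, [(-5.12, 5.12)] * 2)
```

Even this plain DE escapes the many local minima in two dimensions; the multi-population and multi-strategy additions described above aim at doing the same reliably on much larger, rugged cluster landscapes.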
A method to calculate synthetic waveforms in stratified VTI media
NASA Astrophysics Data System (ADS)
Wang, W.; Wen, L.
2012-12-01
Transverse isotropy with a vertical axis of symmetry (VTI) may be an important material property in the Earth's interior. In this presentation, we develop a method to calculate synthetic seismograms for wave propagation in stratified VTI media. Our method is based on the generalized reflection and transmission method (GRTM) (Luco & Apsel 1983), which we extend to transversely isotropic (VTI) media. GRTM remains stable in high-frequency calculations because it explicitly excludes the exponentially growing terms in the propagation matrix, in contrast to the Haskell matrix method (Haskell 1964), which is limited to low-frequency computation. In the implementation, we also improve GRTM in two aspects. 1) We apply the Shanks transformation (Bender & Orszag 1999) to improve the rate of convergence. This improvement is especially important when the depths of the source and receiver are close. 2) We adopt a self-adaptive Simpson integration method (Chen & Zhang 2001) in the discrete wavenumber integration so that the integration can still be carried out efficiently at large epicentral distances. Because the calculation is independent for each frequency, the program can also be effectively implemented in parallel computing. Our method provides a powerful tool for synthesizing broadband seismograms in VTI media over a large range of epicentral distances. We will present examples of using the method to study possible transverse isotropy in the upper mantle and the lowermost mantle.
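The Shanks transformation cited above (Bender & Orszag 1999) is simple to state: it eliminates the dominant geometric transient of a slowly converging sequence. A hedged sketch on the Leibniz series for π/4, my example rather than a seismic wavenumber sum:

```python
import math

def shanks(seq):
    # One Shanks transformation sweep: S'_n = (S_{n+1} S_{n-1} - S_n^2) /
    # (S_{n+1} + S_{n-1} - 2 S_n), which cancels the leading transient.
    return [(seq[i + 1] * seq[i - 1] - seq[i] ** 2)
            / (seq[i + 1] + seq[i - 1] - 2.0 * seq[i])
            for i in range(1, len(seq) - 1)]

# Partial sums of the slowly converging Leibniz series for pi/4.
partial, s = [], 0.0
for k in range(12):
    s += (-1) ** k / (2.0 * k + 1.0)
    partial.append(s)

accelerated = shanks(shanks(partial))  # two iterated sweeps
err_raw = abs(partial[-1] - math.pi / 4.0)
err_acc = abs(accelerated[-1] - math.pi / 4.0)
```

Two sweeps reduce the error by several orders of magnitude from a dozen raw terms, which is the payoff GRTM gains when source and receiver depths are close and the wavenumber series converges slowly.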
Convergence analysis of two-node CMFD method for two-group neutron diffusion eigenvalue problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, Yongjin; Park, Jinsu; Lee, Hyun Chul
2015-12-01
In this paper, the nonlinear coarse-mesh finite difference method with two-node local problem (CMFD2N) is proven to be unconditionally stable for neutron diffusion eigenvalue problems. The explicit current correction factor (CCF) is derived based on the two-node analytic nodal method (ANM2N), and a Fourier stability analysis is applied to the linearized algorithm. It is shown that the analytic convergence rate obtained by the Fourier analysis compares very well with the numerically measured convergence rate. It is also shown that the theoretical convergence rate is governed only by the converged second-harmonic buckling and the mesh size. It is also noted that the convergence rate of the CCF of the CMFD2N algorithm depends on the mesh size, but not on the total problem size. This is contrary to expectation for an eigenvalue problem. The novel points of this paper are the analytical derivation of the convergence rate of the CMFD2N algorithm for the eigenvalue problem, and the convergence analysis based on the analytic derivations.
Sreenivasan, Vidhyapriya; Bobier, William R
2014-07-01
Convergence insufficiency (CI) is a developmental visual anomaly defined clinically by a reduced near point of convergence and a reduced capacity to view through base-out prisms (fusional convergence), coupled with asthenopic symptoms, typically blur and diplopia. Experimental studies show reduced vergence parameters and tonic adaptation. Based upon current models of accommodation and vergence, we hypothesize that the reduced vergence adaptation in CI leads to excessive amounts of convergence accommodation (CA). Eleven CI participants (mean age = 17.4±2.3 years) with a reduced capacity to view through increasing magnitudes of base-out (BO) prisms (mean fusional convergence at 40 cm = 12±0.9Δ) were recruited. Testing followed our previous experimental design for (n = 11) binocularly normal adults. Binocular fixation of a difference-of-Gaussians (DoG) target (0.2 cpd) elicited CA responses during vergence adaptation to a 12Δ BO prism. Vergence and CA responses were obtained at 3 min intervals over a 15 min period, and time courses were quantified using exponential decay functions. Results were compared to previously published data on eleven binocular normals. Eight participants completed the study. CIs showed a significantly reduced magnitude of vergence adaptation (CI: 2.9Δ vs. normals: 6.6Δ; p = 0.01) and CA reduction (CI = 0.21 D, normals = 0.55 D; p = 0.03). However, the decay time constants for adaptation and CA responses were not significantly different. CA changes were not confounded by changes in tonic accommodation (change in TA = 0.01±0.2 D; p = 0.8). The reduced magnitude of vergence adaptation found in CI patients, resulting in higher levels of CA, may potentially explain their clinical findings of reduced positive fusional vergence (PFV) and the common symptom of blur. Copyright © 2014 Elsevier B.V. All rights reserved.
Laws, Holly B.; Constantino, Michael J.; Sayer, Aline G.; Klein, Daniel N.; Kocsis, James H.; Manber, Rachel; Markowitz, John C.; Rothbaum, Barbara O.; Steidtmann, Dana; Thase, Michael E.; Arnow, Bruce A.
2016-01-01
Objective This study tested whether discrepancy between patients' and therapists' ratings of the therapeutic alliance, as well as convergence in their alliance ratings over time, predicted outcome in chronic depression treatment. Method Data derived from a controlled trial of partial or non-responders to open-label pharmacotherapy subsequently randomized to 12 weeks of algorithm-driven pharmacotherapy alone or pharmacotherapy plus psychotherapy (Kocsis et al., 2009). The current study focused on the psychotherapy conditions (N = 357). Dyadic multilevel modeling was used to assess alliance discrepancy and alliance convergence over time as predictors of two depression measures: one pharmacotherapist-rated (Quick Inventory of Depressive Symptoms-Clinician; QIDS-C), the other blind interviewer-rated (Hamilton Rating Scale for Depression; HAMD). Results Patients' and therapists' alliance ratings became more similar, or convergent, over the course of psychotherapy. Higher alliance convergence was associated with greater reductions in QIDS-C depression across psychotherapy. Alliance convergence was not significantly associated with declines in HAMD depression; however, greater alliance convergence was related to lower HAMD scores at 3-month follow-up. Conclusions The results partially support the hypothesis that increasing patient-therapist consensus on alliance quality during psychotherapy may improve treatment and longer-term outcomes. PMID:26829714
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.
2002-01-01
A previously developed life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant-stress-rate and preload testing at ambient and elevated temperatures. The data fit to the relation of strength versus the log of the stress rate was very reasonable for most of the materials. Also, the preloading technique was determined to be equally applicable to the case of slow-crack-growth (SCG) parameter n greater than 30 for both the power-law and exponential formulations. The major limitation of the exponential crack-velocity formulation, however, is that the inert strength of a material must be known a priori to evaluate the important SCG parameter n, a significant drawback compared with the conventional power-law crack-velocity formulation.
Improved numerical methods for infinite spin chains with long-range interactions
NASA Astrophysics Data System (ADS)
Nebendahl, V.; Dür, W.
2013-02-01
We present several improvements of the infinite matrix product state (iMPS) algorithm for finding ground states of one-dimensional quantum systems with long-range interactions. As a main ingredient, we introduce the superposed multioptimization method, which allows an efficient optimization of exponentially many MPS of different lengths at different sites all in one step. In this way, the algorithm becomes protected against position-dependent effects such as those caused by spontaneously broken translational invariance, which so far have been a major obstacle to convergence of the iMPS algorithm when no prior knowledge of the system's translational symmetry was available. Further, we investigate more general methods to speed up calculations and improve convergence, some of which may be of interest in a much broader context. As a more special problem, we also look into translationally invariant states close to an invariance-breaking phase transition and show how to avoid convergence into wrong local minima for such systems. Finally, we apply these methods to polar bosons with long-range interactions. We calculate several detailed Devil's staircases with the corresponding phase diagrams and investigate some supersolid properties.
Genetic attack on neural cryptography.
Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido
2006-03-01
Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.
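The synchronization dynamics described above can be sketched with a minimal tree parity machine simulation. This is an illustrative toy, not the exact setup of the paper: the parameters (K = 3 hidden units, N = 10 inputs per unit, synaptic depth L = 3) and the plain Hebbian update are assumed choices.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, L = 3, 10, 3  # hidden units, inputs per unit, synaptic depth (illustrative)

# two parties A and B with independent random integer weights in [-L, L]
wA = rng.integers(-L, L + 1, size=(K, N))
wB = rng.integers(-L, L + 1, size=(K, N))

def output(w, x):
    """Hidden-unit signs and overall parity output of a tree parity machine."""
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1          # break ties deterministically
    return sigma, int(np.prod(sigma))

steps, synced = 0, False
while steps < 200_000:
    x = rng.choice([-1, 1], size=(K, N))   # common public input
    sA, tA = output(wA, x)
    sB, tB = output(wB, x)
    if tA == tB:                           # Hebbian rule: learn only on agreement
        for w, s in ((wA, sA), (wB, sB)):
            mask = (s == tA)               # update only agreeing hidden units
            w[mask] += x[mask] * tA
            np.clip(w, -L, L, out=w)       # bounded synaptic depth
    steps += 1
    if np.array_equal(wA, wB):
        synced = True
        break
```

For small L the two mutually learning networks reach the absorbing identical-weight state quickly; an attacker learning unidirectionally is exponentially slower in L, which is the asymmetry the abstract exploits.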
Hamed, Kaveh Akbari; Gregg, Robert D
2017-07-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially and robustly stabilize periodic orbits for hybrid dynamical systems against possible uncertainties in discrete-time phases. The algorithm assumes a family of parameterized and decentralized nonlinear controllers to coordinate interconnected hybrid subsystems based on a common phasing variable. The exponential and H2 robust stabilization problems of periodic orbits are translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities. By investigating the properties of the Poincaré map, some sufficient conditions for the convergence of the iterative algorithm are presented. The power of the algorithm is finally demonstrated through designing a set of robust stabilizing local nonlinear controllers for walking of an underactuated 3D autonomous bipedal robot with 9 degrees of freedom, impact model uncertainties, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.
Genetic attack on neural cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka
2006-03-15
Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.
The convergence rate of approximate solutions for nonlinear scalar conservation laws
NASA Technical Reports Server (NTRS)
Nessyahu, Haim; Tadmor, Eitan
1991-01-01
The convergence rate of approximate solutions for the nonlinear scalar conservation law is discussed. The linear convergence theory is extended into a weak regime. The extension is based on the usual two ingredients of stability and consistency. On the one hand, counterexamples show that one must strengthen the linearized L(sup 2)-stability requirement. It is assumed that the approximate solutions are Lip(sup +)-stable in the sense that they satisfy a one-sided Lipschitz condition, in agreement with Oleinik's E-condition for the entropy solution. On the other hand, the lack of smoothness requires weakening the consistency requirement, which is measured in the Lip'-(semi)norm. It is proved that, for Lip(sup +)-stable approximate solutions, their Lip'-convergence rate to the entropy solution is of the same order as their Lip'-consistency. The Lip'-convergence rate is then converted into stronger L(sup p) convergence rate estimates.
Olsson, Martin A; Söderhjelm, Pär; Ryde, Ulf
2016-06-30
In this article, the convergence of quantum mechanical (QM) free-energy simulations based on molecular dynamics simulations at the molecular mechanics (MM) level has been investigated. We have estimated relative free energies for the binding of nine cyclic carboxylate ligands to the octa-acid deep-cavity host, including the host, the ligand, and all water molecules within 4.5 Å of the ligand in the QM calculations (158-224 atoms). We use single-step exponential averaging (ssEA) and the non-Boltzmann Bennett acceptance ratio (NBB) methods to estimate QM/MM free energy with the semi-empirical PM6-DH2X method, both based on interaction energies. We show that ssEA with cumulant expansion gives a better convergence and uses half as many QM calculations as NBB, although the two methods give consistent results. With 720,000 QM calculations per transformation, QM/MM free-energy estimates with a precision of 1 kJ/mol can be obtained for all eight relative energies with ssEA, showing that this approach can be used to calculate converged QM/MM binding free energies for realistic systems and large QM partitions. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
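Single-step exponential averaging is, at its core, Zwanzig's free-energy perturbation formula. The sketch below contrasts the direct exponential average with its second-order cumulant expansion on synthetic Gaussian MM-to-QM energy gaps (for which the cumulant expansion is exact); the distribution parameters and β are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.4            # 1/kT in mol/kJ (kT ~ 2.5 kJ/mol at room temperature), illustrative
mu, sigma = 2.0, 0.5  # synthetic MM->QM interaction-energy-gap distribution (kJ/mol)
dE = rng.normal(mu, sigma, 200_000)

# Direct exponential average (Zwanzig): dA = -kT ln < exp(-beta*dE) >_MM
dA_exp = -np.log(np.mean(np.exp(-beta * dE))) / beta

# Second-order cumulant expansion: dA ~ <dE> - (beta/2) Var(dE)
dA_cum = np.mean(dE) - 0.5 * beta * np.var(dE)

# For Gaussian gaps both estimators should approach the exact mu - beta*sigma^2/2
dA_exact = mu - 0.5 * beta * sigma**2
```

The cumulant form converges with far fewer samples when the gap distribution is close to Gaussian, which is the practical advantage the abstract reports for ssEA with cumulant expansion.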
A Fourier method for the analysis of exponential decay curves.
Provencher, S W
1976-01-01
A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involves only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.
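The model treated above, y(t) = baseline + Σ aᵢ exp(-t/τᵢ), is linear in the amplitudes once the time constants are fixed. The sketch below is not Provencher's Fourier method; it is a brute-force separable least-squares baseline (grid over candidate time constants, linear solve for amplitudes) on noiseless synthetic data, with all values illustrative.

```python
import numpy as np
from itertools import combinations

# Synthetic noiseless two-component decay: y = b + a1*exp(-t/tau1) + a2*exp(-t/tau2)
t = np.linspace(0.0, 20.0, 400)
y = 0.5 + 2.0 * np.exp(-t / 1.0) + 1.0 * np.exp(-t / 5.0)

# Separable least squares: for each trial pair of time constants, the baseline
# and amplitudes follow from a linear solve; keep the pair with lowest residual.
candidates = [0.5, 1.0, 2.0, 5.0, 10.0]
best = None
for tau1, tau2 in combinations(candidates, 2):
    A = np.column_stack([np.ones_like(t), np.exp(-t / tau1), np.exp(-t / tau2)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((A @ coef - y) ** 2))
    if best is None or rss < best[0]:
        best = (rss, (tau1, tau2), coef)

rss, taus, (b, a1, a2) = best
```

With noisy data this naive scan degrades quickly as components crowd together, which is exactly the resolving-power limitation the Fourier method with convergence parameters is designed to push back.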
Spectral Cauchy Characteristic Extraction: Gravitational Waves and Gauge Free News
NASA Astrophysics Data System (ADS)
Handmer, Casey; Szilagyi, Bela; Winicour, Jeff
2015-04-01
We present a fast, accurate spectral algorithm for the characteristic evolution of the full non-linear vacuum Einstein field equations in the Bondi framework. Developed within the Spectral Einstein Code (SpEC), we demonstrate how spectral Cauchy characteristic extraction produces gravitational News without confounding gauge effects. We explain several numerical innovations and demonstrate speed, stability, accuracy, exponential convergence, and consistency with existing methods. We highlight its capability to deliver physical insights in the study of black hole binaries.
The Hyperfine Structure of the Ground State in the Muonic Helium Atoms
NASA Astrophysics Data System (ADS)
Aznabayev, D. T.; Bekbaev, A. K.; Korobov, V. I.
2018-05-01
Non-relativistic ionization energies of the helium-muonic atoms 3He2+μ-e- and 4He2+μ-e- are calculated for the ground states. The calculations are based on the variational method of exponential expansion. Convergence of the variational energies is studied as the number of basis functions N increases, which allows us to claim that the obtained energy values have 26 significant digits for the ground states. With these results we calculate the hyperfine splitting of the muonic helium atoms.
Convergence of a Queueing System in Heavy Traffic with General Abandonment Distributions
2010-10-08
We circumvent the use of Reiman's "Snap-shot Principle" and a comparison result with a non-abandoning queue used in Reed and Ward...
Quantifying Hydrogen Bond Cooperativity in Water: VRT Spectroscopy of the Water Tetramer
NASA Astrophysics Data System (ADS)
Cruzan, J. D.; Braly, L. B.; Liu, Kun; Brown, M. G.; Loeser, J. G.; Saykally, R. J.
1996-01-01
Measurement of the far-infrared vibration-rotation tunneling spectrum of the perdeuterated water tetramer is described. Precisely determined rotational constants and relative intensity measurements indicate a cyclic quasi-planar minimum energy structure, which is in agreement with recent ab initio calculations. The O-O separation deduced from the data indicates a rapid exponential convergence to the ordered bulk value with increasing cluster size. Observed quantum tunneling splittings are interpreted in terms of hydrogen bond rearrangements connecting two degenerate structures.
An extension of the finite cell method using boolean operations
NASA Astrophysics Data System (ADS)
Abedian, Alireza; Düster, Alexander
2017-05-01
In the finite cell method (FCM), the fictitious domain approach is combined with high-order finite elements. The geometry of the problem is taken into account by integrating the finite cell formulation over the physical domain to obtain the corresponding stiffness matrix and load vector. In this contribution, an extension of the FCM is presented wherein both the physical and fictitious domains of an element are simultaneously evaluated during the integration. In the proposed extension, the contribution of the stiffness matrix over the fictitious domain is subtracted from the cell, resulting in the desired stiffness matrix, which reflects the contribution of the physical domain only. This method results in an exponential rate of convergence for porous domain problems with a smooth solution and accurate integration. In addition, it reduces the computational cost, especially when applying adaptive integration schemes based on the quadtree/octree. Numerical examples based on 2D and 3D problems of linear elastostatics serve to demonstrate the efficiency and accuracy of the proposed method.
Recruitment and establishment of the gut microbiome in arctic shorebirds.
Grond, Kirsten; Lanctot, Richard B; Jumpponen, Ari; Sandercock, Brett K
2017-12-01
Gut microbiota play a key role in host health. Mammals acquire gut microbiota during birth, but timing of gut microbial recruitment in birds is unknown. We evaluated whether precocial chicks from three species of arctic-breeding shorebirds acquire gut microbiota before or after hatching, and then documented the rate and compositional dynamics of accumulation of gut microbiota. Contrary to earlier reports of microbial recruitment before hatching in chickens, quantitative PCR and Illumina sequence data indicated negligible microbiota in the guts of shorebird embryos before hatching. Analyses of chick feces indicated an exponential increase in bacterial abundance of guts 0-2 days post-hatch, followed by stabilization. Gut communities were characterized by stochastic recruitment and convergence towards a community dominated by Clostridia and Gammaproteobacteria. We conclude that guts of shorebird chicks are likely void of microbiota prior to hatch, but that stable gut microbiome establishes as early as 3 days of age, probably from environmental inocula. © FEMS 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm.
Amoshahy, Mohammad Javad; Shamsi, Mousa; Sedaaghi, Mohammad Hossein
2016-01-01
Particle swarm optimization (PSO) is an evolutionary computing method based on the intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of the PSO algorithm depends greatly on appropriate parameter selection strategies for fine-tuning its parameters. Inertia weight (IW) is one of PSO's parameters, used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting the inertia weight, named the Flexible Exponential Inertia Weight (FEIW) strategy, because for each problem an increasing or decreasing inertia weight strategy can be constructed with suitable parameter selection. The efficacy and efficiency of the PSO algorithm with the FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. FEIW is also compared with the best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate.
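The abstract does not reproduce the exact FEIW formula, so the sketch below uses an assumed exponentially decaying inertia-weight schedule (w_start, w_end, and the shape parameter alpha are illustrative) together with the standard PSO velocity update, to show where the inertia weight enters.

```python
import numpy as np

def exp_inertia_weight(t, t_max, w_start=0.9, w_end=0.4, alpha=3.0):
    """Illustrative exponentially decaying inertia weight (an assumed schedule,
    not the exact FEIW formula): decays from w_start toward w_end over t_max steps."""
    return w_end + (w_start - w_end) * np.exp(-alpha * t / t_max)

def pso_velocity_update(v, x, pbest, gbest, w, c1=2.0, c2=2.0, rng=None):
    """Standard PSO velocity update; the inertia weight w scales the old velocity,
    trading exploration (large w) against exploitation (small w)."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

T = 100
ws = np.array([exp_inertia_weight(t, T) for t in range(T + 1)])
v_next = pso_velocity_update(np.zeros(5), np.zeros(5), np.ones(5), np.ones(5),
                             ws[0], rng=np.random.default_rng(0))
```

Swapping the decay for growth (negative alpha) gives the increasing variant the abstract mentions; that flexibility is the point of the strategy.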
Stochastic Averaging Principle for Spatial Birth-and-Death Evolutions in the Continuum
NASA Astrophysics Data System (ADS)
Friesen, Martin; Kondratiev, Yuri
2018-06-01
We study a spatial birth-and-death process on the phase space of locally finite configurations Γ^+ × Γ^- over R^d. Dynamics is described by a non-equilibrium evolution of states obtained from the Fokker-Planck equation and associated with the Markov operator L^+(γ^-) + (1/ɛ) L^-, ɛ > 0. Here L^- describes the environment process on Γ^- and L^+(γ^-) describes the system process on Γ^+, where γ^- indicates that the corresponding birth-and-death rates depend on another locally finite configuration γ^- ∈ Γ^-. We prove that, for a certain class of birth-and-death rates, the corresponding Fokker-Planck equation is well-posed, i.e. there exists a unique evolution of states μ_t^ɛ on Γ^+ × Γ^-. Moreover, we give a sufficient condition such that the environment is ergodic with exponential rate. Let μ_inv be the invariant measure for the environment process on Γ^-. In the main part of this work we establish the stochastic averaging principle, i.e. we prove that the marginal of μ_t^ɛ onto Γ^+ converges weakly to an evolution of states on Γ^+ associated with the averaged Markov birth-and-death operator \overline{L} = \int_{Γ^-} L^+(γ^-) dμ_inv(γ^-).
KAM tori and whiskered invariant tori for non-autonomous systems
NASA Astrophysics Data System (ADS)
Canadell, Marta; de la Llave, Rafael
2015-08-01
We consider non-autonomous dynamical systems which converge to autonomous (or periodic) systems exponentially fast in time. Such systems appear naturally as models of many physical processes affected by external pulses. We introduce definitions of non-autonomous invariant tori and non-autonomous whiskered tori and their invariant manifolds, and we prove their persistence under small perturbations, smooth dependence on parameters and several geometric properties (if the systems are Hamiltonian, the tori are Lagrangian manifolds). We note that such definitions are problematic for general time-dependent systems, but we show that they are unambiguous for systems converging exponentially fast to autonomous ones. The proof of persistence relies only on a standard implicit function theorem in Banach spaces; it does not require that the rotations in the tori be Diophantine, nor that the systems we consider preserve any geometric structure. We only require that the autonomous system preserves these objects. In particular, when the autonomous system is integrable, we obtain the persistence of tori with rational rotation. We also discuss fast and efficient algorithms for their computation. The method also applies to infinite-dimensional systems which define a good evolution, e.g. PDEs. When the systems considered are Hamiltonian, we show that the time-dependent invariant tori are isotropic; hence the invariant tori of maximal dimension are Lagrangian manifolds, and the (un)stable manifolds of whiskered tori are also Lagrangian manifolds. We include a comparison with the more global theory developed in Blazevski and de la Llave (2011).
Near-optimal matrix recovery from random linear measurements.
Romanov, Elad; Gavish, Matan
2018-06-25
In matrix recovery from random linear measurements, one is interested in recovering an unknown M-by-N matrix [Formula: see text] from [Formula: see text] measurements [Formula: see text], where each [Formula: see text] is an M-by-N measurement matrix with i.i.d. random entries, [Formula: see text] We present a matrix recovery algorithm, based on approximate message passing, which iteratively applies an optimal singular-value shrinker-a nonconvex nonlinearity tailored specifically for matrix estimation. Our algorithm typically converges exponentially fast, offering a significant speedup over previously suggested matrix recovery algorithms, such as iterative solvers for nuclear norm minimization (NNM). It is well known that there is a recovery tradeoff between the information content of the object [Formula: see text] to be recovered (specifically, its matrix rank r) and the number of linear measurements n from which recovery is to be attempted. The precise tradeoff between r and n, beyond which recovery by a given algorithm becomes possible, traces the so-called phase transition curve of that algorithm in the [Formula: see text] plane. The phase transition curve of our algorithm is noticeably better than that of NNM. Interestingly, it is close to the information-theoretic lower bound for the minimal number of measurements needed for matrix recovery, making it not only state of the art in terms of convergence rate, but also near optimal in terms of the matrices it successfully recovers. Copyright © 2018 the Author(s). Published by PNAS.
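The paper's AMP algorithm with an optimal singular-value shrinker is not reproduced here; the sketch below illustrates the same problem setup (y_k = ⟨A_k, X0⟩ with i.i.d. Gaussian A_k) using the simpler singular value projection idea, a gradient step on the measurement residual followed by hard rank-r truncation. Problem sizes, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
M = N = 6
r, n = 1, 60                      # target rank and number of measurements (illustrative)

X0 = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))  # rank-r ground truth
A = rng.standard_normal((n, M, N))                              # i.i.d. measurement matrices
y = np.einsum('kij,ij->k', A, X0)                               # y_k = <A_k, X0>

X = np.zeros((M, N))
for _ in range(300):
    resid = np.einsum('kij,ij->k', A, X) - y
    G = np.einsum('k,kij->ij', resid, A) / n   # gradient of (1/2n) * sum of squared residuals
    U, s, Vt = np.linalg.svd(X - G, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]            # hard truncation back to rank r

rel_err = np.linalg.norm(X - X0) / np.linalg.norm(X0)
```

In this well-oversampled noiseless regime (n well above the r(M+N-r) degrees of freedom) the iteration contracts linearly, mirroring the exponentially fast convergence the abstract reports for the AMP-based recovery.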
Suicide rates in European OECD nations converged during the period 1990-2010.
Bremberg, Sven G
2017-05-01
The aim of this study was to investigate, with multiple regression analyses, the effect of selected characteristics on the rate of decrease of suicide rates in 21 OECD (Organisation for Economic Co-operation and Development) nations over the period 1990-2010, with initial levels of suicide rates taken into account. The rate of decrease seems to be determined mainly (83%) by the initial suicide rates in 1990. In nations with relatively high initial rates, the rates decreased faster; the suicide rates thus converged. The study indicates that beta convergence alone explained most of the cross-national variation.
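Beta convergence, as tested above, amounts to regressing the decline in rates on the initial level: a positive slope of decrease on initial rate (equivalently, a negative slope of change on level) means high-rate nations fall faster. A minimal sketch on synthetic data; the numbers are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_nations = 21
initial = rng.uniform(5.0, 40.0, n_nations)              # synthetic 1990 rates per 100k

# construct declines that scale with the initial level (beta convergence), plus noise
decrease = 0.4 * initial + rng.normal(0.0, 1.0, n_nations)  # decline over 1990-2010

# beta-convergence regression: decrease ~ a + b * initial; b > 0 indicates convergence
b, a = np.polyfit(initial, decrease, 1)
pred = a + b * initial
r2 = 1.0 - np.sum((decrease - pred) ** 2) / np.sum((decrease - np.mean(decrease)) ** 2)
```

A high R² here plays the role of the 83% of variation the study attributes to initial rates; with real data one would also check sigma convergence (shrinking cross-national dispersion over time).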
A robust nonlinear position observer for synchronous motors with relaxed excitation conditions
NASA Astrophysics Data System (ADS)
Bobtsov, Alexey; Bazylev, Dmitry; Pyrkin, Anton; Aranovskiy, Stanislav; Ortega, Romeo
2017-04-01
A robust, nonlinear and globally convergent rotor position observer for surface-mounted permanent magnet synchronous motors was recently proposed by the authors. The key feature of this observer is that it requires only the knowledge of the motor's resistance and inductance. Using some particular properties of the mathematical model it is shown that the problem of state observation can be translated into one of estimation of two constant parameters, which is carried out with a standard gradient algorithm. In this work, we propose to replace this estimator with a new one called dynamic regressor extension and mixing, which has the following advantages with respect to gradient estimators: (1) the stringent persistence of excitation (PE) condition of the regressor is not necessary to ensure parameter convergence; (2) the latter is guaranteed requiring instead a non-square-integrability condition that has a clear physical meaning in terms of signal energy; (3) if the regressor is PE, the new observer (like the old one) ensures convergence is exponential, entailing some robustness properties to the observer; (4) the new estimator includes an additional filter that constitutes an additional degree of freedom to satisfy the non-square integrability condition. Realistic simulation results show significant performance improvement of the position observer using the new parameter estimator, with a less oscillatory behaviour and a faster convergence speed.
Memory behaviors of entropy production rates in heat conduction
NASA Astrophysics Data System (ADS)
Li, Shu-Nan; Cao, Bing-Yang
2018-02-01
Based on the relaxation time approximation and a first-order expansion, memory behaviors in heat conduction are found between the macroscopic and Boltzmann-Gibbs-Shannon (BGS) entropy production rates, with exponentially decaying memory kernels. In the frameworks of classical irreversible thermodynamics (CIT) and BGS statistical mechanics, the memory dependence on the integrated history is unidirectional, while for the extended irreversible thermodynamics (EIT) and BGS entropy production rates, the memory dependences are bidirectional and coexist with the linear terms. When the macroscopic and microscopic relaxation times satisfy a specific relationship, the entropic memory dependences are eliminated. There also exist initial effects in entropic memory behaviors, which decay exponentially. The second-order term is also discussed; it can be understood as the global non-equilibrium degree. The effects of the second-order term consist of three parts: a memory dependence, an initial value and a linear term. The corresponding memory kernels are still exponential, and the initial effects of the global non-equilibrium degree also decay exponentially.
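The exponentially decaying memory kernel mentioned above can be made concrete: the memory-dependent contribution is a convolution of the history with exp(-(t-t')/τ), which saturates for histories much longer than τ. A numerical sketch with an illustrative relaxation time and a constant source history (both assumptions for the demonstration):

```python
import numpy as np

tau = 0.5                        # relaxation time (illustrative)
t = np.linspace(0.0, 5.0, 5001)
sigma = np.ones_like(t)          # constant source history, illustrative

# memory term s(T) = integral_0^T exp(-(T-t')/tau) * sigma(t') dt'
T_end = t[-1]
f = np.exp(-(T_end - t) / tau) * sigma
dt = t[1] - t[0]
s = float(np.sum((f[1:] + f[:-1]) * 0.5) * dt)   # trapezoidal rule

# for constant sigma the integral is tau*(1 - exp(-T/tau)), approaching tau for T >> tau
s_exact = tau * (1.0 - np.exp(-T_end / tau))
```

The saturation at τ·σ is why only the recent history (a few relaxation times deep) matters, and why the initial effects decay exponentially.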
O (a) improvement of 2D N = (2 , 2) lattice SYM theory
NASA Astrophysics Data System (ADS)
Hanada, Masanori; Kadoh, Daisuke; Matsuura, So; Sugino, Fumihiko
2018-04-01
We perform a tree-level O (a) improvement of two-dimensional N = (2 , 2) supersymmetric Yang-Mills theory on the lattice, motivated by the fast convergence in numerical simulations. The improvement respects an exact supersymmetry Q which is needed for obtaining the correct continuum limit without a parameter fine tuning. The improved lattice action is given within a milder locality condition in which the interactions are decaying as the exponential of the distance on the lattice. We also prove that the path-integral measure is invariant under the improved Q-transformation.
A coherent discrete variable representation method on a sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Hua -Gen
Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two-dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.
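Exponential convergence of eigenvalues with basis size is characteristic of DVR methods generally. A minimal illustration, using the standard Colbert-Miller sinc-DVR (not the ZDVR of the paper) for a 1D harmonic oscillator in atomic units:

```python
import numpy as np

def ho_dvr_levels(n_grid, x_max=8.0):
    """Colbert-Miller sinc-DVR eigenvalues of H = -(1/2) d2/dx2 + x^2/2 (atomic units)."""
    x = np.linspace(-x_max, x_max, n_grid)
    dx = x[1] - x[0]
    idx = np.arange(n_grid)
    d = idx[:, None] - idx[None, :]
    # kinetic-energy matrix: pi^2/3 on the diagonal, 2*(-1)^(i-j)/(i-j)^2 off-diagonal
    denom = np.where(d == 0, 1, d).astype(float) ** 2
    T = np.where(d == 0, np.pi ** 2 / 3.0, 2.0 * (-1.0) ** np.abs(d) / denom)
    T /= 2.0 * dx ** 2
    H = T + np.diag(0.5 * x ** 2)        # potential is diagonal in a DVR
    return np.linalg.eigvalsh(H)

# ground-state error shrinks roughly exponentially as the grid is refined
errors = [abs(ho_dvr_levels(n)[0] - 0.5) for n in (11, 15, 21)]
```

The diagonal potential matrix is the practical payoff of any DVR; the ZDVR extension concerns building such a basis on a sphere, where the non-constant Jacobian complicates this construction.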
A coherent discrete variable representation method on a sphere
Yu, Hua -Gen
2017-09-05
Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two-dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.
Simulation of nonlinear convective thixotropic liquid with Cattaneo-Christov heat flux
NASA Astrophysics Data System (ADS)
Zubair, M.; Waqas, M.; Hayat, T.; Ayub, M.; Alsaedi, A.
2018-03-01
In this communication, we utilize a modified Fourier approach featuring the thermal relaxation effect in nonlinear convective flow over a vertical, exponentially stretchable surface. Temperature-dependent thermal conductivity describes the heat transfer process. A thixotropic liquid is modeled. Convergent local similar solutions are obtained by the homotopic approach. Graphical results for the emerging parameters of interest are analyzed. The skin friction is calculated and interpreted. Consideration of larger local buoyancy and nonlinear convection parameters yields an enhancement in the velocity distribution. The temperature and thermal layer thickness are reduced for a larger thermal relaxation factor.
Improving Upon String Methods for Transition State Discovery.
Chaffey-Millar, Hugh; Nikodem, Astrid; Matveev, Alexei V; Krüger, Sven; Rösch, Notker
2012-02-14
Transition state discovery via application of string methods has been researched on two fronts. The first front involves development of a new string method, named the Searching String method, while the second one aims at estimating transition states from a discretized reaction path. The Searching String method has been benchmarked against a number of previously existing string methods and the Nudged Elastic Band method. The developed methods have led to a reduction in the number of gradient calls required to optimize a transition state, as compared to existing methods. The Searching String method reported here places new beads on a reaction pathway at the midpoint between existing beads, such that the resolution of the path discretization in the region containing the transition state grows exponentially with the number of beads. This approach leads to favorable convergence behavior and generates more accurate estimates of transition states from which convergence to the final transition states occurs more readily. Several techniques for generating improved estimates of transition states from a converged string or nudged elastic band have been developed and benchmarked on 13 chemical test cases. Optimization approaches for string methods, and pitfalls therein, are discussed.
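The midpoint bead-placement idea can be illustrated with a small toy sketch. The 1D energy profile, the path interval [0, 1], and all numerical choices below are hypothetical stand-ins, not the authors' implementation; the sketch only shows how repeatedly bisecting around the highest-energy bead concentrates resolution near the barrier top.

```python
import numpy as np

def energy(x):
    # Hypothetical 1D energy profile with its barrier top (transition
    # state) at x = 0.4; stands in for energies along a reaction path.
    return -(x - 0.4) ** 2

def refine_path(n_new_beads):
    """Midpoint bead placement: repeatedly insert a bead halfway into the
    wider segment flanking the current highest-energy bead, so the path
    resolution near the barrier grows roughly geometrically."""
    beads = [0.0, 1.0]
    for _ in range(n_new_beads):
        beads.sort()
        i = int(np.argmax([energy(b) for b in beads]))
        lo = beads[max(i - 1, 0)]
        hi = beads[min(i + 1, len(beads) - 1)]
        if beads[i] - lo >= hi - beads[i]:
            beads.append((lo + beads[i]) / 2)   # bisect the left segment
        else:
            beads.append((beads[i] + hi) / 2)   # bisect the right segment
    beads.sort()
    return beads

beads = refine_path(20)
best = max(beads, key=energy)   # bead closest to the barrier top
```

After 20 insertions the bracketing interval around the maximum has been halved roughly every other step, so the best bead sits within about 10^-3 of the true barrier position while the spacing far from the barrier stays coarse.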
Iterative blip-summed path integral for quantum dynamics in strongly dissipative environments
NASA Astrophysics Data System (ADS)
Makri, Nancy
2017-04-01
The iterative decomposition of the blip-summed path integral [N. Makri, J. Chem. Phys. 141, 134117 (2014)] is described. The starting point is the expression of the reduced density matrix for a quantum system interacting with a harmonic dissipative bath in the form of a forward-backward path sum, where the effects of the bath enter through the Feynman-Vernon influence functional. The path sum is evaluated iteratively in time by propagating an array that stores blip configurations within the memory interval. Convergence with respect to the number of blips and the memory length yields numerically exact results which are free of statistical error. In situations of strongly dissipative, sluggish baths, the algorithm leads to a dramatic reduction of computational effort in comparison with iterative path integral methods that do not implement the blip decomposition. This gain in efficiency arises from (i) the rapid convergence of the blip series and (ii) circumventing the explicit enumeration of between-blip path segments, whose number grows exponentially with the memory length. Application to an asymmetric dissipative two-level system illustrates the rapid convergence of the algorithm even when the bath memory is extremely long.
Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher
2013-10-01
This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.
NASA Astrophysics Data System (ADS)
Dhariwal, Rohit; Bragg, Andrew D.
2018-03-01
In this paper, we consider how the statistical moments of the separation between two fluid particles grow with time when their separation lies in the dissipation range of turbulence. In this range, the fluid velocity field varies smoothly and the relative velocity of two fluid particles depends linearly upon their separation. While this may suggest that the rate at which fluid particles separate is exponential in time, this is not guaranteed because the strain rate governing their separation is a strongly fluctuating quantity in turbulence. Indeed, Afik and Steinberg [Nat. Commun. 8, 468 (2017), 10.1038/s41467-017-00389-8] argue that there is no convincing evidence that the moments of the separation between fluid particles grow exponentially with time in the dissipation range of turbulence. Motivated by this, we use direct numerical simulations (DNS) to compute the moments of particle separation over very long periods of time in a statistically stationary, isotropic turbulent flow to see if we ever observe evidence for exponential separation. Our results show that if the initial separation between the particles is infinitesimal, the moments of the particle separation first grow as power laws in time, but we then observe convincing evidence that at sufficiently long times the moments do grow exponentially. However, this exponential growth is only observed after extremely long times ≳200 τη, where τη is the Kolmogorov time scale. This is due to fluctuations in the strain rate about its mean value measured along the particle trajectories, the effect of which on the moments of the particle separation persists for very long times. We also consider the backward-in-time (BIT) moments of the particle separation, and observe that they too grow exponentially in the long-time regime.
However, a dramatic consequence of the exponential separation is that at long times the difference between the rate of the particle separation forward in time (FIT) and BIT grows exponentially in time, leading to incredibly strong irreversibility in the dispersion. This is in striking contrast to the irreversibility of their relative dispersion in the inertial range, where the difference between FIT and BIT is constant in time according to Richardson's phenomenology.
[Application of exponential smoothing method in prediction and warning of epidemic mumps].
Shi, Yun-ping; Ma, Jia-qi
2010-06-01
To analyze the daily data on epidemic mumps in a province from 2004 to 2008 and to set up an exponential smoothing model for prediction. The 2008 mumps epidemic was predicted and warned against by calculating a 7-day moving summation of the daily reported mumps cases during 2005-2008 to remove the effect of weekends, and by applying exponential smoothing to the data from 2005 to 2007. The performance of Holt-Winters exponential smoothing was good: the warning sensitivity was 76.92%, the specificity was 83.33%, and the timely rate was 80%. It is practicable to use the exponential smoothing method to warn against epidemic mumps.
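The 7-day moving summation plus exponential smoothing pipeline can be sketched as follows. The smoothing constant, the warning rule (baseline plus a multiple of a smoothed deviation), and the synthetic case counts are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def moving_sum(cases, window=7):
    """7-day moving summation, damping the day-of-week (weekend) effect."""
    c = np.asarray(cases, dtype=float)
    return np.array([c[max(i - window + 1, 0):i + 1].sum() for i in range(len(c))])

def ses_warn(series, alpha=0.3, k=2.0):
    """Simple exponential smoothing baseline; a warning is flagged when an
    observation exceeds the baseline by more than k smoothed absolute
    deviations. alpha and k are illustrative, not the paper's calibration."""
    level, dev = float(series[0]), 0.0
    alerts = []
    for t in range(1, len(series)):
        resid = series[t] - level
        if resid > k * max(dev, 1.0):   # flag before updating the baseline
            alerts.append(t)
        level += alpha * resid
        dev += alpha * (abs(resid) - dev)
    return alerts

# Flat background of 3 cases/day, then a 3-day outbreak of 30 cases/day.
ms = moving_sum([3] * 30 + [30] * 3)
alerts = ses_warn(ms[6:])   # skip the moving-sum warm-up
```

On this synthetic series the smoothed baseline stays at the flat 7-day sum of 21, so the three outbreak days are flagged immediately.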
Yuan, Peipei; Cao, Weijia; Wang, Zhen; Chen, Kequan; Li, Yan; Ouyang, Pingkai
2015-07-01
Nitrogen source optimization combined with phased exponential L-tyrosine feeding was employed to enhance L-phenylalanine production by a tyrosine-auxotroph strain, Escherichia coli YP1617. The absence of (NH4)2SO4 and the use of corn steep powder and yeast extract as a composite organic nitrogen source were more suitable for cell growth and L-phenylalanine production. Moreover, the optimal initial L-tyrosine level was 0.3 g L(-1), and exponential L-tyrosine feeding slightly improved L-phenylalanine production. Nevertheless, L-phenylalanine production was greatly enhanced by a strategy of phased exponential L-tyrosine feeding, where exponential feeding was started at the set specific growth rates of 0.08, 0.05, and 0.02 h(-1) after 12, 32, and 52 h, respectively. Compared with exponential L-tyrosine feeding at the set specific growth rate of 0.08 h(-1), the developed strategy obtained a 15.33% increase in L-phenylalanine production (56.20 g L(-1)) and a 45.28% decrease in L-tyrosine supplementation. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
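The phased exponential feeding schedule lends itself to a small sketch. The phase boundaries (12, 32, 52 h) and set specific growth rates (0.08, 0.05, 0.02 h(-1)) are taken from the abstract; the starting feed rate F0 and the 72 h end point are illustrative assumptions.

```python
import math

# Phase boundaries (h) and set specific growth rates (1/h) from the study;
# the starting feed rate F0 and the 72 h end point are illustrative.
PHASES = [(12.0, 32.0, 0.08), (32.0, 52.0, 0.05), (52.0, 72.0, 0.02)]
F0 = 1.0

def feed_rate(t):
    """Phased exponential feeding: within each phase the feed grows as
    exp(mu * (t - t_start)); the profile is continuous across phase
    boundaries and held constant after the last phase."""
    if t < PHASES[0][0]:
        return 0.0           # batch phase, no feeding
    f = F0
    for t0, t1, mu in PHASES:
        if t <= t1:
            return f * math.exp(mu * (t - t0))
        f *= math.exp(mu * (t1 - t0))   # carry the end-of-phase rate forward
    return f
```

Carrying the end-of-phase value forward as the next phase's starting rate is what makes the piecewise profile continuous while stepping the growth exponent down.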
Biological electric fields and rate equations for biophotons.
Alvermann, M; Srivastava, Y N; Swain, J; Widom, A
2015-04-01
Biophoton intensities depend upon the squared modulus of the electric field. Hence, we first make some general estimates about the inherent electric fields within various biosystems. Generally, these intensities do not follow a simple exponential decay law. After a brief discussion on the inapplicability of a linear rate equation that leads to strict exponential decay, we study other, nonlinear rate equations that have been successfully used for biosystems along with their physical origins when available.
Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.
2013-01-01
Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies, which identified the critical mutation rate as independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of the critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline; this could affect extinction, recovery and population management strategy. The effect of population size is particularly strong in small populations with 100 individuals or fewer; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
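A saturating-exponential relationship between population size and critical mutation rate can be recovered with a simple log-linear fit. The functional form and every number below are illustrative stand-ins, not the study's data.

```python
import numpy as np

# Synthetic stand-in for measured critical mutation rates: a saturating
# exponential u_c(N) = u_inf - b * exp(-N / tau), so small populations
# tolerate much lower mutation rates. Parameter values are illustrative.
u_inf, b, tau = 0.20, 0.15, 40.0
N = np.array([10.0, 25.0, 50.0, 100.0, 200.0, 400.0])
u_c = u_inf - b * np.exp(-N / tau)

def fit_saturating_exponential(N, u, u_inf_guess):
    """Log-linear fit of u_inf - u = b * exp(-N / tau): regress
    log(u_inf - u) on N, then read off b and tau from the line."""
    slope, intercept = np.polyfit(N, np.log(u_inf_guess - u), 1)
    return float(np.exp(intercept)), float(-1.0 / slope)

b_hat, tau_hat = fit_saturating_exponential(N, u_c, u_inf)
```

With noiseless synthetic data the regression recovers the generating parameters essentially exactly; with real measurements one would instead fit all three parameters by nonlinear least squares.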
Optimizing the learning rate for adaptive estimation of neural encoding models.
Hsieh, Han-Lin; Shanechi, Maryam M
2018-05-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, an analytical approach for its selection is largely lacking, and existing signal processing methods largely tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that the learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069
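The steady-state-error versus convergence-time trade-off can be seen in a minimal scalar sketch. This toy stochastic-approximation loop stands in for the paper's adaptive Bayesian filters; the closed-form variance eta * sigma^2 / (2 - eta) applies only to this toy, not to the paper's point-process or Gaussian-process filters.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_estimate(eta, theta_true=2.0, sigma=0.5, steps=20000):
    """Scalar adaptive estimator theta <- theta + eta * (y - theta).

    Larger eta converges faster (time constant ~ 1/eta) but settles at a
    larger steady-state error variance; smaller eta does the reverse.
    Returns the mean squared error over the second half of the run."""
    theta = 0.0
    tail = []
    for t in range(steps):
        y = theta_true + sigma * rng.standard_normal()   # noisy observation
        theta += eta * (y - theta)
        if t >= steps // 2:                              # steady-state window
            tail.append((theta - theta_true) ** 2)
    return float(np.mean(tail))

err_fast = adaptive_estimate(eta=0.2)   # fast convergence, larger error
err_slow = adaptive_estimate(eta=0.01)  # slow convergence, smaller error
# Closed-form steady-state variance for this toy: eta * sigma^2 / (2 - eta)
pred_fast = 0.2 * 0.25 / (2 - 0.2)
```

The empirical steady-state error tracks the closed-form prediction, which is the kind of explicit error-versus-learning-rate function the calibration algorithm exploits.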
Performance of time-series methods in forecasting the demand for red blood cell transfusion.
Pereira, Arturo
2004-05-01
Planning of future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care, university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the younger to test the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)(12) model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the +/- 10 percent interval of the real RBC demand in 79 percent of months (62 percent in the case of neural networks). The coverage rates for the three methods were 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within the +/- 10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in planning blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
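A minimal additive Holt-Winters implementation illustrates the kind of seasonal exponential smoothing compared in the study. The smoothing constants and the synthetic monthly series below are illustrative, not fitted to transfusion data.

```python
import math

def holt_winters_additive(y, season=12, alpha=0.3, beta=0.05, gamma=0.2, horizon=12):
    """Additive Holt-Winters: exponentially smoothed level, trend and
    seasonal components, then an h-step-ahead forecast. The smoothing
    constants are illustrative choices, not fitted values."""
    level = sum(y[:season]) / season
    trend = (sum(y[season:2 * season]) - sum(y[:season])) / season ** 2
    seas = [y[i] - level for i in range(season)]
    for t in range(season, len(y)):
        s = seas[t % season]
        prev_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seas[t % season] = gamma * (y[t] - level) + (1 - gamma) * s
    return [level + (h + 1) * trend + seas[(len(y) + h) % season]
            for h in range(horizon)]

# Synthetic monthly demand: linear trend plus a 12-month seasonal cycle.
y = [100 + 0.5 * t + 10 * math.sin(2 * math.pi * t / 12) for t in range(120)]
forecast = holt_winters_additive(y)
truth = [100 + 0.5 * (120 + h) + 10 * math.sin(2 * math.pi * (120 + h) / 12)
         for h in range(12)]
```

On this deterministic trend-plus-seasonality series the 12-month-ahead forecasts land close to the true continuation, which is the behavior the study exploits over 1- and 2-year horizons.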
Importance sampling large deviations in nonequilibrium steady states. I.
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T
2018-03-28
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
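The exponential-reweighting difficulty can be demonstrated with a direct (unguided) estimator of the scaled cumulant generating function for a toy biased walker; all parameter values are illustrative. The effective sample size of the reweighted trajectory ensemble collapses as the bias parameter s grows, which is the sampling problem that guiding functions are introduced to fix.

```python
import numpy as np

rng = np.random.default_rng(1)

def scgf_direct(s, T=10, n_traj=5000, bias=0.1):
    """Direct estimate of the scaled cumulant generating function lambda(s)
    for the time-integrated displacement A of a biased Gaussian walker.
    The weights exp(-s * A) concentrate on exponentially rare trajectories,
    so the effective sample size collapses as |s| grows."""
    steps = rng.normal(bias, 1.0, size=(n_traj, T))
    A = steps.sum(axis=1)                  # time-integrated observable
    w = np.exp(-s * A)
    lam = np.log(w.mean()) / T             # SCGF estimate
    ess = w.sum() ** 2 / (w ** 2).sum()    # effective sample size
    return lam, ess

lam0, ess0 = scgf_direct(0.0)
lam1, ess1 = scgf_direct(0.5)
# Exact SCGF for i.i.d. N(bias, 1) steps: lambda(s) = -s*bias + s^2/2
```

At s = 0 every trajectory carries unit weight, so the effective sample size equals the number of trajectories; at s = 0.5 it drops by an order of magnitude even for this short walk, and the collapse worsens exponentially with trajectory length.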
Work fluctuations for Bose particles in grand canonical initial states.
Yi, Juyeon; Kim, Yong Woon; Talkner, Peter
2012-05-01
We consider bosons in a harmonic trap and investigate the fluctuations of the work performed by an adiabatic change of the trap curvature. Depending on the reservoir conditions such as temperature and chemical potential that provide the initial equilibrium state, the exponentiated work average (EWA) defined in the context of the Crooks relation and the Jarzynski equality may diverge if the trap becomes wider. We investigate how the probability distribution function (PDF) of the work signals this divergence. It is shown that at low temperatures the PDF is highly asymmetric with a steep fall-off at one side and an exponential tail at the other side. For high temperatures it is closer to a symmetric distribution approaching a Gaussian form. These properties of the work PDF are discussed in relation to the convergence of the EWA and to the existence of the hypothetical equilibrium state to which those thermodynamic potential changes refer that enter both the Crooks relation and the Jarzynski equality.
Optimal savings and the value of population.
Arrow, Kenneth J; Bensoussan, Alain; Feng, Qi; Sethi, Suresh P
2007-11-20
We study a model of economic growth in which an exogenously changing population enters in the objective function under total utilitarianism and into the state dynamics as the labor input to the production function. We consider an arbitrary population growth until it reaches a critical level (resp. saturation level) at which point it starts growing exponentially (resp. it stops growing altogether). This requires population as well as capital as state variables. By letting the population variable serve as the surrogate of time, we are still able to depict the optimal path and its convergence to the long-run equilibrium on a two-dimensional phase diagram. The phase diagram consists of a transient curve that reaches the classical curve associated with a positive exponential growth at the time the population reaches the critical level. In the case of an asymptotic population saturation, we expect the transient curve to approach the equilibrium as the population approaches its saturation level. Finally, we characterize the approaches to the classical curve and to the equilibrium.
Wang, Dongshu; Huang, Lihong
2014-03-01
In this paper, we investigate the periodic dynamical behaviors for a class of general Cohen-Grossberg neural networks with discontinuous right-hand sides, time-varying and distributed delays. By means of retarded differential inclusions theory and the fixed point theorem of multi-valued maps, the existence of periodic solutions for the neural networks is obtained. After that, we derive some sufficient conditions for the global exponential stability and convergence of the neural networks, in terms of nonsmooth analysis theory with generalized Lyapunov approach. Without assuming the boundedness (or the growth condition) and monotonicity of the discontinuous neuron activation functions, our results will also be valid. Moreover, our results extend previous works not only on discrete time-varying and distributed delayed neural networks with continuous or even Lipschitz continuous activations, but also on discrete time-varying and distributed delayed neural networks with discontinuous activations. We give some numerical examples to show the applicability and effectiveness of our main results. Copyright © 2013 Elsevier Ltd. All rights reserved.
Dynamics of the quantum search and quench-induced first-order phase transitions.
Coulamy, Ivan B; Saguia, Andreia; Sarandy, Marcelo S
2017-02-01
We investigate the excitation dynamics at a first-order quantum phase transition (QPT). More specifically, we consider the quench-induced QPT in the quantum search algorithm, which aims at finding out a marked element in an unstructured list. We begin by deriving the exact dynamics of the model, which is shown to obey a Riccati differential equation. Then, we discuss the probabilities of success by adopting either global or local adiabaticity strategies. Moreover, we determine the disturbance of the quantum criticality as a function of the system size. In particular, we show that the critical point exponentially converges to its thermodynamic limit even in a fast evolution regime, which is characterized by both entanglement QPT estimators and the Schmidt gap. The excitation pattern is manifested in terms of quantum domain walls separated by kinks. The kink density is then shown to follow an exponential scaling as a function of the evolution speed, which can be interpreted as a Kibble-Zurek mechanism for first-order QPTs.
A simplified analysis of the multigrid V-cycle as a fast elliptic solver
NASA Technical Reports Server (NTRS)
Decker, Naomi H.; Taasan, Shlomo
1988-01-01
For special model problems, Fourier analysis gives exact convergence rates for the two-grid multigrid cycle and, for more general problems, provides estimates of the two-grid convergence rates via local mode analysis. A method is presented for obtaining multigrid convergence rate estimates for cycles involving more than two grids (using essentially the same analysis as for the two-grid cycle). For the simple case of the V-cycle used as a fast Laplace solver on the unit square, the k-grid convergence rate bounds obtained by this method are sharper than the bounds predicted by the variational theory. Both theoretical justification and experimental evidence are presented.
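Local mode analysis of this kind reduces, for a simple smoother, to maximizing a Fourier symbol over the high-frequency modes. The sketch below does this for weighted Jacobi on the 1D Laplace stencil, a standard textbook case chosen for illustration rather than taken from the paper.

```python
import numpy as np

def jacobi_symbol(theta, omega):
    """Fourier symbol of weighted Jacobi on the 1D stencil (-1, 2, -1):
    g(theta) = 1 - omega * (1 - cos(theta))."""
    return 1.0 - omega * (1.0 - np.cos(theta))

def smoothing_factor(omega, n=10001):
    """Local mode analysis: worst-case damping over the high-frequency
    modes theta in [pi/2, pi], which the coarse grid cannot represent."""
    theta = np.linspace(np.pi / 2, np.pi, n)
    return np.abs(jacobi_symbol(theta, omega)).max()

mu = smoothing_factor(2.0 / 3.0)   # classical result: 1/3 for omega = 2/3
# Rough two-grid convergence estimate with nu pre-smoothing sweeps: mu**nu
rho_two_sweeps = mu ** 2
```

The same maximization with omega = 1 gives a smoothing factor of 1 (the theta = pi mode is not damped at all), which is why the weighted variant is used as a multigrid smoother.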
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value < 0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between the methods. However, using flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
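A quick numerical check for overdispersion is the Pearson dispersion statistic, which is near 1 for Poisson-consistent data and well above 1 otherwise. The sketch below uses simulated counts and an intercept-only model as an illustrative stand-in for the paper's regression setting; it is not the paper's score test.

```python
import numpy as np

rng = np.random.default_rng(7)

def pearson_dispersion(y, mu, n_params=1):
    """Pearson chi-square dispersion statistic: sum((y - mu)^2 / mu)
    divided by the residual degrees of freedom. Values well above 1
    indicate overdispersion relative to the Poisson assumption."""
    y = np.asarray(y, dtype=float)
    return float(np.sum((y - mu) ** 2 / mu) / (len(y) - n_params))

# Intercept-only 'model': the fitted Poisson mean is the sample mean.
poisson_y = rng.poisson(10.0, size=5000)
nb_y = rng.negative_binomial(n=2, p=2 / 12, size=5000)  # mean 10, variance 60

phi_poisson = pearson_dispersion(poisson_y, poisson_y.mean())
phi_nb = pearson_dispersion(nb_y, nb_y.mean())
```

For the Poisson sample the statistic sits near 1, while for the negative binomial sample (variance six times the mean) it sits near 6, flagging the overdispersion that a quasi-likelihood or negative binomial model would then absorb.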
Effects of heterogeneous convergence rate on consensus in opinion dynamics
NASA Astrophysics Data System (ADS)
Huang, Changwei; Dai, Qionglin; Han, Wenchen; Feng, Yuee; Cheng, Hongyan; Li, Haihong
2018-06-01
The Deffuant model has attracted much attention in the study of opinion dynamics. Here, we propose a modified version by introducing into the model a heterogeneous convergence rate, which depends on the opinion difference between interacting agents and a tunable parameter κ. We study the effects of the heterogeneous convergence rate on consensus by investigating the probability of complete consensus, the size of the largest opinion cluster, the number of opinion clusters, and the relaxation time. We find that decreasing the convergence rate lowers the confidence threshold above which the population always reaches complete consensus, and that there exists an optimal κ resulting in the minimal bounded confidence threshold. Moreover, we find that there exists a window below the confidence threshold in which complete consensus may be reached with a nonzero probability when κ is not too large. We also find that, within a certain confidence range, decreasing the convergence rate reduces the relaxation time, which is somewhat counterintuitive.
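A Deffuant-type update with an opinion-difference-dependent convergence rate can be sketched as follows. The specific rate form mu = 0.5 / (1 + kappa * |diff|), and all parameter values, are illustrative assumptions rather than the paper's definition.

```python
import numpy as np

rng = np.random.default_rng(3)

def deffuant_step(x, d, kappa):
    """One interaction of a Deffuant-type model with a heterogeneous
    convergence rate: within the confidence bound d, the two agents move
    toward each other by a step that shrinks with their opinion
    difference through kappa (an illustrative functional form)."""
    i, j = rng.integers(0, x.size, size=2)
    diff = x[j] - x[i]
    if i != j and abs(diff) < d:              # bounded confidence rule
        v = 0.5 / (1.0 + kappa * abs(diff)) * diff
        x[i] += v                             # symmetric update conserves
        x[j] -= v                             # the population mean

def n_clusters(x, tol=0.05):
    """Count opinion clusters as gaps larger than tol in sorted opinions."""
    return 1 + int(np.sum(np.diff(np.sort(x)) > tol))

x0 = rng.random(200)
x = x0.copy()
for _ in range(200000):
    deffuant_step(x, d=0.6, kappa=2.0)
```

With a confidence bound this wide the population collapses toward a single opinion cluster at the (conserved) mean, while kappa only slows how quickly that happens.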
Effects of proliferation on the decay of thermotolerance in Chinese hamster cells.
Armour, E P; Li, G C; Hahn, G M
1985-09-01
Development and decay of thermotolerance were observed in Chinese hamster HA-1 cells. The thermotolerance kinetics of exponentially growing and fed plateau-phase cells were compared. Following a 10-min heat exposure at 45 degrees C, cells in both growth states had similar rates of development of tolerance to a subsequent 45-min exposure at 45 degrees C. This thermotolerant state started to decay between 12 and 24 hr after the initial heat exposure. The decay appeared to initiate slightly sooner in the exponentially growing cells when compared to the fed plateau-phase cells. During the decay phase, the rate of thermotolerance decay was similar in the two growth conditions. In other experiments, cells were induced to divide at a slower rate by chronic growth (3 months) in a low concentration of fetal calf serum. Under these low serum conditions cells became more sensitive to heat and the rate of decay of thermotolerance remained the same for exponentially growing cells. Plateau-phase cells were also more sensitive, but thermotolerance decayed more rapidly in these cells. Although dramatic cell cycle perturbations were seen in the exponentially growing cells, these changes appeared not to be related to thermotolerance kinetics.
Bischof, Martin; Obermann, Caitriona; Hartmann, Matthias N; Hager, Oliver M; Kirschner, Matthias; Kluge, Agne; Strauss, Gregory P; Kaiser, Stefan
2016-11-22
Negative symptoms are considered core symptoms of schizophrenia. The Brief Negative Symptom Scale (BNSS) was developed to measure this symptomatic dimension according to a current consensus definition. The present study examined the psychometric properties of the German version of the BNSS. To extend previous findings on convergent validity, we employed the Temporal Experience of Pleasure Scale (TEPS), a hedonic self-report that distinguishes between consummatory and anticipatory pleasure. Additionally, we addressed convergent validity with an observer-rated assessment of apathy, the Apathy Evaluation Scale (AES), which was completed by the patient's primary nurse. Data were collected from 75 in- and outpatients of the Psychiatric Hospital, University of Zurich, diagnosed with either schizophrenia or schizoaffective disorder. We assessed convergent and discriminant validity, internal consistency and inter-rater reliability. We largely replicated the findings of the original version, showing good psychometric properties of the BNSS. In addition, the primary nurses' evaluations correlated moderately with the interview-based clinician ratings. The BNSS anhedonia items showed good convergent validity with the TEPS. Overall, the German BNSS shows good psychometric properties comparable to the original English version. Convergent validity extends beyond interview-based assessments of negative symptoms to self-rated anhedonia and observer-rated apathy.
A new look at atmospheric carbon dioxide
NASA Astrophysics Data System (ADS)
Hofmann, David J.; Butler, James H.; Tans, Pieter P.
Carbon dioxide is increasing in the atmosphere and is of considerable concern in global climate change because of its greenhouse gas warming potential. The rate of increase has accelerated since measurements began at Mauna Loa Observatory in 1958, where carbon dioxide increased from less than 1 part per million per year (ppm yr^-1) prior to 1970 to more than 2 ppm yr^-1 in recent years. Here we show that the anthropogenic component (atmospheric value reduced by the pre-industrial value of 280 ppm) of atmospheric carbon dioxide has been increasing exponentially with a doubling time of about 30 years since the beginning of the industrial revolution (~1800). Even during the 1970s, when fossil fuel emissions dropped sharply in response to the "oil crisis" of 1973, the anthropogenic atmospheric carbon dioxide level continued increasing exponentially at Mauna Loa Observatory. Since the growth rate (time derivative) of an exponential has the same characteristic lifetime as the function itself, the carbon dioxide growth rate is also doubling at the same rate. This explains the observation that the linear growth rate of carbon dioxide has more than doubled in the past 40 years. The accelerating growth rate is simply the outcome of exponential growth in carbon dioxide with a nearly constant doubling time of about 30 years (about 2%/yr) and appears to have tracked human population since the pre-industrial era.
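The stated model, a pre-industrial baseline of 280 ppm plus an anthropogenic excess that doubles roughly every 30 years, can be written down directly. The 1958 excess of 35 ppm used as a calibration point below is an assumption (Mauna Loa began near 315 ppm).

```python
def co2(year, base=280.0, a0=35.0, t0=1958, doubling=30.0):
    """Total CO2 (ppm) under the exponential model in the abstract:
    pre-industrial baseline plus an anthropogenic excess that doubles
    every `doubling` years.  a0 = 35 ppm at t0 = 1958 is an assumed
    calibration, roughly matching the start of the Mauna Loa record."""
    return base + a0 * 2.0 ** ((year - t0) / doubling)
```

One doubling time later (1988) the model gives 280 + 70 = 350 ppm, close to the observed value, and the yearly increment itself doubles over the same interval, which is the abstract's point about the accelerating linear growth rate.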
The Exponential Function--Part VIII
ERIC Educational Resources Information Center
Bartlett, Albert A.
1978-01-01
Presents part eight of a continuing series on the exponential function in which, given the current population of the Earth and assuming a constant growth rate of 1.9 percent, backward projections of world population are made. (SL)
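A constant-growth backward projection of this kind is one line of arithmetic. The starting population and 75-year span below are illustrative, not the article's figures; note that the projected value for 1900 (just under one billion) falls well short of the commonly cited historical estimate of about 1.6 billion, which suggests how far the real growth rate was from constant.

```python
def backward_population(p_now, years_back, rate=0.019):
    """World population `years_back` years earlier, assuming a constant
    annual growth rate (1.9% as in the article).  Illustrative only."""
    return p_now / (1.0 + rate) ** years_back

# e.g. from ~4 billion in the mid-1970s back 75 years:
pop_1900 = backward_population(4e9, 75)
```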
NASA Astrophysics Data System (ADS)
Allen, Linda J. S.
2016-09-01
Dr. Chowell and colleagues emphasize the importance of considering a variety of modeling approaches to characterize the growth of an epidemic during the early stages [1]. A fit of data from the 2009 H1N1 influenza pandemic and the 2014-2015 Ebola outbreak to models indicates sub-exponential growth, in contrast to the classic, homogeneous-mixing SIR model with exponential growth. With incidence rate βSI / N and S approximately equal to the total population size N, the number of new infections in an SIR epidemic model grows exponentially, as in the differential equation dI / dt = βSI / N − γI ≈ (β − γ)I, whose solution I(t) = I(0)e^{(β−γ)t} grows exponentially whenever β > γ.
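A quick Euler integration confirms the early-stage exponential growth of the homogeneous-mixing SIR model; β, γ and the population size are illustrative values, not the pandemic fits discussed above.

```python
import math

# Forward-Euler simulation of classic SIR; while S ~ N, infections follow
# dI/dt ~ (beta - gamma) * I and hence grow exponentially.
beta, gamma = 0.5, 0.2     # illustrative transmission and recovery rates
N, dt, T = 1e6, 1e-3, 10.0
S, I = N - 1.0, 1.0
t = 0.0
while t < T:
    new_inf = beta * S * I / N * dt
    S -= new_inf
    I += new_inf - gamma * I * dt
    t += dt

analytic = math.exp((beta - gamma) * T)   # I(0) = 1, so I(T) ~ e^{(beta-gamma)T}
```

With S still essentially equal to N at time T, the simulated prevalence tracks the analytic exponential to within a fraction of a percent.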
On adaptive learning rate that guarantees convergence in feedforward networks.
Behera, Laxmidhar; Kumar, Swagat; Patnaik, Awhan
2006-09-01
This paper investigates new learning algorithms (LF I and LF II) based on Lyapunov functions for the training of feedforward neural networks. Such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, in which the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with the aim of avoiding local minima; this modification also helps improve the convergence speed in some cases. Conditions for achieving a global minimum with this class of algorithms have been studied in detail. The performances of the proposed algorithms are compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and the 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. The proposed algorithms (LF I and II) are found to converge much faster than the other two algorithms at the same accuracy. Finally, the comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations made in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
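The Lyapunov idea, adapting the learning rate each step so that an energy function of the output error is guaranteed to decrease, can be sketched generically. The rate-halving scheme and the linear least-squares "network" below are stand-ins for illustration; the paper's LF I/II compute the adaptive rate from a convergence theorem, not by backtracking.

```python
import numpy as np

def train_adaptive(X, y, w, eta0=1.0, epochs=200):
    """Gradient descent whose learning rate is shrunk each step until the
    Lyapunov candidate V = 0.5*||e||^2 decreases, guaranteeing descent.
    A sketch of the Lyapunov principle only, not the paper's LF I/II."""
    def V(w):
        e = y - X @ w
        return 0.5 * float(e @ e)
    for _ in range(epochs):
        grad = -X.T @ (y - X @ w)                       # gradient of V
        eta = eta0
        while V(w - eta * grad) >= V(w) and eta > 1e-12:
            eta *= 0.5                                  # adapt rate until V drops
        w = w - eta * grad
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w_fit = train_adaptive(X, y, np.zeros(3))
```

Because every accepted step strictly decreases V, the iteration cannot oscillate or diverge the way a badly chosen fixed learning rate can.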
Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger
NASA Astrophysics Data System (ADS)
Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.
2016-12-01
Neisseria meningitidis (Nm) is a major cause of bacterial meningitis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurements and of an unknown parameter in an Nm model. We suppose that the yearly number of Nm-induced deaths and the total population are known inputs, which can be obtained from data, and that the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data on the total population and Nm-induced mortality. Then, we use an auxiliary system called an observer, whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only the known inputs and the model output. This allows us to estimate unmeasured state variables, such as the number of carriers that play an important role in the transmission of the infection, and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. In order to validate the estimation results, numerical simulations are conducted using real data from Niger.
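The observer concept can be illustrated on a generic linear system: an auxiliary copy of the dynamics, driven only by the measured output, whose estimation error decays exponentially. The matrices below are hypothetical and have nothing to do with the Nm model; they are chosen so that A − LC is stable (eigenvalues 0.4 and 0.8).

```python
import numpy as np

# True system x_{k+1} = A x_k with measured output y_k = C x_k.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.0]])            # observer gain; A - L C has eigenvalues 0.4, 0.8

x = np.array([[1.0], [-1.0]])    # true (unmeasured) state
xh = np.zeros((2, 1))            # observer estimate, deliberately wrong at start
for _ in range(60):
    y = C @ x                    # only the scalar output is measured
    xh = A @ xh + L @ (y - C @ xh)   # observer: model copy + output correction
    x = A @ x
err = float(np.linalg.norm(x - xh))
```

The estimation error obeys e_{k+1} = (A − LC)e_k, so it shrinks geometrically at the rate of the slowest observer eigenvalue, here 0.8 per step.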
The Nazca-South American convergence rate and the recurrence of the great 1960 Chilean earthquake
NASA Technical Reports Server (NTRS)
Stein, S.; Engeln, J. F.; Demets, C.; Gordon, R. G.; Woods, D.
1986-01-01
The seismic slip rate along the Chile Trench estimated from the slip in the great 1960 earthquake and the recurrence history of major earthquakes has been interpreted as consistent with the subduction rate of the Nazca plate beneath South America. The convergence rate, estimated from global relative plate motion models, depends significantly on closure of the Nazca - Antarctica - South America circuit. NUVEL-1, a new plate motion model which incorporates recently determined spreading rates on the Chile Rise, shows that the average convergence rate over the last three million years is slower than previously estimated. If this time-averaged convergence rate provides an appropriate upper bound for the seismic slip rate, either the characteristic Chilean subduction earthquake is smaller than the 1960 event, the average recurrence interval is greater than observed in the last 400 years, or both. These observations bear out the nonuniformity of plate motions on various time scales, the variability in characteristic subduction zone earthquake size, and the limitations of recurrence time estimates.
USDA-ARS?s Scientific Manuscript database
A new mechanistic growth model was developed to describe microbial growth under isothermal conditions. The new mathematical model was derived from the basic observation of bacterial growth that may include lag, exponential, and stationary phases. With this model, the lag phase duration and exponen...
NASA Astrophysics Data System (ADS)
Huang, Rui; Jin, Chunhua; Mei, Ming; Yin, Jingxue
2018-01-01
This paper deals with the existence and stability of traveling wave solutions for a degenerate reaction-diffusion equation with time delay. The degeneracy of spatial diffusion together with the effect of time delay causes us the essential difficulty for the existence of the traveling waves and their stabilities. In order to treat this case, we first show the existence of smooth- and sharp-type traveling wave solutions in the case of c≥c^* for the degenerate reaction-diffusion equation without delay, where c^*>0 is the critical wave speed of smooth traveling waves. Then, as a small perturbation, we obtain the existence of the smooth non-critical traveling waves for the degenerate diffusion equation with small time delay τ >0 . Furthermore, we prove the global existence and uniqueness of C^{α ,β } -solution to the time-delayed degenerate reaction-diffusion equation via compactness analysis. Finally, by the weighted energy method, we prove that the smooth non-critical traveling wave is globally stable in the weighted L^1 -space. The exponential convergence rate is also derived.
NASA Astrophysics Data System (ADS)
Massah, Mozhdeh; Kantz, Holger
2016-04-01
As we have one and only one Earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDP) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables, for which there exists an analytical expression for the rate function, correlated variables such as auto-regressive (short-memory) and auto-regressive fractionally integrated moving average (long-memory) processes do not admit an analytical LDP. We study the LDP for these processes in order to see how correlation affects this probability in comparison to iid data. Whereas short-range correlations lead to a simple correction of the sample size, long-range correlations lead to a sub-exponential decay of the LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120-year-long time series of daily temperature anomalies measured in Potsdam (Germany).
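The short-memory case, where correlations act like a reduced effective sample size, is easy to demonstrate by simulation. This is an illustration of the variance effect only, not the authors' LDP computation; the AR(1) coefficient and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 2000

def var_of_time_average(phi):
    """Variance (over realizations) of the time average of an AR(1) process
    x_t = phi*x_{t-1} + e_t with unit-variance innovations."""
    means = np.empty(reps)
    for r in range(reps):
        e = rng.standard_normal(n)
        x = np.empty(n)
        x[0] = e[0]
        for t in range(1, n):
            x[t] = phi * x[t - 1] + e[t]
        means[r] = x.mean()
    return float(np.var(means))

v_iid = var_of_time_average(0.0)   # ~ 1/n for iid data
v_ar = var_of_time_average(0.8)    # larger by roughly (1+phi)/(1-phi) times sigma_x^2
```

For φ = 0.8 the variance of the time average is about 25 times the iid value, so a single time average is a far noisier estimate of the ensemble mean than the raw sample size suggests.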
A variational method for analyzing limit cycle oscillations in stochastic hybrid systems
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.; MacLaurin, James
2018-06-01
Many systems in biology can be modeled through ordinary differential equations, which are piece-wise continuous, and switch between different states according to a Markov jump process known as a stochastic hybrid system or piecewise deterministic Markov process (PDMP). In the fast switching limit, the dynamics converges to a deterministic ODE. In this paper, we develop a phase reduction method for stochastic hybrid systems that support a stable limit cycle in the deterministic limit. A classic example is the Morris-Lecar model of a neuron, where the switching Markov process is the number of open ion channels and the continuous process is the membrane voltage. We outline a variational principle for the phase reduction, yielding an exact analytic expression for the resulting phase dynamics. We demonstrate that this decomposition is accurate over timescales that are exponential in the switching rate ɛ-1 . That is, we show that for a constant C, the probability that the expected time to leave an O(a) neighborhood of the limit cycle is less than T scales as T exp (-C a /ɛ ) .
Geophysical constraints on geodynamic processes at convergent margins: A global perspective
NASA Astrophysics Data System (ADS)
Artemieva, Irina; Thybo, Hans; Shulgin, Alexey
2016-04-01
Convergent margins, being the boundaries between colliding lithospheric plates, form the most disastrous areas in the world owing to their intense seismicity and volcanism. We review global geophysical data in order to illustrate the effects of plate tectonic processes at convergent margins on the crustal and upper mantle structure, seismicity, and the geometry of the subducting slab. We present global maps of free-air and Bouguer gravity anomalies, heat flow, seismicity, seismic Vs anomalies in the upper mantle, and plate convergence rate, as well as 20 profiles across different convergent margins. A global analysis of these data for three types of convergent margins, formed by ocean-ocean, ocean-continent, and continent-continent collisions, allows us to recognize the following patterns. (1) Plate convergence rate depends on the type of convergent margin and is significantly larger when at least one of the plates is oceanic. However, the oldest oceanic plate, in the Pacific Ocean, has the smallest convergence rate. (2) The presence of an oceanic plate is, in general, required for the generation of high-magnitude (M > 8.0) earthquakes and of intermediate and deep seismicity along convergent margins. When oceanic slabs subduct beneath a continent, a gap in the seismogenic zone exists at depths between ca. 250 km and 500 km. Given that the seismogenic zone terminates at ca. 200 km depth in the case of continent-continent collision, we propose an oceanic origin for the subducting slabs beneath the Zagros, the Pamir, and the Vrancea zone. (3) The dip angle of the subducting slab in continent-ocean collision correlates neither with the age of the subducting oceanic slab nor with the convergence rate. For ocean-ocean subduction, clear trends are recognized: steeply dipping slabs are characteristic of young subducting plates and of oceanic plates with high convergence rate, with slab rotation towards a near-vertical dip angle at depths below ca.
500 km at very high convergence rate. (4) Local isostasy is not satisfied at the convergent margins, as evidenced by strong free-air gravity anomalies of positive and negative signs. However, near-isostatic equilibrium may exist in broad zones of distributed deformation such as Tibet. (5) No systematic patterns are recognized in heat flow data, owing to the strong heterogeneity of measured values, which are affected by hydrothermal circulation, magmatic activity, crustal faulting and horizontal heat transfer, and also to the low number of heat flow measurements across many margins. (6) Low upper mantle Vs seismic velocities beneath the convergent margins are restricted to the upper 150 km and may be related to mantle wedge melting, which is confined to shallow mantle levels. Artemieva, I.M., Thybo, H., and Shulgin, A., 2015. Geophysical constraints on geodynamic processes at convergent margins: A global perspective. Gondwana Research, http://dx.doi.org/10.1016/j.gr.2015.06.010
Yang, Yana; Hua, Changchun; Guan, Xinping
2016-03-01
Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted by teleoperation systems demand high performance; tele-surgery, for example, requires high-speed and high-precision control to safeguard the patient's health. To obtain satisfactory performance, error-constrained control is employed by applying a barrier Lyapunov function (BLF). With constrained synchronization errors, high convergence speed, small overshoot, and an arbitrarily predefined small residual synchronization error can be achieved simultaneously. Nevertheless, as with many classical control schemes, error-constrained control achieves only asymptotic/exponential convergence, i.e., the synchronization errors converge to zero only as time goes to infinity; finite-time convergence is clearly more desirable. To obtain finite-time synchronization performance, a terminal sliding mode (TSM)-based finite-time control method is developed in this paper for teleoperation systems with position error constraints. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with newly transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with the system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the non-violation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experimental results are presented to show the effectiveness of the proposed method.
Seismic behaviour of mountain belts controlled by plate convergence rate
NASA Astrophysics Data System (ADS)
Dal Zilio, Luca; van Dinther, Ylona; Gerya, Taras V.; Pranger, Casper C.
2018-01-01
The relative contribution of tectonic and kinematic processes to the seismic behaviour of mountain belts is still controversial. To understand the partitioning between these processes we developed a model that simulates both tectonic and seismic processes in a continental collision setting. These 2D seismo-thermo-mechanical (STM) models obtain a Gutenberg-Richter frequency-magnitude distribution due to spontaneous events occurring throughout the orogen. Our simulations suggest that both the corresponding slope (b value) and the maximum earthquake magnitude (MWmax) correlate linearly with plate convergence rate. By analyzing 1D rheological profiles and isotherm depths we demonstrate that plate convergence rate controls the brittle strength through a rheological feedback with temperature and strain rate. Faster convergence leads to cooler temperatures and larger seismogenic domains, thereby increasing both MWmax and the relative number of large earthquakes (decreasing the b value). This mechanism also predicts a more seismogenic lower crust, which is confirmed by a transition from uni- to bi-modal hypocentre depth distributions in our models. This transition, and the linear relation of convergence rate to b value and MWmax, are supported by our comparison of earthquakes recorded across the Alps, Apennines, Zagros and Himalaya. These results imply that deformation in the Alps occurs in a more ductile manner than in the Himalaya, thereby reducing the seismic hazard of the Alps. Furthermore, a second set of experiments with higher temperature and different orogenic architecture shows the same linear relation with convergence rate, suggesting that large-scale tectonic structure plays a subordinate role. We thus propose that plate convergence rate, which also controls the average differential stress of the orogen and its linear relation to the b value, is the first-order parameter controlling the seismic hazard of mountain belts.
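The b value discussed above is conventionally estimated by maximum likelihood. A minimal sketch using the standard Aki estimator on a synthetic Gutenberg-Richter catalogue follows; nothing here comes from the paper's STM models.

```python
import math
import random

def b_value(mags, m_min):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b value from magnitudes above the completeness threshold m_min."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)

# Synthetic catalogue: under the Gutenberg-Richter law with b = 1.0,
# magnitude excesses above m_min are exponential with rate b*ln(10).
rng = random.Random(0)
m_min = 2.0
mags = [m_min + rng.expovariate(1.0 * math.log(10)) for _ in range(100_000)]
b_hat = b_value(mags, m_min)
```

With 10^5 events the estimator recovers b = 1 to well within a percent; the standard error scales as b/√n, so small regional catalogues give much noisier b values.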
Scaling behavior of ground-state energy cluster expansion for linear polyenes
NASA Astrophysics Data System (ADS)
Griffin, L. L.; Wu, Jian; Klein, D. J.; Schmalz, T. G.; Bytautas, L.
Ground-state energies for linear-chain polyenes are additively expanded in a sequence of terms for chemically relevant conjugated substructures of increasing size. The asymptotic behavior of the large-substructure limit (i.e., high-polymer limit) is investigated as a means of characterizing the rapidity of convergence and consequent utility of this energy cluster expansion. Consideration is directed to computations via: simple Hückel theory, a refined Hückel scheme with geometry optimization, restricted Hartree-Fock self-consistent field (RHF-SCF) solutions of fixed bond-length Pariser-Parr-Pople (PPP)/Hubbard models, and ab initio SCF approaches with and without geometry optimization. The cluster expansion in what might be described as the more "refined" approaches appears to lead to qualitatively more rapid convergence: exponentially fast as opposed to an inverse power at the simple Hückel or SCF-Hubbard levels. The substructural energy cluster expansion then seems to merit special attention. Its possible utility in making accurate extrapolations from finite systems to extended polymers is noted.
Analytically Solvable Model of Spreading Dynamics with Non-Poissonian Processes
NASA Astrophysics Data System (ADS)
Jo, Hang-Hyun; Perotti, Juan I.; Kaski, Kimmo; Kertész, János
2014-01-01
Non-Poissonian bursty processes are ubiquitous in natural and social phenomena, yet little is known about their effects on the large-scale spreading dynamics. In order to characterize these effects, we devise an analytically solvable model of susceptible-infected spreading dynamics in infinite systems for arbitrary inter-event time distributions and for the whole time range. Our model is stationary from the beginning, and the role of the lower bound of inter-event times is explicitly considered. The exact solution shows that for early and intermediate times, the burstiness accelerates the spreading as compared to a Poisson-like process with the same mean and same lower bound of inter-event times. Such behavior is opposite for late-time dynamics in finite systems, where the power-law distribution of inter-event times results in a slower and algebraic convergence to a fully infected state in contrast to the exponential decay of the Poisson-like process. We also provide an intuitive argument for the exponent characterizing algebraic convergence.
NASA Technical Reports Server (NTRS)
Fromme, J. A.; Golberg, M. A.
1979-01-01
Lift interference effects are discussed based on Bland's (1968) integral equation. A mathematical existence theory is utilized for which convergence of the numerical method has been proved for general (square-integrable) downwashes. Airloads are computed using orthogonal airfoil polynomial pairs in conjunction with a collocation method which is numerically equivalent to Galerkin's method and complex least squares. Convergence exhibits exponentially decreasing error with the number n of collocation points for smooth downwashes, whereas errors are proportional to 1/n for discontinuous downwashes. The latter can be reduced to 1/n^(m+1) with mth-order Richardson extrapolation (by using m = 2, hundredfold error reductions were obtained with only a 13% increase of computer time). Numerical results are presented showing acoustic resonance, as well as the effect of Mach number, ventilation, height-to-chord ratio, and mode shape on wind-tunnel interference. Excellent agreement with experiment is obtained in steady flow, and good agreement is obtained for unsteady flow.
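Richardson extrapolation of the kind described, cancelling successive powers of 1/n by doubling the number of collocation points, can be sketched generically. The test sequence below is synthetic, not Bland's integral equation.

```python
def richardson(f, n, order=1):
    """Richardson extrapolation for a sequence f(n) = L + c1/n + c2/n^2 + ...
    Each level doubles n and cancels the next power of 1/n, so the error
    drops from O(1/n) to O(1/n^(order+1))."""
    vals = [f(n * 2 ** k) for k in range(order + 1)]
    for m in range(1, order + 1):
        vals = [(2 ** m * vals[k + 1] - vals[k]) / (2 ** m - 1)
                for k in range(len(vals) - 1)]
    return vals[0]

f = lambda n: 1.0 + 1.0 / n + 1.0 / n**2 + 1.0 / n**3   # known limit L = 1
plain_err = abs(f(10) - 1.0)
rich_err = abs(richardson(f, 10, order=2) - 1.0)
```

At n = 10 the raw sequence is off by about 0.11, while two levels of extrapolation leave only the O(1/n^3) remainder, comfortably a hundredfold improvement, matching the abstract's m = 2 observation.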
Anasontzis, George E; Salazar Peña, Margarita; Spadiut, Oliver; Brumer, Harry; Olsson, Lisbeth
2014-01-01
Optimization of protein production from methanol-induced Pichia pastoris cultures is necessary to ensure high productivity rates and high yields of recombinant proteins. We investigated the effects of temperature and different linear or exponential methanol-feeding rates on the production of recombinant Fusarium graminearum galactose oxidase (EC 1.1.3.9) in a P. pastoris Mut+ strain, under regulation of the AOX1 promoter. We found that low exponential methanol feeding led to 1.5-fold higher volumetric productivity compared to high exponential feeding rates. The duration of glycerol feeding did not affect the subsequent product yield, but longer glycerol feeding led to higher initial biomass concentration, which would reduce the oxygen demand and generate less heat during induction. A linear and a low exponential feeding profile led to productivities in the same range, but the latter was characterized by intense fluctuations in the titers of galactose oxidase and total protein. An exponential feeding profile that has been adapted to the apparent biomass concentration results in more stable cultures, but the concentration of recombinant protein is in the same range as when constant methanol feeding is employed. © 2014 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers Biotechnol. Prog., 30:728–735, 2014 PMID:24493559
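An exponential feeding profile of the kind compared in the study is conventionally written F(t) = F0·e^(μt), so that the feed tracks exponential biomass growth at specific rate μ. The constants below are illustrative, not the study's settings.

```python
import math

def methanol_feed(t, f0=2.0, mu=0.02):
    """Exponential fed-batch feed profile F(t) = f0 * exp(mu * t).
    f0 (g/h) is the initial feed rate and mu (1/h) the targeted specific
    growth rate; both values here are illustrative assumptions."""
    return f0 * math.exp(mu * t)
```

With μ = 0.02 h^-1 the feed rate doubles every ln(2)/μ ≈ 35 h, which is what distinguishes a "low exponential" profile from a linear ramp over a long induction.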
Rounded stretched exponential for time relaxation functions.
Powles, J G; Heyes, D M; Rickayzen, G; Evans, W A B
2009-12-07
A rounded stretched exponential function is introduced, C(t) = exp{(τ0/τE)^β [1 − (1 + (t/τ0)^2)^{β/2}]}, where t is time, and τ0 and τE are two relaxation times. This expression can be used to represent the relaxation function of many real dynamical processes, as at long times, t > τ0, the function converges to a stretched exponential with normalizing relaxation time τE, yet its expansion is even, or symmetric, in time, which is a statistical mechanical requirement. This expression fits well the shear stress relaxation function for model soft-sphere fluids near coexistence, with τE
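The function is straightforward to implement and check: it equals 1 at t = 0, is even in t, and approaches a stretched exponential in τE at long times. The parameter values below are illustrative.

```python
import math

def C(t, tau0=1.0, tauE=2.0, beta=0.6):
    """Rounded stretched exponential:
    C(t) = exp{ (tau0/tauE)^beta * [1 - (1 + (t/tau0)^2)^(beta/2)] }.
    Parameter values are illustrative, not fitted to any data."""
    a = (tau0 / tauE) ** beta
    return math.exp(a * (1.0 - (1.0 + (t / tau0) ** 2) ** (beta / 2)))

# For t >> tau0 the function approaches exp(-(t/tauE)^beta) up to the
# constant prefactor exp((tau0/tauE)^beta), so this ratio tends to a constant:
ratio = lambda t: C(t) / math.exp(-(t / 2.0) ** 0.6)
```

Because t enters only through t^2, the expansion about t = 0 contains only even powers, which is the statistical-mechanical symmetry requirement the abstract mentions.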
Maternal and child mortality indicators across 187 countries of the world: converging or diverging.
Goli, Srinivas; Arokiasamy, Perianayagam
2014-01-01
This study reassessed the progress achieved since 1990 in maternal and child mortality indicators to test whether the progress is converging or diverging across countries worldwide. The convergence process is examined using standard parametric and non-parametric econometric models of convergence. The results of absolute convergence estimates reveal that progress in maternal and child mortality indicators is diverging for the entire period of 1990-2010 [maternal mortality ratio (MMR) - β = .00033, p < .574; neonatal mortality rate (NNMR) - β = .04367, p < .000; post-neonatal mortality rate (PNMR) - β = .02677, p < .000; under-five mortality rate (U5MR) - β = .00828, p < .000)]. In the recent period, such divergence is replaced with convergence for MMR but remains for all the child mortality indicators. The Kernel density estimates reveal a considerable reduction in the divergence of MMR for the recent period; however, the Kernel density distribution plots show more than one 'peak', which indicates the emergence of convergence clubs based on mortality levels. For child mortality indicators, the Kernel estimates suggest that divergence is in progress across the countries worldwide but tends toward convergence for countries with low mortality levels. Mere progress in the global averages of maternal and child mortality indicators across a global cross-section of countries does not warrant convergence unless there is a considerable reduction in the variance, skewness and range of change.
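Absolute β-convergence of the kind estimated above reduces to regressing the annualized change in log mortality on the initial log level; a significantly negative slope means high-mortality countries improve faster. The synthetic data below are illustrative, not the study's 187-country panel.

```python
import numpy as np

def beta_convergence(y0, y1, years):
    """OLS slope of annualized log-mortality change on initial log level
    (absolute beta-convergence): negative slope = convergence."""
    growth = (np.log(y1) - np.log(y0)) / years
    return float(np.polyfit(np.log(y0), growth, 1)[0])

rng = np.random.default_rng(0)
y0 = rng.uniform(10.0, 500.0, size=187)                     # initial mortality levels
converging = y0 ** 0.5 * np.exp(rng.normal(0, 0.05, 187))   # spread compresses
diverging = y0 ** 1.3 * np.exp(rng.normal(0, 0.05, 187))    # spread widens
slope_c = beta_convergence(y0, converging, 20)
slope_d = beta_convergence(y0, diverging, 20)
```

The sign of the fitted slope cleanly separates the two synthetic regimes, which is exactly the diagnostic reported as β in the abstract.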
Nonlinear Algorithms for Channel Equalization and Map Symbol Detection.
NASA Astrophysics Data System (ADS)
Giridhar, K.
The transfer of information through a communication medium invariably results in various kinds of distortion to the transmitted signal. In this dissertation, a feed-forward neural network-based equalizer, and a family of maximum a posteriori (MAP) symbol detectors are proposed for signal recovery in the presence of intersymbol interference (ISI) and additive white Gaussian noise. The proposed neural network-based equalizer employs a novel bit-mapping strategy to handle multilevel data signals in an equivalent bipolar representation. It uses a training procedure to learn the channel characteristics, and at the end of training, the multilevel symbols are recovered from the corresponding inverse bit-mapping. When the channel characteristics are unknown and no training sequences are available, blind estimation of the channel (or its inverse) and simultaneous data recovery are required. Convergence properties of several existing Bussgang-type blind equalization algorithms are studied through computer simulations, and a unique gain-independent approach is used to obtain a fair comparison of their rates of convergence. Although simple to implement, the slow convergence of these Bussgang-type blind equalizers makes them unsuitable for many high data-rate applications. Rapidly converging blind algorithms based on the principle of MAP symbol-by-symbol detection are proposed, which adaptively estimate the channel impulse response (CIR) and simultaneously decode the received data sequence. Assuming a linear and Gaussian measurement model, the near-optimal blind MAP symbol detector (MAPSD) consists of a parallel bank of conditional Kalman channel estimators, where the conditioning is done on each possible data subsequence that can convolve with the CIR. This algorithm is also extended to the recovery of convolutionally encoded waveforms in the presence of ISI.
Since the complexity of the MAPSD algorithm increases exponentially with the length of the assumed CIR, a suboptimal decision-feedback mechanism is introduced to truncate the channel memory "seen" by the MAPSD section. Also, simpler gradient-based updates for the channel estimates, and a metric pruning technique are used to further reduce the MAPSD complexity. Spatial diversity MAP combiners are developed to enhance the error rate performance and combat channel fading. As a first application of the MAPSD algorithm, dual-mode recovery techniques for TDMA (time-division multiple access) mobile radio signals are presented. Combined estimation of the symbol timing and the multipath parameters is proposed, using an auxiliary extended Kalman filter during the training cycle, and then tracking of the fading parameters is performed during the data cycle using the blind MAPSD algorithm. For the second application, a single-input receiver is employed to jointly recover cochannel narrowband signals. Assuming known channels, this two-stage joint MAPSD (JMAPSD) algorithm is compared to the optimal joint maximum likelihood sequence estimator, and to the joint decision-feedback detector. A blind MAPSD algorithm for the joint recovery of cochannel signals is also presented. Computer simulation results are provided to quantify the performance of the various algorithms proposed in this dissertation.
Contingency, convergence and hyper-astronomical numbers in biological evolution.
Louis, Ard A
2016-08-01
Counterfactual questions such as "what would happen if you re-run the tape of life?" turn on the nature of the landscape of biological possibilities. Since the number of potential sequences that store genetic information grows exponentially with length, genetic possibility spaces can be so unimaginably vast that commentators frequently reach for hyper-astronomical metaphors that compare their size to that of the universe. Re-run the tape of life and the likelihood of encountering the same sequences in such hyper-astronomically large spaces is infinitesimally small, suggesting that evolutionary outcomes are highly contingent. On the other hand, the widespread occurrence of evolutionary convergence implies that similar phenotypes can be found again with relative ease. How can this be? Part of the solution to this conundrum must lie in the manner that genotypes map to phenotypes. By studying simple genotype-phenotype maps, where the counterfactual space of all possible phenotypes can be enumerated, it is shown that strong bias in the arrival of variation may explain why certain phenotypes are (repeatedly) observed in nature, while others never appear. This biased variation provides a non-selective cause for certain types of convergence. It illustrates how the role of randomness and contingency may differ significantly between genetic and phenotype spaces. Copyright © 2016 Elsevier Ltd. All rights reserved.
GLOBAL RATES OF CONVERGENCE OF THE MLES OF LOG-CONCAVE AND s-CONCAVE DENSITIES
Doss, Charles R.; Wellner, Jon A.
2017-01-01
We establish global rates of convergence for the Maximum Likelihood Estimators (MLEs) of log-concave and s-concave densities on ℝ. The main finding is that the rate of convergence of the MLE in the Hellinger metric is no worse than n^(-2/5) when −1 < s < ∞, where s = 0 corresponds to the log-concave case. We also show that the MLE does not exist for the classes of s-concave densities with s < −1. PMID:28966409
Attractors of three-dimensional fast-rotating Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Trahe, Markus
The three-dimensional (3-D) rotating Navier-Stokes equations describe the dynamics of rotating, incompressible, viscous fluids. In this work, they are considered with smooth, time-independent forces, and the original statements implied by the classical "Taylor-Proudman Theorem" of geophysics are rigorously proved. It is shown that fully developed turbulence of 3-D fast-rotating fluids is essentially characterized by turbulence of two-dimensional (2-D) fluids in terms of numbers of degrees of freedom. In this context, the 3-D nonlinear "resonant limit equations", which arise in a non-linear averaging process as the rotation frequency Ω → infinity, are studied and optimal (2-D-type) upper bounds for fractal box and Hausdorff dimensions of the global attractor as well as upper bounds for box dimensions of exponential attractors are determined. Then, the convergence of exponential attractors for the full 3-D rotating Navier-Stokes equations to exponential attractors for the resonant limit equations as Ω → infinity in the sense of full Hausdorff-metric distances is established. This provides upper and lower semi-continuity of exponential attractors with respect to the rotation frequency and implies that the number of degrees of freedom (attractor dimension) of 3-D fast-rotating fluids is close to that of 2-D fluids. Finally, the algebraic-geometric structure of the Poincaré curves, which control the resonances and small divisor estimates for partial differential equations, is further investigated; the 3-D nonlinear limit resonant operators are characterized by three-wave interactions governed by these curves. A new canonical transformation between those curves is constructed, with far-reaching consequences for the density of the latter.
Generalized Bregman distances and convergence rates for non-convex regularization methods
NASA Astrophysics Data System (ADS)
Grasmair, Markus
2010-11-01
We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^(1/p) holds if the regularization term has a slightly faster growth at zero than |t|^p.
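For context, the classical (convex) Bregman distance that the paper generalizes, together with the shape of the rate statement, can be written as follows; the notation for the regularized and exact solutions is a standard convention assumed here, not taken from the abstract.

```latex
% Classical Bregman distance of a convex functional R at y,
% taken in the direction of a subgradient \xi \in \partial R(y):
D_{\xi}(x, y) = R(x) - R(y) - \langle \xi, \, x - y \rangle .
% Sparse-regularization rate from the abstract: if the regularization
% term grows slightly faster than |t|^{p} at zero, then
\lVert x_{\alpha}^{\delta} - x^{\dagger} \rVert = O\!\bigl(\delta^{1/p}\bigr),
% where \delta is the noise level, x_{\alpha}^{\delta} the regularized
% solution, and x^{\dagger} the exact solution.
```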
Growth and mortality of larval Myctophum affine (Myctophidae, Teleostei).
Namiki, C; Katsuragawa, M; Zani-Teixeira, M L
2015-04-01
The growth and mortality rates of Myctophum affine larvae were analysed based on samples collected during the austral summer and winter of 2002 from south-eastern Brazilian waters. The larvae ranged in size from 2·75 to 14·00 mm standard length (L(S)). Daily increment counts from 82 sagittal otoliths showed that the age of M. affine ranged from 2 to 28 days. Three models were applied to estimate the growth rate: linear regression, exponential model and Laird-Gompertz model. The exponential model best fitted the data, and L(0) values from exponential and Laird-Gompertz models were close to the smallest larva reported in the literature (c. 2·5 mm L(S)). The average growth rate (0·33 mm day(-1)) was intermediate among lanternfishes. The mortality rate (12%) during the larval period was below average compared with other marine fish species but similar to some epipelagic fishes that occur in the area. © 2015 The Fisheries Society of the British Isles.
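The model comparison described above can be sketched with standard curve fitting. The length-at-age data below are synthetic, generated to fall loosely within the ranges the abstract reports; the parameter values are illustrative assumptions, not the paper's estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Synthetic length-at-age data (mm vs. days), illustrative only.
age = np.linspace(2, 28, 40)
length = 2.5 * np.exp(0.065 * age) + rng.normal(0, 0.2, age.size)

def linear(t, a, b):
    return a + b * t

def exponential(t, L0, g):
    return L0 * np.exp(g * t)

def laird_gompertz(t, L0, A, k):
    # L(t) = L0 * exp((A/k) * (1 - exp(-k t)))
    return L0 * np.exp((A / k) * (1.0 - np.exp(-k * t)))

p_lin, _ = curve_fit(linear, age, length)
p_exp, _ = curve_fit(exponential, age, length, p0=(2.0, 0.05))
p_lg, _ = curve_fit(laird_gompertz, age, length, p0=(2.0, 0.1, 0.05),
                    bounds=([0.1, 1e-3, 1e-3], [10.0, 1.0, 1.0]))

for name, model, p in [("linear", linear, p_lin),
                       ("exponential", exponential, p_exp),
                       ("Laird-Gompertz", laird_gompertz, p_lg)]:
    sse = float(np.sum((length - model(age, *p)) ** 2))
    print(f"{name}: SSE = {sse:.2f}")
```

Comparing residual sums of squares (or an information criterion, for models with different parameter counts) is the usual way to select among such growth models.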
Magnified gradient function with deterministic weight modification in adaptive learning.
Ng, Sin-Chun; Cheung, Chi-Chung; Leung, Shu-Hung
2004-11-01
This paper presents two novel approaches, backpropagation (BP) with magnified gradient function (MGFPROP) and deterministic weight modification (DWM), to speed up the convergence rate and improve the global convergence capability of the standard BP learning algorithm. The purpose of MGFPROP is to increase the convergence rate by magnifying the gradient function of the activation function, while the main objective of DWM is to reduce the system error by changing the weights of a multilayered feedforward neural network in a deterministic way. Simulation results show that the performance of the above two approaches is better than that of BP and other modified BP algorithms for a number of learning problems. Moreover, the integration of the above two approaches, forming a new algorithm called MDPROP, can further improve the performance of MGFPROP and DWM. From our simulation results, the MDPROP algorithm always outperforms BP and other modified BP algorithms in terms of convergence rate and global convergence capability.
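The idea of magnifying the gradient of the activation function can be sketched on a tiny network. The exact magnification used in MGFPROP is not given in the abstract; raising the sigmoid derivative to a power 1/m < 1 (which enlarges small gradients) is an assumed form for illustration, as are the network size and learning rate.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny 2-4-1 network trained on XOR: plain BP vs. a magnified gradient.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def train(magnify, epochs=4000, lr=0.5, m=2.0):
    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)
        o = sigmoid(h @ W2 + b2)
        d_act = o * (1 - o)                 # standard sigmoid derivative
        if magnify:
            d_act = d_act ** (1.0 / m)      # magnified gradient (assumed form)
        delta_o = (o - y) * d_act
        delta_h = (delta_o @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ delta_o; b2 -= lr * delta_o.sum(0)
        W1 -= lr * X.T @ delta_h; b1 -= lr * delta_h.sum(0)
    return float(np.mean((o - y) ** 2))

print("plain BP final MSE:   ", train(False))
print("magnified final MSE:  ", train(True))
```

Because the sigmoid derivative is at most 0.25, the magnified version is always larger, which counteracts the flat-spot problem that slows standard BP.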
Experimental research of iterated dynamics for the complex exponentials with linear term
NASA Astrophysics Data System (ADS)
Matyushkin, Igor V.; Zapletina, Maria A.
2018-03-01
The research of the orbit of the point zero, fixed points, and the Julia and Fatou sets for the iterated complex-valued exponential is carried out by means of computer experiment. The object of study is three one-parameter families based on exp(iz): f : z → (1 + μ) exp(iz), g : z → (1 + μ|z ‑ z*|) exp(iz), h : z → (1 + μ(z ‑ z*)) exp(iz). For the first family, 17- and 2-periodic regimes are detected when passing near the bifurcation value μ ≈ 2.475i, where the multiplier equals 1. The second family shows a more interesting behavior: (i) a three-valley structure of the isolines of the convergence rate near the fixpoint z* at μ = 0 + 1 + i; (ii) a saddle-node transition when the parameter moves along the straight line Reμ = 0, leading to the appearance of a second fixpoint and loss of stability of the old fixpoint at Imμ = 2.1682; (iii) the nontrivial nature of the orbits of points in the vicinity of the new fixpoint and the presence of false fixpoints in the portrait of the Julia set; (iv) a second phase transition leading to a radical change in the form of the Julia and Fatou sets at μ ≃ 2.5i. The dynamics of the third family during movement along Reμ = 0 is similar to the first case, but the 17- and 2-periodic modes are replaced by 39- and 3-periodic modes. The transitions 17 → 2 and 39 → 3 appear to be rapid and discrete, while their geometric interpretation matches the ratios 17 = 1 + 2*8, 39 = 13*3. At μ = |z*|‑1, the Julia set of the h-family fills the entire complex plane.
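Orbits of the first family are easy to reproduce numerically. The parameter value below is an arbitrary illustrative choice, not one of the bifurcation points studied in the paper, and the divergence guard is a practical assumption.

```python
import numpy as np

# Orbit of z0 = 0 under f(z) = (1 + mu) * exp(i z).
def orbit(mu, z0=0.0, n=2000):
    z = complex(z0)
    traj = []
    for _ in range(n):
        z = (1 + mu) * np.exp(1j * z)
        # |exp(iz)| = exp(-Im z), so large |Im z| means imminent over/underflow.
        if abs(z.imag) > 50:
            break
        traj.append(z)
    return traj

traj = orbit(0.1j)          # illustrative parameter, not a bifurcation value
tail = traj[-10:]
spread = max(abs(a - b) for a in tail for b in tail)
print("orbit length:", len(traj), " tail spread:", spread)
```

A small tail spread indicates convergence to a fixed point, a spread that cycles through a few values indicates a periodic regime, and an early break indicates escape; scanning μ along Reμ = 0 with this kind of loop is how the periodic windows described in the abstract can be located.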
Analog detection for cavity lifetime spectroscopy
Zare, Richard N.; Harb, Charles C.; Paldus, Barbara A.; Spence, Thomas G.
2001-05-15
An analog detection system for determining a ring-down rate or decay rate 1/τ of an exponentially decaying ring-down beam issuing from a lifetime or ring-down cavity during a ring-down phase. Alternatively, the analog detection system determines a build-up rate of an exponentially growing beam issuing from the cavity during a ring-up phase. The analog system can be employed in continuous wave cavity ring-down spectroscopy (CW CRDS) and pulsed CRDS (P CRDS) arrangements utilizing any type of ring-down cavity, including ring cavities and linear cavities.
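The patented system extracts 1/τ with analog circuitry; the same quantity is commonly recovered digitally by fitting an exponential to the sampled trace. The sketch below uses synthetic data with an assumed 5 µs decay time, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ring-down trace: I(t) = I0 * exp(-t / tau) + detector noise.
tau_true = 5e-6                              # assumed decay time (seconds)
t = np.linspace(0, 30e-6, 600)
signal = 2.0 * np.exp(-t / tau_true) + rng.normal(0, 0.005, t.size)

# Log-linear least squares on the high-SNR part of the decay:
# log I = log I0 - t / tau, so the fitted slope is -1/tau.
mask = signal > 0.05
coeffs = np.polyfit(t[mask], np.log(signal[mask]), 1)
tau_fit = -1.0 / coeffs[0]
print(f"fitted tau: {tau_fit:.3e} s")
```

Log-linear fitting is simple but weights the noisy tail poorly; weighted or nonlinear least squares is preferable at low SNR, which is part of why fast analog rate determination is attractive.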
Analog detection for cavity lifetime spectroscopy
Zare, Richard N.; Harb, Charles C.; Paldus, Barbara A.; Spence, Thomas G.
2003-01-01
An analog detection system for determining a ring-down rate or decay rate 1/τ of an exponentially decaying ring-down beam issuing from a lifetime or ring-down cavity during a ring-down phase. Alternatively, the analog detection system determines a build-up rate of an exponentially growing beam issuing from the cavity during a ring-up phase. The analog system can be employed in continuous wave cavity ring-down spectroscopy (CW CRDS) and pulsed CRDS (P CRDS) arrangements utilizing any type of ring-down cavity, including ring cavities and linear cavities.
NASA Astrophysics Data System (ADS)
Kaltenbacher, Barbara; Klassen, Andrej
2018-05-01
In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.
Human population and atmospheric carbon dioxide growth dynamics: Diagnostics for the future
NASA Astrophysics Data System (ADS)
Hüsler, A. D.; Sornette, D.
2014-10-01
We analyze the growth rates of human population and of atmospheric carbon dioxide by comparing the relative merits of two benchmark models, the exponential law and the finite-time-singular (FTS) power law. The latter results from positive feedbacks, either direct or mediated by other dynamical variables, as shown in our presentation of a simple endogenous macroeconomic dynamical growth model describing the growth dynamics of coupled processes involving human population (labor in economic terms), capital and technology (proxied by CO2 emissions). Human population in the context of our energy-intensive economies constitutes arguably the most important underlying driving variable of the content of carbon dioxide in the atmosphere. Using some of the best databases available, we perform empirical analyses confirming that the human population on Earth grew super-exponentially until the mid-1960s, followed by a decelerated sub-exponential growth, with a tendency to plateau at simple exponential growth in the last decade, with an average growth rate of 1.0% per year. In contrast, we find that the content of carbon dioxide in the atmosphere continued to accelerate super-exponentially until 1990, with a transition to a progressive deceleration since then, with an average growth rate of approximately 2% per year in the last decade. To go back to CO2 atmosphere contents equal to or smaller than the 1990 level, as has been the broadly advertised goal of international treaties since 1990, requires herculean changes: from a dynamical point of view, the approximately exponential growth must not only turn to negative acceleration but also to negative velocity to reverse the trend.
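The two benchmark models can be compared on a synthetic series. The data below are generated from an FTS law with invented parameters (they are not population or CO2 data), and the comparison is done on log residuals, an illustrative choice.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

def exponential(t, A, r):
    return A * np.exp(r * t)

def fts_power_law(t, A, tc, alpha):
    # Finite-time-singular power law: diverges as t approaches tc.
    return A * (tc - t) ** (-alpha)

# Synthetic growth series from the FTS law (illustrative parameters).
t = np.linspace(0, 80, 200)
data = fts_power_law(t, 50.0, 100.0, 1.0) * np.exp(rng.normal(0, 0.02, t.size))

p_exp, _ = curve_fit(exponential, t, data, p0=(1.0, 0.02), maxfev=10000)
p_fts, _ = curve_fit(fts_power_law, t, data, p0=(10.0, 120.0, 0.5),
                     bounds=([1e-3, 85.0, 0.1], [1e6, 1e4, 5.0]))

def log_sse(model, p):
    return float(np.sum((np.log(data) - np.log(model(t, *p))) ** 2))

print("exponential log-SSE:  ", round(log_sse(exponential, p_exp), 3))
print("FTS power law log-SSE:", round(log_sse(fts_power_law, p_fts), 3))
```

The bound tc > max(t) keeps the singularity time beyond the observation window during the fit, which is necessary for the power of (tc − t) to stay real.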
Linearized traveling wave amplifier with hard limiter characteristics
NASA Technical Reports Server (NTRS)
Kosmahl, H. G. (Inventor)
1986-01-01
A dynamic velocity taper is provided for a traveling wave tube with increased linearity to avoid intermodulation of signals being amplified. In a traveling wave tube, the slow wave structure is a helix including a sever. A dynamic velocity taper is provided by gradually reducing the spacing between the repeating elements of the slow wave structure which are the windings of the helix. The reduction which takes place coincides with the ouput point of helix. The spacing between the repeating elements of the slow wave structure is ideally at an exponential rate because the curve increases the point of maximum efficiency and power, at an exponential rate. A coupled cavity traveling wave tube having cavities is shown. The space between apertured discs is gradually reduced from 0.1% to 5% at an exponential rate. Output power (or efficiency) versus input power for a commercial tube is shown.
Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S
2003-10-01
Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm(2) in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
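The stretched-exponential signal model from the abstract can be sketched as a fit to synthetic decay data. The tissue parameter values below are illustrative assumptions, not the rat cortex measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

# Stretched-exponential model of diffusion signal decay:
#   S(b) = S0 * exp(-(b * DDC)^alpha),  0 < alpha <= 1,
# where DDC is the distributed diffusion coefficient and alpha indexes
# intravoxel heterogeneity (alpha = 1 recovers monoexponential decay).
def stretched_exp(b, S0, DDC, alpha):
    return S0 * np.exp(-((b * DDC) ** alpha))

b = np.linspace(500, 6500, 13)                 # s/mm^2, as in the abstract
signal = stretched_exp(b, 1.0, 0.9e-3, 0.7)    # assumed S0, DDC, alpha
signal = signal * (1 + rng.normal(0, 0.01, b.size))

popt, _ = curve_fit(stretched_exp, b, signal, p0=(1.0, 1e-3, 0.9),
                    bounds=([0.1, 1e-5, 0.1], [10.0, 1e-1, 1.0]))
S0, DDC, alpha = popt
print(f"DDC = {DDC:.2e} mm^2/s, alpha = {alpha:.2f}")
```

A fitted alpha well below 1 signals a broad distribution of decay rates within the voxel, which is the heterogeneity measure the abstract describes.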
Performance Ratings: Designs for Evaluating Their Validity and Accuracy.
1986-07-01
ratees with substantial validity and with little bias due to the method for rating. Convergent validity and discriminant validity account for approximately...The expanded research design suggests that purpose for the ratings has little influence on the multitrait-multimethod properties of the ratings...Convergent and discriminant validity again account for substantial differences in the ratings of performance. Little method bias is present; both methods of
Harrison, John A
2008-09-04
RHF/aug-cc-pVnZ, UHF/aug-cc-pVnZ, and QCISD/aug-cc-pVnZ, n = 2-5, potential energy curves of H2 X (1)Σg(+) are analyzed by Fourier transform methods after transformation to a new coordinate system via an inverse hyperbolic cosine coordinate mapping. The Fourier frequency domain spectra are interpreted in terms of underlying mathematical behavior giving rise to distinctive features. There is a clear difference between the underlying mathematical nature of the potential energy curves calculated at the HF and full-CI levels. The method is particularly suited to the analysis of potential energy curves obtained at the highest levels of theory because the Fourier spectra are observed to be of a compact nature, with the envelope of the Fourier frequency coefficients decaying in magnitude in an exponential manner. The finite number of Fourier coefficients required to describe the CI curves allows for an optimum sampling strategy to be developed, corresponding to that required for exponential and geometric convergence. The underlying random numerical noise due to the finite convergence criterion is also a clearly identifiable feature in the Fourier spectrum. The methodology is applied to the analysis of MRCI potential energy curves for the ground and first excited states of HX (X = H-Ne). All potential energy curves exhibit structure in the Fourier spectrum consistent with the existence of resonances. The compact nature of the Fourier spectra following the inverse hyperbolic cosine coordinate mapping is highly suggestive that there is some advantage in viewing the chemical bond as having an underlying hyperbolic nature.
Solving Upwind-Biased Discretizations: Defect-Correction Iterations
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
1999-01-01
This paper considers defect-correction solvers for a second order upwind-biased discretization of the 2D convection equation. The following important features are reported: (1) The asymptotic convergence rate is about 0.5 per defect-correction iteration. (2) If the operators involved in defect-correction iterations have different approximation order, then the initial convergence rates may be very slow. The number of iterations required to get into the asymptotic convergence regime might grow on fine grids as a negative power of h. In the case of a second order target operator and a first order driver operator, this number of iterations is roughly proportional to h^(-1/3). (3) If both the operators have the second approximation order, the defect-correction solver demonstrates the asymptotic convergence rate after three iterations at most. The same three iterations are required to converge algebraic error below the truncation error level. A novel comprehensive half-space Fourier mode analysis (which, by the way, can take into account the influence of discretized outflow boundary conditions as well) for the defect-correction method is developed. This analysis explains many phenomena observed in solving non-elliptic equations and provides a close prediction of the actual solution behavior. It predicts the convergence rate for each iteration and the asymptotic convergence rate. As a result of this analysis, a new very efficient adaptive multigrid algorithm solving the discrete problem to within a given accuracy is proposed. Numerical simulations confirm the accuracy of the analysis and the efficiency of the proposed algorithm. The results of the numerical tests are reported.
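The defect-correction iteration itself, solving a target discretization by repeatedly inverting an easier driver operator, can be shown in a generic linear-algebra sketch. The random target matrix and identity driver below are assumptions for illustration, not the paper's upwind-biased convection operators.

```python
import numpy as np

rng = np.random.default_rng(6)

# Defect correction: solve L_target u = f via
#   u_{k+1} = u_k + L_driver^{-1} (f - L_target u_k)
n = 50
L_target = np.eye(n) + 0.1 * rng.normal(0, 1, (n, n)) / np.sqrt(n)
L_driver = np.eye(n)                   # crude driver, trivial to invert
f = rng.normal(0, 1, n)
u_exact = np.linalg.solve(L_target, f)

u = np.zeros(n)
errors = []
for _ in range(10):
    defect = f - L_target @ u                    # current residual (defect)
    u = u + np.linalg.solve(L_driver, defect)    # correct via the driver
    errors.append(float(np.linalg.norm(u - u_exact)))

# The error contracts roughly by the spectral radius of
# I - L_driver^{-1} L_target at every iteration.
print("error ratio per iteration:", round(errors[-1] / errors[-2], 3))
```

The closer the driver approximates the target, the smaller that spectral radius and the faster the convergence, which is the trade-off behind the first-order-driver/second-order-target pairing analyzed in the paper.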
Burns, Kevin J; Shultz, Allison J; Title, Pascal O; Mason, Nicholas A; Barker, F Keith; Klicka, John; Lanyon, Scott M; Lovette, Irby J
2014-06-01
Thraupidae is the second largest family of birds and represents about 4% of all avian species and 12% of the Neotropical avifauna. Species in this family display a wide range of plumage colors and patterns, foraging behaviors, vocalizations, ecotypes, and habitat preferences. The lack of a complete phylogeny for tanagers has hindered the study of this evolutionary diversity. Here, we present a comprehensive, species-level phylogeny for tanagers using six molecular markers. Our analyses identified 13 major clades of tanagers that we designate as subfamilies. In addition, two species are recognized as distinct branches on the tanager tree. Our topologies disagree in many places with previous estimates of relationships within tanagers, and many long-recognized genera are not monophyletic in our analyses. Our trees identify several cases of convergent evolution in plumage ornaments and bill morphology, and two cases of social mimicry. The phylogeny produced by this study provides a robust framework for studying macroevolutionary patterns and character evolution. We use our new phylogeny to study diversification processes, and find that tanagers show a background model of exponentially declining diversification rates. Thus, the evolution of tanagers began with an initial burst of diversification followed by a rate slowdown. In addition to this background model, two later, clade-specific rate shifts are supported, one increase for Darwin's finches and another increase for some species of Sporophila. The rate of diversification within these two groups is exceptional, even when compared to the overall rapid rate of diversification found within tanagers. This study provides the first robust assessment of diversification rates for the Darwin's finches in the context of the larger group within which they evolved. Copyright © 2014 Elsevier Inc. All rights reserved.
Time evolution of predictability of epidemics on networks.
Holme, Petter; Takaguchi, Taro
2015-04-01
Epidemic outbreaks of new pathogens, or known pathogens in new populations, cause a great deal of fear because they are hard to predict. For theoretical models of disease spreading, on the other hand, quantities characterizing the outbreak converge to deterministic functions of time. Our goal in this paper is to shed some light on this apparent discrepancy. We measure the diversity of (and, thus, the predictability of) outbreak sizes and extinction times as functions of time given different scenarios of the amount of information available. Under the assumption of perfect information-i.e., knowing the state of each individual with respect to the disease-the predictability decreases exponentially, or faster, with time. The decay is slowest for intermediate values of the per-contact transmission probability. With a weaker assumption on the information available, assuming that we know only the fraction of currently infectious, recovered, or susceptible individuals, the predictability also decreases exponentially most of the time. There are, however, some peculiar regions in this scenario where the predictability increases. In other words, to predict its final size with a given accuracy, we would need increasingly more information about the outbreak.
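The diversity-of-outcomes measurement can be sketched with repeated stochastic SIR runs. A well-mixed population is used here instead of a network, and all rates are illustrative; the standard deviation across realizations is one crude proxy for the diversity the paper quantifies.

```python
import numpy as np

rng = np.random.default_rng(7)

# Discrete-time stochastic SIR in a well-mixed population of size N.
def run_sir(N=200, beta=0.3, gamma=0.1, steps=100):
    S, I, R = N - 1, 1, 0
    traj = []
    for _ in range(steps):
        p_inf = 1.0 - (1.0 - beta / N) ** I      # per-susceptible infection prob.
        new_inf = rng.binomial(S, p_inf)
        new_rec = rng.binomial(I, gamma)
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        traj.append(I)
    return traj

# Spread of I(t) across many realizations as a function of time.
runs = np.array([run_sir() for _ in range(200)])
spread = runs.std(axis=0)
print("spread is largest at t =", int(spread.argmax()))
```

Conditioning such an ensemble on partial observations (e.g., only the current fraction infectious) and watching how fast the remaining spread shrinks is the kind of information-scenario comparison the abstract describes.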
The stationary non-equilibrium plasma of cosmic-ray electrons and positrons
NASA Astrophysics Data System (ADS)
Tomaschitz, Roman
2016-06-01
The statistical properties of the two-component plasma of cosmic-ray electrons and positrons measured by the AMS-02 experiment on the International Space Station and the HESS array of imaging atmospheric Cherenkov telescopes are analyzed. Stationary non-equilibrium distributions defining the relativistic electron-positron plasma are derived semi-empirically by performing spectral fits to the flux data and reconstructing the spectral number densities of the electronic and positronic components in phase space. These distributions are relativistic power-law densities with exponential cutoff, admitting an extensive entropy variable and converging to the Maxwell-Boltzmann or Fermi-Dirac distributions in the non-relativistic limit. Cosmic-ray electrons and positrons constitute a classical (low-density high-temperature) plasma due to the low fugacity in the quantized partition function. The positron fraction is assembled from the flux densities inferred from least-squares fits to the electron and positron spectra and is subjected to test by comparing with the AMS-02 flux ratio measured in the GeV interval. The calculated positron fraction extends to TeV energies, predicting a broad spectral peak at about 1 TeV followed by exponential decay.
Improving Strategies via SMT Solving
NASA Astrophysics Data System (ADS)
Gawlitza, Thomas Martin; Monniaux, David
We consider the problem of computing numerical invariants of programs by abstract interpretation. Our method eschews two traditional sources of imprecision: (i) the use of widening operators for enforcing convergence within a finite number of iterations, and (ii) the use of merge operations (often, convex hulls) at the merge points of the control flow graph. It instead computes the least inductive invariant expressible in the domain at a restricted set of program points, and analyzes the rest of the code en bloc. We emphasize that we compute this inductive invariant precisely. For that we extend the strategy improvement algorithm of Gawlitza and Seidl [17]. If we applied their method directly, we would have to solve an exponentially sized system of abstract semantic equations, resulting in memory exhaustion. Instead, we keep the system implicit and discover strategy improvements using SAT modulo real linear arithmetic (SMT). For evaluating strategies we use linear programming. Our algorithm has low polynomial space complexity and, for contrived examples, performs exponentially many strategy improvement steps in the worst case; this is unsurprising, since we show that the associated abstract reachability problem is Π2P-complete.
The behavior of a convergent plate boundary - Crustal deformation in the South Kanto district, Japan
NASA Technical Reports Server (NTRS)
Scholz, C. H.; Kato, T.
1978-01-01
The northwesternmost part of the Sagami trough, a part of the Philippine Sea-Eurasian plate boundary, was ruptured during the great South Kanto earthquake in 1923. Very extensive and frequent geodetic measurements of crustal deformation have been made in the South Kanto district since the 1890's, and these constitute the most complete data set on crustal movements in the world. These data were reanalyzed and interpreted and according to our interpretation indicate the following sequence of events. The coseismic movements were due to oblique thrust and right lateral slip of about 8 m on a fault outcropping at the base of the Sagami trough. This was followed by postseismic deformation resulting from reversed afterslip of 20-60 cm that occurred at an exponentially decaying rate in time. The interseismic deformation is produced by steady subduction at a rate of about 1.8 cm/yr. During subduction the top 10-15 km of the plate boundary is apparently locked, while deeper parts slip aseismically at an irregular rate. No significant precursory deformation was observed. The recurrence time for 1923 type earthquakes is 200-300 years. The Boso and Miura peninsulas are broken into a series of fault-bound blocks that move semi-independently of the surrounding region. The subduction zone itself, where it is exposed on land, is shown to be a wide zone encompassing several faults that are active at different times.
Experimental Magnetohydrodynamic Energy Extraction from a Pulsed Detonation
2015-03-01
experimental data taken in this thesis will follow voltage profiles similar to Fig. 2. Notice the initial section in Fig. 2 shows exponential decay consistent...equal that time constant. The exponential curves in Fig. 2 show how changing the time constant can change the charge and/or discharge rate of the...see Fig. 1), at a sampling rate of 1 MHz. Shielded wire and a common ground were used throughout the DAQ system to avoid capacitive issues in the
Nonlinear convergence active vibration absorber for single and multiple frequency vibration control
NASA Astrophysics Data System (ADS)
Wang, Xi; Yang, Bintang; Guo, Shufeng; Zhao, Wenqiang
2017-12-01
This paper presents a nonlinear convergence algorithm for an active dynamic undamped vibration absorber (ADUVA). The damping of the absorber is ignored in this algorithm to strengthen the vibration suppressing effect and simplify the algorithm at the same time. The simulation and experimental results indicate that this nonlinear convergence ADUVA can significantly suppress vibration caused by excitation at both single and multiple frequencies. The proposed nonlinear algorithm is composed of equivalent dynamic modeling equations and a frequency estimator. Both the single- and multiple-frequency ADUVA are mathematically imitated by the same mechanical structure with a mass body and a voice coil motor (VCM). The nonlinear convergence estimator is applied to simultaneously satisfy the requirements of a fast convergence rate and a small steady-state frequency error, which are incompatible for a linear convergence estimator. The convergence of the nonlinear algorithm is mathematically proved, and its non-divergent characteristic is theoretically guaranteed. The vibration suppressing experiments demonstrate that the nonlinear ADUVA can accelerate the convergence rate of vibration suppressing and achieve greater oscillation attenuation than the linear ADUVA.
Gonzalez-Gil, Graciela; Kleerebezem, Robbert; Lettinga, Gatze
1999-01-01
When metals were added in a pulse mode to methylotrophic-methanogenic biomass, three methane production rate phases were recognized. Increased concentrations of Ni and Co accelerated the initial exponential and final arithmetic increases in the methane production rate and reduced the temporary decrease in the rate. When Ni and Co were added continuously, the temporary decrease phase was eliminated and the exponential production rate increased. We hypothesize that the temporary decrease in the methane production rate and the final arithmetic increase in the methane production rate were due to micronutrient limitations and that the precipitation-dissolution kinetics of metal sulfides may play a key role in the bioavailability of these compounds. PMID:10103284
Gonzalez-Gil, G; Kleerebezem, R; Lettinga, G
1999-04-01
When metals were added in a pulse mode to methylotrophic-methanogenic biomass, three methane production rate phases were recognized. Increased concentrations of Ni and Co accelerated the initial exponential and final arithmetic increases in the methane production rate and reduced the temporary decrease in the rate. When Ni and Co were added continuously, the temporary decrease phase was eliminated and the exponential production rate increased. We hypothesize that the temporary decrease in the methane production rate and the final arithmetic increase in the methane production rate were due to micronutrient limitations and that the precipitation-dissolution kinetics of metal sulfides may play a key role in the bioavailability of these compounds.
Weighted least squares phase unwrapping based on the wavelet transform
NASA Astrophysics Data System (ADS)
Chen, Jiafeng; Chen, Haiqin; Yang, Zhengang; Ren, Haixia
2007-01-01
The weighted least squares phase unwrapping algorithm is a robust and accurate method for solving the phase unwrapping problem. The method usually leads to a large, sparse linear equation system, which is typically solved by Gauss-Seidel relaxation iteration. However, this iteration is impractical because of its extremely slow convergence. The multigrid method improves the convergence rate, but it requires an additional weight restriction operator that is very complicated. For this reason, a multiresolution analysis method based on the wavelet transform is proposed. By applying the wavelet transform, the original system is decomposed into coarse and fine resolution levels, and an equivalent equation system with a better convergence condition is obtained. Fast convergence at the separate coarse resolution levels speeds up the overall convergence rate. Simulated experiments show that the proposed method converges faster and provides better results than the multigrid method.
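To see the slow convergence that motivates the multiresolution approach, the sketch below applies plain Gauss-Seidel relaxation to a small Poisson-like system of the kind least-squares phase unwrapping produces; the tridiagonal test matrix, its size, and the tolerances are illustrative choices, not the paper's actual discretization.

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=100000):
    """Basic Gauss-Seidel relaxation; returns the solution and iteration count."""
    n = len(b)
    x = x0.copy()
    for it in range(1, max_iter + 1):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol:
            return x, it
    return x, max_iter

# 1-D discrete Laplacian: a small analogue of the sparse SPD system that
# least-squares phase unwrapping produces (here with uniform weights).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true

x, iters = gauss_seidel(A, b, np.zeros(n))
# even for n = 50 unknowns, convergence takes thousands of sweeps
```

The iteration count grows rapidly with the grid size, which is exactly why coarse-level acceleration (multigrid or the wavelet decomposition above) is needed in practice.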
Partha, Raghavendran; Chauhan, Bharesh K; Ferreira, Zelia; Robinson, Joseph D; Lathrop, Kira; Nischal, Ken K
2017-01-01
The underground environment imposes unique demands on life that have led subterranean species to evolve specialized traits, many of which evolved convergently. We studied convergence in evolutionary rate in subterranean mammals in order to associate phenotypic evolution with specific genetic regions. We identified a strong excess of vision- and skin-related genes that changed at accelerated rates in the subterranean environment due to relaxed constraint and adaptive evolution. We also demonstrate that ocular-specific transcriptional enhancers were convergently accelerated, whereas enhancers active outside the eye were not. Furthermore, several uncharacterized genes and regulatory sequences demonstrated convergence and thus constitute novel candidate sequences for congenital ocular disorders. The strong evidence of convergence in these species indicates that evolution in this environment is recurrent and predictable and can be used to gain insights into phenotype–genotype relationships. PMID:29035697
Microcomputer Calculation of Theoretical Pre-Exponential Factors for Bimolecular Reactions.
ERIC Educational Resources Information Center
Venugopalan, Mundiyath
1991-01-01
Described is the application of microcomputers to predict reaction rates based on theoretical atomic and molecular properties taught in undergraduate physical chemistry. Listed is the BASIC program which computes the partition functions for any specific bimolecular reactants. These functions are then used to calculate the pre-exponential factor of…
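The core computation behind such a program, translational partition functions feeding a transition-state-theory pre-exponential factor A = (k_B T/h) q‡/(q_A q_B), can be sketched in a few lines of Python rather than BASIC. This is a simplified illustration: only translational degrees of freedom are kept, and the hydrogen-atom masses are example inputs, not the article's actual program or reactions.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
AMU = 1.66053907e-27  # atomic mass unit, kg

def q_trans(mass_kg, T):
    """Translational partition function per unit volume: (2*pi*m*k*T/h^2)^(3/2)."""
    return (2.0 * math.pi * mass_kg * K_B * T / H**2) ** 1.5

def pre_exponential(m_a, m_b, T):
    """TST pre-exponential factor A = (kT/h) * q(AB‡) / (q(A) * q(B)),
    keeping only translational contributions (rotations/vibrations omitted)."""
    q_a = q_trans(m_a, T)
    q_b = q_trans(m_b, T)
    q_ts = q_trans(m_a + m_b, T)  # the activated complex carries the total mass
    return (K_B * T / H) * q_ts / (q_a * q_b)

# Example: two hydrogen atoms at 300 K; A is in m^3 per molecule per second
A300 = pre_exponential(1.008 * AMU, 1.008 * AMU, 300.0)
```

With only translational contributions, A decreases slowly with temperature (as T^(-1/2)); adding rotational and vibrational partition functions, as the full program does, changes this dependence.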
Growth of Juniperus and Potentilla using Liquid Exponential and Controlled-release Fertilizers
R. Kasten Dumroese
2003-01-01
Juniperus scopulorum Sarg. (Rocky Mountain juniper) and Potentilla fruticosa L. 'Gold Drop' (gold drop potentilla) plants grown in containers had similar or better morphology, higher nitrogen concentrations and contents, and higher N-use efficiency when grown with liquid fertilizer applied at an exponentially increasing rate as...
Exploring Exponential Decay Using Limited Resources
ERIC Educational Resources Information Center
DePierro, Ed; Garafalo, Fred; Gordon, Patrick
2018-01-01
Science students need exposure to activities that will help them to become familiar with phenomena exhibiting exponential decay. This paper describes an experiment that allows students to determine the rate of thermal energy loss by a hot object to its surroundings. It requires limited equipment, is safe, and gives reasonable results. Students…
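The analysis students perform on such data, extracting a rate constant from Newton's law of cooling, T(t) = T_env + (T0 - T_env)e^(-kt), can be sketched numerically. The temperatures and the rate constant below are made-up illustration values, not measurements from the described experiment.

```python
import numpy as np

def cooling_rate_constant(t, T, T_env):
    """Estimate k in T(t) = T_env + (T0 - T_env)*exp(-k*t) by a
    log-linear least-squares fit of the excess temperature."""
    excess = T - T_env
    slope, _ = np.polyfit(t, np.log(excess), 1)
    return -slope

# Synthetic 'hot object' data: T0 = 90 C, room at 22 C, true k = 0.05 per minute
rng = np.random.default_rng(0)
t = np.linspace(0, 30, 16)             # minutes
T = 22.0 + (90.0 - 22.0) * np.exp(-0.05 * t)
T += rng.normal(0.0, 0.1, t.size)      # small measurement noise

k_est = cooling_rate_constant(t, T, T_env=22.0)
```

Plotting log(T - T_env) against t and checking for a straight line is the same diagnostic the experiment asks of students: linearity confirms exponential decay, and the slope gives the rate.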
NASA Astrophysics Data System (ADS)
Sazuka, Naoya
2007-03-01
We analyze waiting times for price changes in a foreign currency exchange rate. Recent empirical studies of high-frequency financial data support the view that trades in financial markets do not follow a Poisson process and that the waiting times between trades are not exponentially distributed. Here we show that our data are well approximated by a Weibull distribution rather than an exponential distribution in the non-asymptotic regime. Moreover, we quantitatively evaluate how far the empirical data are from an exponential distribution using a Weibull fit. Finally, we discuss a transition between a Weibull law and a power law in the long-time asymptotic regime.
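The deviation from exponentiality can be summarized by the Weibull shape parameter m (m = 1 recovers the exponential). A minimal sketch of such a fit, on synthetic rather than FX data, uses the linearization log(-log S(t)) = m log t - m log(eta) of the Weibull survival function; the shape value 0.6 below is an arbitrary sub-exponential example, not the paper's estimate.

```python
import numpy as np

def weibull_shape(samples):
    """Estimate the Weibull shape parameter from the empirical survival
    function via the linearization log(-log S(t)) = m*log(t) - m*log(eta)."""
    x = np.sort(samples)
    n = x.size
    S = 1.0 - (np.arange(n) + 0.5) / n          # empirical survival probabilities
    slope, _ = np.polyfit(np.log(x), np.log(-np.log(S)), 1)
    return slope

rng = np.random.default_rng(1)
m_true = 0.6                                    # sub-exponential waiting times
waits = rng.weibull(m_true, 50000)              # Weibull samples, scale eta = 1
m_est = weibull_shape(waits)

# An exponential sample (m = 1) serves as the Poisson-process reference case
m_exp = weibull_shape(rng.exponential(1.0, 50000))
```

A shape estimate well below 1 indicates clustering of events (bursts of trades followed by lulls), which is the qualitative departure from the Poisson picture described above.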
Slow Crack Growth of Brittle Materials With Exponential Crack-Velocity Formulation. Part 1; Analysis
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.
2002-01-01
Extensive slow-crack-growth (SCG) analysis was performed using a primary exponential crack-velocity formulation under three widely used load configurations: constant stress rate, constant stress, and cyclic stress. Although using the exponential formulation to determine the SCG parameters of a material requires somewhat inconvenient numerical procedures, the resulting solutions give almost the same degree of simplicity in both data analysis and experiment as the power-law formulation. However, the fact that the inert strength of a material must be known in advance to determine the corresponding SCG parameters was a major drawback of the exponential formulation compared with the power-law formulation.
Orderings for conjugate gradient preconditionings
NASA Technical Reports Server (NTRS)
Ortega, James M.
1991-01-01
The effect of orderings on the rate of convergence of the conjugate gradient method with SSOR or incomplete Cholesky preconditioning is examined. Some results are also presented that help to explain why red/black ordering gives an inferior rate of convergence.
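While the note concerns SSOR and incomplete Cholesky preconditioners, the basic effect of preconditioning on CG iteration counts can be sketched with the simpler Jacobi (diagonal) preconditioner; the ill-conditioned SPD test matrix below is a made-up example, not one from the study.

```python
import numpy as np

def pcg(A, b, M_inv_diag=None, tol=1e-8, max_iter=5000):
    """Preconditioned conjugate gradient. M_inv_diag holds the inverse of a
    diagonal preconditioner (None = unpreconditioned). Returns (x, iterations)."""
    n = len(b)
    x = np.zeros(n)
    r = b.copy()
    z = r * M_inv_diag if M_inv_diag is not None else r.copy()
    p = z.copy()
    rz = r @ z
    for it in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:          # stop on the true residual
            return x, it
        z = r * M_inv_diag if M_inv_diag is not None else r.copy()
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Ill-conditioned SPD test matrix: widely varying diagonal, weak off-diagonal coupling
n = 200
d = np.logspace(0, 5, n)
A = np.diag(d) - 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1))
b = np.ones(n)

x_plain, it_plain = pcg(A, b)
x_jac, it_jac = pcg(A, b, M_inv_diag=1.0 / d)
```

Orderings matter because they change which couplings survive in an SSOR or incomplete-Cholesky factor; the toy above only toggles whether a preconditioner is applied at all, which is enough to see the iteration-count effect.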
NASA Astrophysics Data System (ADS)
Fan, Meng; Ye, Dan
2005-09-01
This paper studies the dynamics of a system of retarded functional differential equations (RFDEs) that generalizes the Hopfield neural network models, the bidirectional associative memory neural networks, the hybrid network models of the cellular neural network type, and some population growth models. Sufficient criteria are established for global exponential stability and for the existence and uniqueness of a pseudo almost periodic solution. The approach is based on constructing suitable Lyapunov functionals and on the well-known Banach contraction mapping principle. The paper ends with applications of the main results to neural network and population growth models, together with numerical simulations.
A wavelet approach to binary blackholes with asynchronous multitasking
NASA Astrophysics Data System (ADS)
Lim, Hyun; Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo
2016-03-01
Highly accurate simulations of binary black holes and neutron stars are needed to address a variety of interesting problems in relativistic astrophysics. We present a new method for solving the Einstein equations (BSSN formulation) using iterated interpolating wavelets. Wavelet coefficients provide a direct measure of the local approximation error for the solution and place collocation points that naturally adapt to features of the solution. Further, they exhibit exponential convergence on unevenly spaced collocation points. The parallel implementation of the wavelet simulation framework presented here deviates from conventional practice by combining multi-threading with a form of message-driven computation sometimes referred to as asynchronous multitasking.
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Sanner, Robert M.
2001-01-01
A nonlinear control scheme for attitude control of a spacecraft is combined with a nonlinear gyro bias observer for the case of constant gyro bias, in the presence of gyro noise. The observer bias estimates converge exponentially to a mean square bound determined by the standard deviation of the gyro noise. The resulting coupled, closed loop dynamics are proven to be globally stable, with asymptotic tracking which is also mean square bounded. A simulation of the proposed observer-controller design is given for a rigid spacecraft tracking a specified, time-varying attitude sequence to illustrate the theoretical claims.
Fragile X syndrome neurobiology translates into rational therapy.
Braat, Sien; Kooy, R Frank
2014-04-01
Causal genetic defects have been identified for various neurodevelopmental disorders. A key example in this respect is fragile X syndrome, one of the most frequent genetic causes of intellectual disability and autism. Since the discovery of the causal gene, insights into the underlying pathophysiological mechanisms have increased exponentially. Over the past years, defects were discovered in pathways that are potentially amendable by pharmacological treatment. These findings have inspired the initiation of clinical trials in patients. The targeted pathways converge in part with those of related neurodevelopmental disorders raising hopes that the treatments developed for this specific disorder might be more broadly applicable. Copyright © 2014 Elsevier Ltd. All rights reserved.
A novel continuous fractional sliding mode control
NASA Astrophysics Data System (ADS)
Muñoz-Vázquez, A. J.; Parra-Vega, V.; Sánchez-Orta, A.
2017-10-01
A new fractional-order controller is proposed, whose novelty is twofold: (i) it withstands a class of continuous but not necessarily differentiable disturbances, as well as uncertainties and unmodelled dynamics, and (ii) based on a principle of dynamic memory resetting of the differintegral operator, it enforces an invariant sliding mode in finite time. Both (i) and (ii) account for exponential convergence of tracking errors, where this principle is instrumental in demonstrating closed-loop stability, robustness, and a sustained sliding motion, as well as that high frequencies are filtered out from the control signal. The proposed methodology is illustrated with a representative simulation study.
Two-dimensional, phase modulated lattice sums with application to the Helmholtz Green’s function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linton, C. M., E-mail: C.M.Linton@lboro.ac.uk
2015-01-15
A class of two-dimensional phase modulated lattice sums in which the denominator is an indefinite quadratic polynomial Q is expressed in terms of a single, exponentially convergent series of elementary functions. This expression provides an extremely efficient method for the computation of the quasi-periodic Green’s function for the Helmholtz equation that arises in a number of physical contexts when studying wave propagation through a doubly periodic medium. For a class of sums in which Q is positive definite, our new result can be used to generate representations in terms of θ-functions which are significant generalisations of known results.
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Gyekenyesi, John P.
2002-01-01
The life prediction analysis based on an exponential crack velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant stress-rate ("dynamic fatigue") and preload testing at ambient and elevated temperatures. The fit of the data to the strength versus ln (stress rate) relation was found to be very reasonable for most of the materials. It was also found that the preloading technique was equally applicable for the case of slow crack growth (SCG) parameter n > 30. The major limitation of the exponential crack velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important SCG parameter n, a significant drawback compared with the conventional power-law crack velocity formulation.
DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro
2016-10-01
This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating direction method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits a linear convergence rate to the optimal objective, but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.
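The linear (geometric) convergence that both DADMM and DQM target can be illustrated with the simplest decentralized primitive: consensus averaging through a doubly stochastic mixing matrix over a neighbor-only network. This is a toy sketch of the communication pattern, not the DQM algorithm itself; the ring topology and node values are invented.

```python
import numpy as np

# 5-node ring network: each node mixes only with its two neighbours
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25       # W is symmetric and doubly stochastic

x = np.array([10.0, -4.0, 3.0, 7.0, 0.0])   # local values held by the nodes
avg = x.mean()                               # the consensus target

errors = []
for _ in range(60):
    x = W @ x                      # one round of neighbour-only communication
    errors.append(np.linalg.norm(x - avg))

# Linear convergence: the error shrinks by a roughly constant factor per round,
# governed by the second-largest eigenvalue modulus of W.
ratios = [errors[k + 1] / errors[k] for k in range(40, 50)]
```

The constant per-round contraction factor is the discrete analogue of the linear convergence rate constant discussed above; DQM's contribution is preserving such a constant while replacing each node's exact minimization with a cheap quadratic approximation.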
Policy Effects in Hyperbolic vs. Exponential Models of Consumption and Retirement
Gustman, Alan L.; Steinmeier, Thomas L.
2012-01-01
This paper constructs a structural retirement model with hyperbolic preferences and uses it to estimate the effect of several potential Social Security policy changes. Estimated effects of policies are compared using two models, one with hyperbolic preferences and one with standard exponential preferences. Sophisticated hyperbolic discounters may accumulate substantial amounts of wealth for retirement. We find it is frequently difficult to distinguish empirically between models with the two types of preferences on the basis of asset accumulation paths or consumption paths around the period of retirement. Simulations suggest that, despite the much higher initial time preference rate, individuals with hyperbolic preferences may actually value a real annuity more than individuals with exponential preferences who have accumulated roughly equal amounts of assets. This appears to be especially true for individuals with relatively high time preference rates or who have low assets for whatever reason. This affects the tradeoff between current benefits and future benefits on which many of the retirement incentives of the Social Security system rest. Simulations involving increasing the early entitlement age and increasing the delayed retirement credit do not show a great deal of difference whether exponential or hyperbolic preferences are used, but simulations for eliminating the earnings test show a non-trivially greater effect when exponential preferences are used. PMID:22711946
Analog and digital transport of RF channels over converged 5G wireless-optical networks
NASA Astrophysics Data System (ADS)
Binh, Le Nguyen
2016-02-01
Driven by the exponentially increasing demand from emerging 5G wireless access networks and the data-center-based Internet, novel and economical techniques are required to transport RF channels to and from wireless access systems. This paper presents transport technologies for RF channels in the analog and digital domains to meet transport capacity demands reaching multi-Tbps, as follows: (i) the convergence of 5G broadband wireless and optical networks and its demands on capacity delivery and network structures; (ii) analog optical technologies for delivering both the information and the RF carriers to and from multiple-input multiple-output (MIMO) antenna sites, so as to control the beam steering of mmW MIMO antennas at 28.6 GHz and 56.8 GHz RF carriers and to deliver channels of aggregate capacity reaching several Tbps; (iii) transceivers employing advanced digital modulation formats and digital signal processing (DSP) to provide transmission rates of 100G and beyond, meeting ultra-high capacity demands with flexible spectral grids and hence pay-on-demand services, with the interplay between DSP-based and analog transport techniques examined; (iv) transport technologies for 5G cloud access networks and associated modulation and digital processing techniques for capacity efficiency; and (v) integrated optic technologies with novel lasers, comb generators, and simultaneous dual-function photonic devices for both demultiplexing/multiplexing and modulation, enabling a system-on-chip structure. Quantum dot lasers and matrices of micro-ring resonators integrated on the same Si-on-silica substrate are proposed and described.
Investigation of advanced UQ for CRUD prediction with VIPRE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldred, Michael Scott
2011-09-01
This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ', in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD-induced power shift). Stochastic expansion methods are attractive for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely differentiable) in L² (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher-dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10⁰)-O(10¹) random variables to O(10²) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)).
Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two plant models of differing random dimension, anisotropy, and smoothness.
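The exponential convergence under order refinement quoted above for smooth functions can be seen in a one-dimensional analogue of stochastic collocation: Gauss-Hermite quadrature of a smooth function of a standard normal input. This is a toy sketch, not the DAKOTA/VIPRE machinery; the integrand exp(x) is chosen only because its exact mean is known in closed form.

```python
import numpy as np

def normal_mean(f, order):
    """E[f(X)] for X ~ N(0,1) by Gauss-Hermite quadrature of the given order."""
    nodes, weights = np.polynomial.hermite.hermgauss(order)
    # hermgauss integrates against exp(-x^2); substitute x = sqrt(2)*t for N(0,1)
    return (weights @ f(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi)

exact = np.exp(0.5)                       # E[e^X] = e^(1/2) for X ~ N(0,1)
errors = [abs(normal_mean(np.exp, n) - exact) for n in (2, 4, 6, 8, 10)]
# errors drop by orders of magnitude with each increase in quadrature order
```

Each increment of the quadrature order (the 1-D analogue of p-refinement) multiplies the accuracy rather than adding to it, which is the practical meaning of exponential convergence for smooth, finite-variance quantities of interest.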
NASA Astrophysics Data System (ADS)
Traversa, Fabio L.; Di Ventra, Massimiliano
2017-02-01
We introduce a class of digital machines, which we name Digital Memcomputing Machines (DMMs), able to solve a wide range of problems, including Non-deterministic Polynomial (NP) ones, with polynomial resources (in time, space, and energy). An abstract DMM with this power must satisfy a set of compatible mathematical constraints underlying its practical realization. We prove this by making a connection with dynamical systems theory. This leads us to a set of physical constraints for poly-resource resolvability. Once the mathematical requirements have been assessed, we propose a practical scheme to solve the above class of problems based on the novel concept of self-organizing logic gates and circuits (SOLCs). These are logic gates and circuits able to accept input signals from any terminal, without distinction between conventional input and output terminals. They can solve Boolean problems by self-organizing into their solution. They can be fabricated either with circuit elements with memory (such as memristors) and/or with standard MOS technology. Using tools of functional analysis, we prove mathematically the following constraints for poly-resource resolvability: (i) SOLCs possess a global attractor; (ii) their only equilibrium points are the solutions of the problems to solve; (iii) the system converges exponentially fast to the solutions; (iv) the equilibrium convergence rate scales at most polynomially with input size. We finally provide arguments that periodic orbits and strange attractors cannot coexist with equilibria. As examples, we show how to solve prime factorization and the search version of the NP-complete subset-sum problem. Since DMMs map integers into integers, they are robust against noise and hence scalable. We finally discuss the implications of the DMM realization through SOLCs for the NP = P question related to constraints of poly-resource resolvability.
Analysis of two production inventory systems with buffer, retrials and different production rates
NASA Astrophysics Data System (ADS)
Jose, K. P.; Nair, Salini S.
2017-09-01
This paper compares two (s, S) production inventory systems with retrials of unsatisfied customers. The time for producing and adding each item to the inventory is exponentially distributed with rate β. However, a production rate αβ, higher than β, is used at the beginning of production; the higher rate reduces customer loss when the inventory level approaches zero. Demand from customers follows a Poisson process, and service times are exponentially distributed. Upon arrival, customers enter a buffer of finite capacity. An arriving customer who finds the buffer full moves to an orbit, from which retrials are made; inter-retrial times are exponentially distributed. The two models differ in the capacity of the buffer. The aim is to find the minimum total cost by varying the different parameters and to compare the efficiency of the two models; the optimum value of α corresponding to the minimum total cost is a key quantity. The matrix analytic method is used to find an algorithmic solution to the problem, and several numerical and graphical illustrations are provided.
NASA Astrophysics Data System (ADS)
Oberlack, Martin; Nold, Andreas; Sanjon, Cedric Wilfried; Wang, Yongqi; Hau, Jan
2016-11-01
Classical hydrodynamic stability theory for laminar shear flows, whether considering long-term stability or transient growth, is based on the normal-mode ansatz or, in other words, on an exponential function in space (stream-wise direction) and time. Recently, it became clear that the normal-mode ansatz and the resulting Orr-Sommerfeld equation are based on essentially three fundamental symmetries of the linearized Euler and Navier-Stokes equations: translation in space and time and scaling of the dependent variable. Further, the Kelvin mode of linear shear flows seemed to be an exception in this context, as it admits a fourth symmetry resulting in the classical Kelvin mode, which is rather different from the normal mode. However, very recently it was discovered that most of the classical canonical shear flows, such as linear shear, Couette, plane and round Poiseuille, Taylor-Couette, the Lamb-Oseen vortex, and the asymptotic suction boundary layer, admit more symmetries. This, in turn, led to new problem-specific non-modal ansatz functions. In contrast to the exponential growth rate in time of the modal ansatz, the new non-modal ansatz functions usually lead to an algebraic growth or decay rate, while for the asymptotic suction boundary layer a double-exponential growth or decay is observed.
NASA Astrophysics Data System (ADS)
Schaefer, Bradley E.; Dyson, Samuel E.
1996-08-01
A common gamma-ray burst light curve shape is the "FRED," or "fast-rise exponential-decay." But how exponential is the tail? Are the tails merely decaying with some smoothly decreasing decline rate, or is the functional form an exponential to within the uncertainties? If the shape really is an exponential, then it would be reasonable to assign some physically significant time scale to the burst. That is, there would have to be some specific mechanism that produces the characteristic decay profile. So if an exponential is found, then we will know that the decay light curve profile is governed by one mechanism (at least for simple FREDs) instead of by complex or multiple mechanisms. As such, a specific number amenable to theory can be derived for each FRED. We report on the fitting of exponentials (and two other shapes) to the tails of ten bright BATSE bursts. The BATSE trigger numbers are 105, 257, 451, 907, 1406, 1578, 1883, 1885, 1989, and 2193. Our technique was to perform a least-squares fit to the tail from some time after the peak until the light curve approaches background. We find that most FREDs are not exponentials, although a few come close. But since the other candidate shapes come close just as often, we conclude that the FREDs are misnamed.
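The test applied to each tail, least-squares fitting of an exponential against alternative shapes, can be sketched on synthetic count data. The decay constant, time grid, and Poisson noise level below are invented for illustration; the BATSE light curves themselves are not reproduced here.

```python
import numpy as np

def fit_log_linear(x, y):
    """Least-squares straight-line fit; returns (slope, residual sum of squares)."""
    coeffs = np.polyfit(x, y, 1)
    resid = y - np.polyval(coeffs, x)
    return coeffs[0], float(resid @ resid)

# Synthetic FRED tail: exponential decay with tau = 8 s plus Poisson counting noise
rng = np.random.default_rng(2)
t = np.linspace(1.0, 40.0, 60)                  # seconds after the peak
counts = rng.poisson(5000.0 * np.exp(-t / 8.0)).astype(float) + 1.0

# Exponential model: log(counts) linear in t; power law: log(counts) linear in log(t)
slope_exp, sse_exp = fit_log_linear(t, np.log(counts))
slope_pow, sse_pow = fit_log_linear(np.log(t), np.log(counts))
tau = -1.0 / slope_exp                          # recovered e-folding time of the tail
```

For a genuinely exponential tail the residuals of the semi-log fit are much smaller than those of the log-log (power-law) fit; the paper's finding is that for most real FREDs no single candidate shape wins this comparison decisively.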
77 FR 44571 - Rate Regulation Reforms
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-30
... current T-bill rate to the U.S. Prime Rate, as published in The Wall Street Journal. Additional... ``exponential'' approach, the total cumulative reparations payment (including interest) is calculated by...
NASA Astrophysics Data System (ADS)
Verma, Arjun; Privman, Vladimir
2018-02-01
We study the approach to the large-time jammed state of deposited particles in the model of random sequential adsorption. The convergence laws are usually derived from Pomeau's argument, which assumes that, at large enough times, the process is dominated by small landing regions into each of which only a single particle can be deposited without overlapping earlier deposited particles, and which, after a certain time, are no longer created by depositions in larger gaps. The second assumption has been that the size distribution of gaps open for particle-center landing in this large-time small-gaps regime is finite in the limit of zero gap size. We report numerical Monte Carlo studies of a recently introduced model of random sequential adsorption on patterned one-dimensional substrates which suggest that the second assumption must be generalized. We argue that a region exists in the parameter space of the studied model in which the gap-size distribution in the Pomeau large-time regime actually vanishes linearly at zero gap size. In another region, the distribution develops a threshold property, i.e., there are no gaps below a certain size. We discuss the implications of these findings for new asymptotic power-law and exponential-modified-by-a-power-law convergences to jamming in irreversible one-dimensional deposition.
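The classic baseline for such convergence studies, random sequential adsorption of dimers on an unpatterned 1-D lattice, is easy to simulate. Because deposition only adds particles, a bond blocked once stays blocked forever, so attempting every bond exactly once in uniformly random order reproduces the infinite-time jammed state, whose coverage tends to Flory's value 1 - e⁻² ≈ 0.8647. This sketch is of the baseline model, not the patterned-substrate model of the paper.

```python
import numpy as np

def dimer_rsa_coverage(n_sites, seed=0):
    """Jammed-state coverage for random sequential adsorption of dimers on a
    1-D lattice. Attempting each bond once, in uniformly random order, gives
    the same jammed state as infinite-time RSA, since blocked bonds stay blocked."""
    rng = np.random.default_rng(seed)
    occupied = np.zeros(n_sites, dtype=bool)
    for i in rng.permutation(n_sites - 1):       # bonds (i, i+1) in random order
        if not occupied[i] and not occupied[i + 1]:
            occupied[i] = occupied[i + 1] = True  # deposit a dimer on the bond
    return occupied.mean()

theta = dimer_rsa_coverage(200000)
# theta approaches Flory's jamming coverage 1 - exp(-2) ≈ 0.8647 for large lattices
```

Studying the time dependence of the coverage, rather than just its jammed limit as here, is what requires the gap-size-distribution assumptions that the paper generalizes.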
Statistical independence of the initial conditions in chaotic mixing.
García de la Cruz, J M; Vassilicos, J C; Rossi, L
2017-11-01
Experimental evidence of scalar convergence towards a global strange eigenmode, independent of the scalar initial condition, in chaotic mixing is provided. This convergence, underpinning the initial-condition-independent nature of chaotic mixing of any passive scalar, is demonstrated by scalar fields with different initial conditions taking statistically similar shapes when advected by periodic unsteady flows. As the scalar patterns converge towards a global strange eigenmode, the scalar filaments, locally aligned with the direction of maximum stretching as described by Lagrangian stretching theory, stack together in an inhomogeneous pattern at distances smaller than their asymptotic minimum widths. The scalar variance decay then becomes exponential and independent of the scalar diffusivity or initial condition. In this work, mixing is achieved by advecting the scalar with a set of laminar flows with unsteady periodic topology. These flows, which resemble the tendril-whorl map, are obtained by morphing the forcing geometry in an electromagnetic free-surface 2D mixing experiment. The forcing generates a velocity field that periodically switches between two concentric hyperbolic and elliptic stagnation points. In agreement with previous literature, the velocity fields obtained produce a chaotic mixer with two regions: a central mixing area and an external extensional area. These two regions are interconnected through two pairs of fluid conduits that transfer clean and dyed fluid from the extensional area towards the mixing region and a homogenized mixture from the mixing area towards the extensional region.
1977-09-01
process with an event stream intensity (rate) function that is of degree-two exponential polynomial form. (The use of exponential polynomials is... ...would serve as a good initial approximation t* for the Newton-Raphson method. However, for the purpose of this implementation, the end point which
Looking for Connections between Linear and Exponential Functions
ERIC Educational Resources Information Center
Lo, Jane-Jane; Kratky, James L.
2012-01-01
Students frequently have difficulty determining whether a given real-life situation is best modeled as a linear relationship or as an exponential relationship. One root of such difficulty is the lack of deep understanding of the very concept of "rate of change." The authors will provide a lesson that allows students to reveal their misconceptions…
Nunes, F P; Garcia, Q S
2015-05-01
The study of litter decomposition and nutrient cycling is essential to understanding the structure and functioning of native forests. Mathematical models can help in understanding local and temporal variations in litter fall and their relationships with environmental variables. The objective of this study was to test the adequacy of mathematical models for leaf litter decomposition in the Atlantic Forest in southeastern Brazil. We studied four native forest sites in Parque Estadual do Rio Doce, a Biosphere Reserve of the Atlantic Forest, in which 200 decomposition litterbags (20 × 20 cm, 2 mm nylon mesh) containing 10 g of litter each were installed. Monthly, from 09/2007 to 04/2009, 10 litterbags were removed for determination of mass loss. We compared three nonlinear models: (1) the exponential model of Olson (1963), which considers a constant K; (2) the model proposed by Fountain and Schowalter (2004); and (3) the model proposed by Coelho and Borges (2005), which considers a variable K, evaluated through QMR, SQR, SQTC, DMA, and the F test. The Fountain and Schowalter (2004) model was inappropriate for this study because it overestimated the decomposition rate. The decay curve analysis showed that the model with variable K was more appropriate, although the QMR and DMA values revealed no significant difference (p > 0.05) between the models. The analysis showed a better DMA adjustment using variable K, reinforced by the values of the adjustment coefficient (R²). However, convergence problems were observed in this model when estimating outliers in the study areas, which did not occur with the constant-K model; this problem may be related to the nonlinear fit of mass/time values to the variable K generated. The constant-K model was shown to be adequate for describing the decomposition curve of each area separately, with good adjustability and no convergence problems. The results demonstrate the adequacy of the Olson model for estimating tropical forest litter decomposition.
Although it uses a reduced number of parameters relative to the steps of the decomposition process, no convergence difficulties were observed in the Olson model, so it can be used to describe decomposition curves in different types of environments, estimating K appropriately.
Notes on Accuracy of Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2011-01-01
Truncation-error analysis is a reliable tool in predicting convergence rates of discretization errors on regular smooth grids. However, it is often misleading in application to finite-volume discretization schemes on irregular (e.g., unstructured) grids. Convergence of truncation errors severely degrades on general irregular grids; a design-order convergence can be achieved only on grids with a certain degree of geometric regularity. Such degradation of truncation-error convergence does not necessarily imply a lower-order convergence of discretization errors. In these notes, irregular-grid computations demonstrate that the design-order discretization-error convergence can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all.
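The distinction the notes draw between truncation-error and discretization-error convergence is usually checked numerically: the observed order of a scheme is estimated from discretization errors on successively refined grids. A minimal sketch of that calculation (the error values below are illustrative, not taken from the paper):

```python
import math

# Hypothetical discretization errors on grids refined by a factor of 2 in
# mesh spacing h (values are illustrative, roughly second-order behavior).
h = [0.4, 0.2, 0.1, 0.05]
err = [1.6e-2, 4.1e-3, 1.0e-3, 2.6e-4]

# Observed order between successive grids:
#   p = log(e_coarse / e_fine) / log(h_coarse / h_fine)
orders = []
for (h1, e1), (h2, e2) in zip(zip(h, err), zip(h[1:], err[1:])):
    p = math.log(e1 / e2) / math.log(h1 / h2)
    orders.append(p)
    print(f"h: {h1} -> {h2}   observed order p = {p:.2f}")
```

On an irregular grid, the same procedure applied to truncation errors may report a lower (or no) order even when the discretization-error order computed this way matches the design order.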
NASA Astrophysics Data System (ADS)
Poole, Gregory B.; Mutch, Simon J.; Croton, Darren J.; Wyithe, Stuart
2017-12-01
We introduce GBPTREES: an algorithm for constructing merger trees from cosmological simulations, designed to identify and correct for pathological cases introduced by errors or ambiguities in the halo finding process. GBPTREES is built upon a halo matching method utilizing pseudo-radial moments constructed from radially sorted particle ID lists (no other information is required) and a scheme for classifying merger tree pathologies from networks of matches made to-and-from haloes across snapshots ranging forward-and-backward in time. Focusing on SUBFIND catalogues for this work, a sweep of parameters influencing our merger tree construction yields the optimal snapshot cadence and scanning range required for converged results. Pathologies proliferate when snapshots are spaced by ≳0.128 dynamical times, conveniently similar to the cadence needed for convergence of semi-analytical modelling, as established by Benson et al. Total merger counts are converged at the level of ∼5 per cent for friends-of-friends (FoF) haloes of size np ≳ 75 across a factor of 512 in mass resolution, but substructure rates converge more slowly with mass resolution, reaching convergence of ∼10 per cent for np ≳ 100 and particle mass mp ≲ 10^9 M⊙. We present analytic fits to FoF and substructure merger rates across nearly all observed galactic history (z ≤ 8.5). While we find good agreement with the results presented by Fakhouri et al. for FoF haloes, a slightly flatter dependence on merger ratio and increased major merger rates are found, reducing previously reported discrepancies with extended Press-Schechter estimates. When appropriately defined, substructure merger rates show a similar mass ratio dependence as FoF rates, but with stronger mass and redshift dependencies for their normalization.
Deterministic analysis of extrinsic and intrinsic noise in an epidemiological model.
Bayati, Basil S
2016-05-01
We couple a stochastic collocation method with an analytical expansion of the canonical epidemiological master equation to analyze the effects of both extrinsic and intrinsic noise. It is shown that depending on the distribution of the extrinsic noise, the master equation yields quantitatively different results compared to using the expectation of the distribution for the stochastic parameter. This difference arises from the nonlinear terms in the master equation, and we show that the deviation away from the expectation of the extrinsic noise scales nonlinearly with the variance of the distribution. The method presented here converges linearly with respect to the number of particles in the system and exponentially with respect to the order of the polynomials used in the stochastic collocation calculation. This makes the method presented here more accurate than standard Monte Carlo methods, which suffer from slow, nonmonotonic convergence. In epidemiological terms, the results show that extrinsic fluctuations should be taken into account since they affect the speed of disease outbreaks, and that the gamma distribution should be used to model the basic reproductive number.
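The exponential convergence in polynomial order that stochastic collocation relies on can be illustrated in one dimension with a Gauss-Hermite rule; the smooth test function and node counts below are assumptions for illustration, not the paper's epidemiological model:

```python
import numpy as np

# Gauss-Hermite collocation estimate of E[f(X)] for X ~ N(0, 1).
# f is a smooth illustrative test function with a known expectation.
f = lambda x: np.exp(0.5 * x)
exact = np.exp(0.125)            # E[exp(a X)] = exp(a^2 / 2) with a = 0.5

errors = []
for n in (2, 4, 8):
    # Probabilists' Hermite nodes/weights; the weights sum to sqrt(2*pi).
    x, w = np.polynomial.hermite_e.hermegauss(n)
    est = (w @ f(x)) / w.sum()
    errors.append(abs(est - exact))
    print(f"n = {n}: absolute error = {errors[-1]:.2e}")
```

The error drops by orders of magnitude with each doubling of the node count, in contrast to the slow O(1/sqrt(M)) decay of a plain Monte Carlo estimate.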
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
Quick fuzzy backpropagation algorithm.
Nikov, A; Stoeva, S
2001-03-01
A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and FBP algorithms are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP algorithm compared with the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, it is broadly applicable in areas such as adaptive and adaptable interactive systems and data mining.
Boundedness and exponential convergence in a chemotaxis model for tumor invasion
NASA Astrophysics Data System (ADS)
Jin, Hai-Yang; Xiang, Tian
2016-12-01
We revisit the following chemotaxis system modeling tumor invasion:
$$u_t = \Delta u - \nabla\cdot(u\nabla v), \quad v_t = \Delta v + wz, \quad w_t = -wz, \quad z_t = \Delta z - z + u, \qquad x\in\Omega,\ t>0,$$
in a smooth bounded domain $\Omega\subset\mathbb{R}^n$ ($n\geq 1$) with homogeneous Neumann boundary and initial conditions. This model was recently proposed by Fujie et al (2014 Adv. Math. Sci. Appl. 24 67-84) as a model for tumor invasion with the role of extracellular matrix incorporated, and was analyzed later by Fujie et al (2016 Discrete Contin. Dyn. Syst. 36 151-69), showing the uniform boundedness and convergence for $n\leq 3$. In this work, we first show that the $L^\infty$-boundedness of the system can be reduced to the boundedness of $\|u(\cdot,t)\|_{L^{\frac{n}{4}+\epsilon}(\Omega)}$ for some $\epsilon>0$ alone, and then, for $n\geq 4$, if the initial data $\|u_0\|_{L^{\frac{n}{4}}}$, $\|z_0\|_{L^{\frac{n}{2}}}$ and …
128 Gb/s TWDM PON system using dispersion-supported transmission method
NASA Astrophysics Data System (ADS)
Bindhaiq, Salem; Zulkifli, Nadiatulhuda; Supa'at, Abusahmah M.; Idrus, Sevia M.; Salleh, M. S.
2017-11-01
Time- and wavelength-division multiplexed passive optical network (TWDM-PON) is regarded as the leading next-generation solution to accommodate exponential traffic growth from converged new services. In this paper, we briefly review recent progress on TWDM-PON systems using low-cost directly modulated laser (DML) transmission at various line rates to date. Furthermore, through simulation, we propose and evaluate a cost-effective way to upgrade TWDM-PON up to a symmetric capacity of 128 Gb/s using fiber Bragg gratings (FBGs) in the optical line terminal (OLT) as a paramount dispersion manager in high-speed lightwave systems in both upstream and downstream directions. A low-cost and potentially chirpless directly modulated grating laser (DMGL) is employed for the downstream link, and a DML with a single delay interferometer (DI) is employed for the upstream link. After illustrating the demonstrated system architecture and configuration, we present the results and analysis to prove the system's feasibility. The results show that successful transmission is achieved over 40 km of single-mode fiber with a power budget of 33.7 dB, which could support a 1:256 splitting ratio.
Test of the Hill Stability Criterion against Chaos Indicators
NASA Astrophysics Data System (ADS)
Satyal, Suman; Quarles, Billy; Hinse, Tobias
2012-10-01
The efficacy of the Hill Stability (HS) criterion is tested against other known chaos indicators such as the Maximum Lyapunov Exponent (MLE) and Mean Exponential Growth of Nearby Orbits (MEGNO) maps. First, orbits of four observationally verified binary star systems, γ Cephei, Gliese-86, HD41004, and HD196885, are integrated using standard integration packages (MERCURY, SWIFTER, NBI, C/C++). The HS, which measures the orbital perturbation of a planet around the primary star due to the secondary star, is calculated for each system. The LE spectra are generated to measure the divergence/convergence rate of stable manifolds, and the MEGNO maps are generated by using the variational equations of the system during the integration process. These maps make it possible to differentiate accurately between stable and unstable dynamical systems. Then the results obtained from the analysis of the HS, MLE, and MEGNO maps are checked for their dynamical variations and resemblance. The orbits of most of the planets appear stable and quasi-periodic for at least ten million years. The MLE and MEGNO maps also indicate local quasi-periodicity and global stability over a relatively short integration period. The HS criterion is found to be a comparably efficient tool for measuring the stability of planetary orbits.
Dama, James F; Rotskoff, Grant; Parrinello, Michele; Voth, Gregory A
2014-09-09
Well-tempered metadynamics has proven to be a practical and efficient adaptive enhanced sampling method for the computational study of biomolecular and materials systems. However, choosing its tunable parameter can be challenging and requires balancing a trade-off between fast escape from local metastable states and fast convergence of an overall free energy estimate. In this article, we present a new smoothly convergent variant of metadynamics, transition-tempered metadynamics, that removes that trade-off and is more robust to changes in its own single tunable parameter, resulting in substantial speed and accuracy improvements. The new method is specifically designed to study state-to-state transitions in which the states of greatest interest are known ahead of time, but transition mechanisms are not. The design is guided by a picture of adaptive enhanced sampling as a means to increase dynamical connectivity of a model's state space until percolation between all points of interest is reached, and it uses the degree of dynamical percolation to automatically tune the convergence rate. We apply the new method to Brownian dynamics on 48 random 1D surfaces, blocked alanine dipeptide in vacuo, and aqueous myoglobin, finding that transition-tempered metadynamics substantially and reproducibly improves upon well-tempered metadynamics in terms of first barrier crossing rate, convergence rate, and robustness to the choice of tuning parameter. Moreover, the trade-off between first barrier crossing rate and convergence rate is eliminated: the new method drives escape from an initial metastable state as fast as metadynamics without tempering, regardless of tuning.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
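For context, the bound in question has a standard closed form (Gallager's exponent; included here only as a reminder of the quantities discussed, with $Q$ the channel input distribution, $N$ the block length, and $R$ the rate in nats; the critical rate is where the maximizing $\rho$ saturates at 1):

```latex
% Random coding bound on the ensemble-average error probability
\bar{P}_e \;\le\; \exp\!\left[-N\,E_r(R)\right],
\qquad
E_r(R) \;=\; \max_{0 \le \rho \le 1}\,\max_{Q}\,\bigl[E_0(\rho, Q) - \rho R\bigr],

% with the Gallager function
E_0(\rho, Q) \;=\; -\ln \sum_{j}\Bigl[\sum_{k} Q(k)\,P(j \mid k)^{1/(1+\rho)}\Bigr]^{1+\rho}.
```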
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.
Exponential Growth and the Shifting Global Center of Gravity of Science Production, 1900-2011
ERIC Educational Resources Information Center
Zhang, Liang; Powell, Justin J. W.; Baker, David P.
2015-01-01
Long historical trends in scientific discovery led mid-20th century scientometricians to mark the advent of "big science"--extensive science production--and predicted that over the next few decades, the exponential growth would slow, resulting in lower rates of increase in production at the upper limit of a logistic curve. They were…
A demographic study of the exponential distribution applied to uneven-aged forests
Jeffrey H. Gove
2016-01-01
A demographic approach based on a size-structured version of the McKendrick-Von Foerster equation is used to demonstrate a theoretical link between the population size distribution and the underlying vital rates (recruitment, mortality and diameter growth) for the population of individuals whose diameter distribution is negative exponential. This model supports the...
A Multistrategy Optimization Improved Artificial Bee Colony Algorithm
Liu, Wen
2014-01-01
To address the artificial bee colony algorithm's shortcomings of premature convergence and slow convergence rate, an improved algorithm is proposed. Chaotic reverse learning strategies were used to initialize the swarm, improving the algorithm's global search ability and preserving its diversity; the similarity degree of individuals was used to characterize population diversity; a population diversity measure was set as an indicator to dynamically and adaptively adjust nectar positions, effectively avoiding premature and local convergence; and a dual-population search mechanism was introduced into the search stage, whose parallel search considerably improved the convergence rate. Simulation experiments on 10 standard test functions, compared against other algorithms, showed that the improved algorithm converges faster and escapes local optima more quickly. PMID:24982924
Are there ergodic limits to evolution? Ergodic exploration of genome space and convergence
McLeish, Tom C. B.
2015-01-01
We examine the analogy between evolutionary dynamics and statistical mechanics to include the fundamental question of ergodicity—the representative exploration of the space of possible states (in the case of evolution this is genome space). Several properties of evolutionary dynamics are identified that allow a generalization of the ergodic dynamics, familiar in dynamical systems theory, to evolution. Two classes of evolved biological structure then arise, differentiated by the qualitative duration of their evolutionary time scales. The first class has an ergodicity time scale (the time required for representative genome exploration) longer than available evolutionary time, and has incompletely explored the genotypic and phenotypic space of its possibilities. This case generates no expectation of convergence to an optimal phenotype or possibility of its prediction. The second, more interesting, class exhibits an evolutionary form of ergodicity—essentially all of the structural space within the constraints of slower evolutionary variables have been sampled; the ergodicity time scale for the system evolution is less than the evolutionary time. In this case, some convergence towards similar optima may be expected for equivalent systems in different species where both possess ergodic evolutionary dynamics. When the fitness maximum is set by physical, rather than co-evolved, constraints, it is additionally possible to make predictions of some properties of the evolved structures and systems. We propose four structures that emerge from evolution within genotypes whose fitness is induced from their phenotypes. Together, these result in an exponential speeding up of evolution, when compared with complete exploration of genomic space. We illustrate a possible case of application and a prediction of convergence together with attaining a physical fitness optimum in the case of invertebrate compound eye resolution. PMID:26640648
Short‐term time step convergence in a climate model
Rasch, Philip J.; Taylor, Mark A.; Jablonowski, Christiane
2015-01-01
Abstract This paper evaluates the numerical convergence of very short (1 h) simulations carried out with a spectral‐element (SE) configuration of the Community Atmosphere Model version 5 (CAM5). While the horizontal grid spacing is fixed at approximately 110 km, the process‐coupling time step is varied between 1800 and 1 s to reveal the convergence rate with respect to the temporal resolution. Special attention is paid to the behavior of the parameterized subgrid‐scale physics. First, a dynamical core test with reduced dynamics time steps is presented. The results demonstrate that the experimental setup is able to correctly assess the convergence rate of the discrete solutions to the adiabatic equations of atmospheric motion. Second, results from full‐physics CAM5 simulations with reduced physics and dynamics time steps are discussed. It is shown that the convergence rate is 0.4—considerably slower than the expected rate of 1.0. Sensitivity experiments indicate that, among the various subgrid‐scale physical parameterizations, the stratiform cloud schemes are associated with the largest time‐stepping errors, and are the primary cause of slow time step convergence. While the details of our findings are model specific, the general test procedure is applicable to any atmospheric general circulation model. The need for more accurate numerical treatments of physical parameterizations, especially the representation of stratiform clouds, is likely common in many models. The suggested test technique can help quantify the time‐stepping errors and identify the related model sensitivities. PMID:27660669
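A convergence rate such as the reported 0.4 is typically estimated as the slope of a log-log fit of solution error against time step. A minimal sketch of that fit, with made-up error values chosen to mimic a rate near 0.4 (not CAM5 output):

```python
import math

# Illustrative self-convergence errors versus process-coupling time step.
dt = [1800.0, 900.0, 450.0, 225.0]    # time steps, seconds
err = [1.00, 0.76, 0.58, 0.44]        # error norms, arbitrary units

# Least-squares slope of log(err) vs log(dt) gives the convergence rate,
# since err ~ C * dt**p implies log(err) = log(C) + p * log(dt).
x = [math.log(v) for v in dt]
y = [math.log(v) for v in err]
n = len(x)
xm, ym = sum(x) / n, sum(y) / n
rate = sum((a - xm) * (b - ym) for a, b in zip(x, y)) / sum((a - xm) ** 2 for a in x)
print(f"estimated convergence rate: {rate:.2f}")
```

A first-order-accurate coupling would give a slope near 1.0; slopes well below that, as here, point to splitting or parameterization errors dominating the time-stepping error.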
Temporal Stability and Convergent Validity of the Behavior Assessment System for Children.
ERIC Educational Resources Information Center
Merydith, Scott P.
2001-01-01
Assesses the temporal stability and convergent validity of the Behavioral Assessment System for Children (BASC). Teachers and parents rated kindergarten and first-grade students using BASC. Teachers were more stable in rating children's externalizing behaviors and attention problems. Discusses results in terms of the accuracy of information…
Evaluating the Convergence of Muscle Appearance Attitude Measures
ERIC Educational Resources Information Center
Cafri, Guy; Thompson, J. Kevin
2004-01-01
There has been growing interest in the assessment of a muscular appearance. Given the importance of assessing muscle appearance attitudes, the aim of this study was to explore the convergence of the Drive for Muscularity Scale, Somatomorphic Matrix, Contour Drawing Rating Scale, Male Figure Drawings, and the Muscularity Rating Scale. Participants…
Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems
NASA Astrophysics Data System (ADS)
Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding
2007-09-01
In this paper, we present some comparison theorems on preconditioned iterative methods for solving linear systems with Z-matrices. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.
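To make the comparison concrete, the two stationary iterations can be run side by side on a small Z-matrix (a matrix with nonpositive off-diagonal entries). The sketch below is illustrative only: it does not implement the paper's preconditioners, and the test matrix and relaxation factor are assumptions.

```python
import numpy as np

def solve_iterative(A, b, omega=1.0, tol=1e-10, max_iter=10_000):
    """SOR iteration; omega = 1.0 reduces to Gauss-Seidel. Returns (x, iters)."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x_old[i+1:].
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k
    return x, max_iter

# A diagonally dominant tridiagonal Z-matrix (illustrative).
n = 20
A = 2.5 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

x_gs, it_gs = solve_iterative(A, b, omega=1.0)    # Gauss-Seidel
x_sor, it_sor = solve_iterative(A, b, omega=1.2)  # SOR, illustrative omega
print(f"Gauss-Seidel iterations: {it_gs}, SOR(1.2) iterations: {it_sor}")
```

Which method wins depends on the matrix and on omega; the paper's theorems concern the preconditioned variants of these iterations on Z-matrices specifically.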
Turcott, R G; Lowen, S B; Li, E; Johnson, D H; Tsuchitani, C; Teich, M C
1994-01-01
The behavior of lateral-superior-olive (LSO) auditory neurons over large time scales was investigated. Of particular interest was the determination as to whether LSO neurons exhibit the same type of fractal behavior as that observed in primary VIII-nerve auditory neurons. It has been suggested that this fractal behavior, apparent on long time scales, may play a role in optimally coding natural sounds. We found that a nonfractal model, the nonstationary dead-time-modified Poisson point process (DTMP), describes the LSO firing patterns well for time scales greater than a few tens of milliseconds, a region where the specific details of refractoriness are unimportant. The rate is given by the sum of two decaying exponential functions. The process is completely specified by the initial values and time constants of the two exponentials and by the dead-time relation. Specific measures of the firing patterns investigated were the interspike-interval (ISI) histogram, the Fano-factor time curve (FFC), and the serial count correlation coefficient (SCC) with the number of action potentials in successive counting times serving as the random variable. For all the data sets we examined, the latter portion of the recording was well approximated by a single exponential rate function since the initial exponential portion rapidly decreases to a negligible value. Analytical expressions available for the statistics of a DTMP with a single exponential rate function can therefore be used for this portion of the data. Good agreement was obtained among the analytical results, the computer simulation, and the experimental data on time scales where the details of refractoriness are insignificant.(ABSTRACT TRUNCATED AT 250 WORDS)
Chambless, Dianne L; Sharpless, Brian A; Rodriguez, Dianeth; McCarthy, Kevin S; Milrod, Barbara L; Khalsa, Shabad-Ratan; Barber, Jacques P
2011-12-01
Aims of this study were (a) to summarize the psychometric literature on the Mobility Inventory for Agoraphobia (MIA), (b) to examine the convergent and discriminant validity of the MIA's Avoidance Alone and Avoidance Accompanied rating scales relative to clinical severity ratings of anxiety disorders from the Anxiety Disorders Interview Schedule (ADIS), and (c) to establish a cutoff score indicative of interviewers' diagnosis of agoraphobia for the Avoidance Alone scale. A meta-analytic synthesis of 10 published studies yielded positive evidence for internal consistency and convergent and discriminant validity of the scales. Participants in the present study were 129 people with a diagnosis of panic disorder. Internal consistency was excellent for this sample, α=.95 for AAC and .96 for AAL. When the MIA scales were correlated with interviewer ratings, evidence for convergent and discriminant validity for AAL was strong (convergent r with agoraphobia severity ratings=.63 vs. discriminant rs of .10-.29 for other anxiety disorders) and more modest but still positive for AAC (.54 vs. .01-.37). Receiver operating curve analysis indicated that the optimal operating point for AAL as an indicator of ADIS agoraphobia diagnosis was 1.61, which yielded sensitivity of .87 and specificity of .73. Copyright © 2011. Published by Elsevier Ltd.
Cognitive Load Reduces Perceived Linguistic Convergence Between Dyads.
Abel, Jennifer; Babel, Molly
2017-09-01
Speech convergence is the tendency of talkers to become more similar to someone they are listening or talking to, whether that person is a conversational partner or merely a voice heard repeating words. To elucidate the nature of the mechanisms underlying convergence, this study examines the effect of task difficulty on speech convergence within dyads collaborating on a task. Dyad members had to build identical LEGO® constructions without being able to see each other's construction, and with each member having half of the instructions required to complete the construction. Three levels of task difficulty were created, with five dyads at each level (30 participants total). Task difficulty was also measured using completion time and error rate. Listeners who heard pairs of utterances from each dyad judged convergence to be occurring in the Easy condition and, to a lesser extent, in the Medium condition, but not in the Hard condition. Amplitude-envelope acoustic similarity analyses of the same utterance pairs showed that convergence occurred in dyads with shorter completion times and lower error rates. Together, these results suggest that while speech convergence is a highly variable behavior, it may occur more in contexts of low cognitive load. The relevance of these results for current automatic and socially driven models of convergence is discussed.
Student Support for Research in Hierarchical Control and Trajectory Planning
NASA Technical Reports Server (NTRS)
Martin, Clyde F.
1999-01-01
Generally, classical polynomial splines tend to exhibit unwanted undulations. In this work, we discuss a technique, based on control principles, for eliminating these undulations and increasing the smoothness properties of the spline interpolants. We give a generalization of the classical polynomial splines and show that this generalization is, in fact, a family of splines that covers the broad spectrum of polynomial, trigonometric and exponential splines. A particular element in this family is determined by the appropriate control data. It is shown that this technique is easy to implement. Several numerical and curve-fitting examples are given to illustrate the advantages of this technique over the classical approach. Finally, we discuss the convergence properties of the interpolant.
NASA Astrophysics Data System (ADS)
Wan, Ling; Wang, Tao
2017-06-01
We consider the Navier-Stokes equations for compressible heat-conducting ideal polytropic gases in a bounded annular domain when the viscosity and thermal conductivity coefficients are general smooth functions of temperature. A global-in-time, spherically or cylindrically symmetric, classical solution to the initial boundary value problem is shown to exist uniquely and converge exponentially to the constant state as the time tends to infinity under certain assumptions on the initial data and the adiabatic exponent γ. The initial data can be large if γ is sufficiently close to 1. These results are of Nishida-Smoller type and extend the work (Liu et al. (2014) [16]) restricted to the one-dimensional flows.
A quasi-likelihood approach to non-negative matrix factorization
Devarajan, Karthik; Cheung, Vincent C.K.
2017-01-01
A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
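The Poisson (generalized Kullback-Leibler) member of this quasi-likelihood family admits simple multiplicative updates whose monotone descent follows from the EM argument. The sketch below is a generic illustration of that case, not the authors' code; the matrix sizes, seed, and iteration count are arbitrary:

```python
import math
import random

random.seed(0)

def matmul(A, B):
    # plain-Python matrix product
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def kl_div(V, WH):
    # generalized KL divergence D(V || WH), the Poisson quasi-likelihood loss
    return sum(v * math.log(v / wh) - v + wh
               for rv, rwh in zip(V, WH) for v, wh in zip(rv, rwh))

n, m, k = 4, 5, 2
V = [[random.uniform(0.5, 2.0) for _ in range(m)] for _ in range(n)]
W = [[random.uniform(0.1, 1.0) for _ in range(k)] for _ in range(n)]
H = [[random.uniform(0.1, 1.0) for _ in range(m)] for _ in range(k)]

losses = []
for _ in range(100):
    WH = matmul(W, H)
    losses.append(kl_div(V, WH))
    # multiplicative update for H (W and the old WH held fixed)
    for a in range(k):
        for j in range(m):
            num = sum(W[i][a] * V[i][j] / WH[i][j] for i in range(n))
            den = sum(W[i][a] for i in range(n))
            H[a][j] *= num / den
    WH = matmul(W, H)
    # multiplicative update for W (H and the refreshed WH held fixed)
    for i in range(n):
        for a in range(k):
            num = sum(H[a][j] * V[i][j] / WH[i][j] for j in range(m))
            den = sum(H[a][j] for j in range(m))
            W[i][a] *= num / den
```

Each sweep can only decrease the divergence, which is the convergence property the EM framework guarantees.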
Global dynamics of oscillator populations under common noise
NASA Astrophysics Data System (ADS)
Braun, W.; Pikovsky, A.; Matias, M. A.; Colet, P.
2012-07-01
Common noise acting on a population of identical oscillators can synchronize them. We develop a description of this process which is not limited to the states close to synchrony, but provides a global picture of the evolution of the ensembles. The theory is based on the Watanabe-Strogatz transformation, allowing us to obtain closed stochastic equations for the global variables. We show that at the initial stage, the order parameter grows linearly in time, while at the later stages the convergence to synchrony is exponentially fast. Furthermore, we extend the theory to nonidentical ensembles with the Lorentzian distribution of natural frequencies and determine the stationary values of the order parameter in dependence on driving noise and mismatch.
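A toy Euler-Maruyama simulation (not the Watanabe-Strogatz reduction itself; the sensitivity function cos θ, the noise strength, and all other constants are invented for illustration) shows the basic effect: a single shared noise realization drives the order parameter of identical oscillators toward 1.

```python
import math
import random

random.seed(1)

N = 50              # identical oscillators
omega = 1.0         # common natural frequency
sigma = 0.8         # common-noise strength
dt = 0.01
steps = 20000

theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]

def order_param(th):
    # Kuramoto order parameter r = |<exp(i*theta)>|
    c = sum(math.cos(t) for t in th) / len(th)
    s = sum(math.sin(t) for t in th) / len(th)
    return math.hypot(c, s)

r0 = order_param(theta)
for _ in range(steps):
    dW = random.gauss(0.0, math.sqrt(dt))   # the SAME increment for every oscillator
    theta = [t + omega * dt + sigma * math.cos(t) * dW for t in theta]
r1 = order_param(theta)
```

Because the noise is common and the phase sensitivity is smooth, pairwise phase differences contract on average (a negative Lyapunov exponent), so r1 ends up close to 1 from a disordered start.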
Distributed convex optimisation with event-triggered communication in networked systems
NASA Astrophysics Data System (ADS)
Liu, Jiayun; Chen, Weisheng
2016-12-01
This paper studies the distributed convex optimisation problem over directed networks. Motivated by practical considerations, we propose a novel distributed zero-gradient-sum optimisation algorithm with event-triggered communication. As a result, communication and control updates occur only at discrete instants when a predefined condition is satisfied. Thus, compared with time-driven distributed optimisation algorithms, the proposed algorithm has the advantages of lower energy consumption and lower communication cost. Based on Lyapunov approaches, we show that the proposed algorithm makes the system states converge exponentially fast to the solution of the problem and that Zeno behaviour is excluded. Finally, a simulation example is given to illustrate the effectiveness of the proposed algorithm.
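A minimal scalar sketch in the zero-gradient-sum spirit, on an undirected path graph with quadratic local costs f_i(x) = (x - a_i)²/2 (the trigger rule, constants, and graph are invented; the paper treats directed networks and proves Zeno exclusion rigorously). Agents broadcast their state only when it has drifted from the last broadcast value by more than a decaying threshold:

```python
# local costs f_i(x) = 0.5*(x - a_i)^2; the global optimum is mean(a)
a = [1.0, 4.0, 2.0, 5.0]
n = len(a)
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # undirected path graph

x = a[:]        # zero-gradient-sum initialization: each agent at its local minimizer
xhat = x[:]     # last broadcast values seen by the neighbors
h = 0.05        # Euler step
broadcasts = 0

for step in range(4000):
    thresh = 0.5 * (0.999 ** step)          # decaying event-trigger threshold
    for i in range(n):
        if abs(x[i] - xhat[i]) > thresh:
            xhat[i] = x[i]                   # event: broadcast the current state
            broadcasts += 1
    # consensus flow driven by (possibly stale) broadcast values
    x = [x[i] + h * sum(xhat[j] - xhat[i] for j in neighbors[i]) for i in range(n)]

opt = sum(a) / n   # the minimizer of the total cost
```

On an undirected graph the broadcast-driven consensus terms cancel in pairs, so sum(x), and with it the sum of local gradients, is conserved exactly; that conservation is what pins the consensus value to the optimum.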
NASA Astrophysics Data System (ADS)
Ochsenfeld, Christian; Head-Gordon, Martin
1997-05-01
To exploit the exponential decay found in numerical studies for the density matrix and its derivative with respect to nuclear displacements, we reformulate the coupled perturbed self-consistent field (CPSCF) equations and a quadratically convergent SCF (QCSCF) method for Hartree-Fock and density functional theory within a local density matrix-based scheme. Our D-CPSCF (density matrix-based CPSCF) and D-QCSCF schemes open the way for exploiting sparsity and for achieving asymptotically linear scaling of computational complexity with molecular size (M), in the case of D-CPSCF for all O(M) derivative densities. Furthermore, even for small molecules these methods are strongly competitive with conventional algorithms.
The damped wave equation with unbounded damping
NASA Astrophysics Data System (ADS)
Freitas, Pedro; Siegl, Petr; Tretter, Christiane
2018-06-01
We analyze new phenomena arising in linear damped wave equations on unbounded domains when the damping is allowed to become unbounded at infinity. We prove the generation of a contraction semigroup, study the relation between the spectra of the semigroup generator and the associated quadratic operator function, the convergence of non-real eigenvalues in the asymptotic regime of diverging damping on a subdomain, and we investigate the appearance of essential spectrum on the negative real axis. We further show that the presence of the latter prevents exponential estimates for the semigroup and turns out to be a robust effect that cannot be easily canceled by adding a positive potential. These analytic results are illustrated by examples.
Preszler, Jonathan; Burns, G. Leonard; Litson, Kaylee; Geiser, Christian; Servera, Mateu
2016-01-01
The objective was to determine and compare the trait and state components of oppositional defiant disorder (ODD) symptom reports across multiple informants. Mothers, fathers, primary teachers, and secondary teachers rated the occurrence of the ODD symptoms in 810 Spanish children (55% boys) on two occasions (end first and second grades). Single source latent state-trait (LST) analyses revealed that ODD symptom ratings from all four sources showed more trait (M = 63%) than state residual (M = 37%) variance. A multiple source LST analysis revealed substantial convergent validity of mothers’ and fathers’ trait variance components (M = 68%) and modest convergent validity of state residual variance components (M = 35%). In contrast, primary and secondary teachers showed low convergent validity relative to mothers for trait variance (Ms = 31%, 32%, respectively) and essentially zero convergent validity relative to mothers for state residual variance (Ms = 1%, 3%, respectively). Although ODD symptom ratings reflected slightly more trait- than state-like constructs within each of the four sources separately across occasions, strong convergent validity for the trait variance only occurred within settings (i.e., mothers with fathers; primary with secondary teachers) with the convergent validity of the trait and state residual variance components being low to non-existent across settings. These results suggest that ODD symptom reports are trait-like across time for individual sources with this trait variance, however, only having convergent validity within settings. Implications for assessment of ODD are discussed. PMID:27148784
Rapid computation of directional wellbore drawdown in a confined aquifer via Poisson resummation
NASA Astrophysics Data System (ADS)
Blumenthal, Benjamin J.; Zhan, Hongbin
2016-08-01
We have derived a rapidly computed analytical solution for drawdown caused by a partially or fully penetrating directional wellbore (vertical, horizontal, or slant) via Green's function method. The mathematical model assumes an anisotropic, homogeneous, confined, box-shaped aquifer. Any dimension of the box can have one of six possible boundary conditions: 1) both sides no-flux; 2) one side no-flux - one side constant-head; 3) both sides constant-head; 4) one side no-flux; 5) one side constant-head; 6) free boundary conditions. The solution has been optimized for rapid computation via Poisson resummation, derivation of convergence rates, and numerical optimization of integration techniques. Upon application of the Poisson resummation method, we were able to derive two sets of solutions with inverse convergence rates, namely an early-time rapidly convergent series (solution-A) and a late-time rapidly convergent series (solution-B). From this work we were able to link Green's function method (solution-B) back to image well theory (solution-A). We then derived an equation defining when the convergence rate of solution-A equals that of solution-B, which we termed the switch time. Utilizing the more rapidly convergent solution at the appropriate time, we obtained rapid convergence at all times. We have also shown that one may simplify each of the three infinite series for the three-dimensional solution to 11 terms and still maintain a maximum relative error of less than 10^-14.
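The essential mechanism, two series for the same Green's function with inverse convergence rates related by Poisson summation, can be seen already in the 1-D periodic heat kernel, a simplified stand-in for the drawdown solution (this toy is not the authors' solution; x, t, and the truncation lengths are arbitrary):

```python
import math

def heat_fourier(x, t, terms=200):
    # spectral series: converges rapidly at LATE time (the analogue of solution-B)
    s = 1.0
    for k in range(1, terms):
        s += 2.0 * math.exp(-4.0 * math.pi ** 2 * k ** 2 * t) * math.cos(2.0 * math.pi * k * x)
    return s

def heat_images(x, t, terms=200):
    # image-source series: converges rapidly at EARLY time (the analogue of solution-A)
    s = math.exp(-x * x / (4.0 * t))
    for n in range(1, terms):
        s += math.exp(-(x + n) ** 2 / (4.0 * t)) + math.exp(-(x - n) ** 2 / (4.0 * t))
    return s / math.sqrt(4.0 * math.pi * t)

x = 0.3
early, late = 0.01, 1.0
ref_early = heat_images(x, early)    # trusted value at early time
ref_late = heat_fourier(x, late)     # trusted value at late time
```

Poisson summation guarantees the two representations agree at every t; evaluating whichever series converges faster at the current time, with the crossover at a computable "switch time", gives rapid evaluation at all times.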
Decay rates of Gaussian-type I-balls and Bose-enhancement effects in 3+1 dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawasaki, Masahiro; Yamada, Masaki; ICRR, University of Tokyo, Kashiwa, 277-8582
2014-02-03
I-balls/oscillons are long-lived spatially localized lumps of a scalar field which may be formed after inflation. In scalar field theory with a monomial potential that is nearly, but shallower than, quadratic, which is motivated by chaotic inflationary models and supersymmetric theories, the scalar field configuration of I-balls is approximately Gaussian. If the I-ball interacts with another scalar field, the I-ball eventually decays into radiation. Recently, it was pointed out that the decay rate of I-balls increases exponentially by the effects of Bose enhancement under some conditions, and a non-perturbative method to compute the exponential growth rate has been derived. In this paper, we apply the method to the Gaussian-type I-ball in 3+1 dimensions assuming spherical symmetry, and calculate the partial decay rates into partial waves, labelled by the angular momentum of daughter particles. We reveal the conditions under which the I-ball decays exponentially, which are found to depend on the mass and angular momentum of daughter particles and also to be affected by the quantum uncertainty in the momentum of daughter particles.
NASA Astrophysics Data System (ADS)
Sravanthi, C. S.; Gorla, R. S. R.
2018-02-01
The aim of this paper is to study the effects of chemical reaction and heat source/sink on a steady MHD (magnetohydrodynamic) two-dimensional mixed convective boundary layer flow of a Maxwell nanofluid over a porous exponentially stretching sheet in the presence of suction/blowing. Convective boundary conditions of temperature and nanoparticle concentration are employed in the formulation. Similarity transformations are used to convert the governing partial differential equations into non-linear ordinary differential equations. The resulting non-linear system has been solved analytically using an efficient technique, namely: the homotopy analysis method (HAM). Expressions for velocity, temperature and nanoparticle concentration fields are developed in series form. Convergence of the constructed solution is verified. A comparison is made with the available results in the literature and our results are in very good agreement with the known results. The obtained results are presented through graphs for several sets of values of the parameters and salient features of the solutions are analyzed. Numerical values of the local skin-friction, Nusselt number and nanoparticle Sherwood number are computed and analyzed.
NASA Astrophysics Data System (ADS)
Lopes, Sílvia R. C.; Prass, Taiane S.
2014-05-01
Here we present a theoretical study of the main properties of Fractionally Integrated Exponential Generalized Autoregressive Conditional Heteroskedastic (FIEGARCH) processes. We analyze the conditions for the existence, the invertibility, the stationarity and the ergodicity of these processes. We prove that, if a process follows a FIEGARCH(p,d,q) specification then, under mild conditions, its log-volatility is an ARFIMA(q,d,0) process with correlated innovations, that is, an autoregressive fractionally integrated moving average process. The convergence order of the polynomial coefficients that describe the volatility is presented, and results related to the spectral representation and to the covariance structure of both processes are discussed. Expressions for the kurtosis and the asymmetry measures for any stationary FIEGARCH(p,d,q) process are also derived. The h-step ahead forecasts for these processes are given, together with their respective mean square forecast errors. The work also presents a Monte Carlo simulation study showing how to generate, estimate and forecast based on six different FIEGARCH models. The forecasting performance of six models belonging to the class of autoregressive conditional heteroskedastic models (namely, ARCH-type models) and radial basis models is compared through an empirical application to the Brazilian stock market exchange index.
NASA Astrophysics Data System (ADS)
Smekens, F.; Létang, J. M.; Noblet, C.; Chiavassa, S.; Delpon, G.; Freud, N.; Rit, S.; Sarrut, D.
2014-12-01
We propose the split exponential track length estimator (seTLE), a new kerma-based method combining the exponential variant of the TLE and a splitting strategy to speed up Monte Carlo (MC) dose computation for low energy photon beams. The splitting strategy is applied to both the primary and the secondary emitted photons, triggered by either the MC events generator for primaries or the photon interactions generator for secondaries. Split photons are replaced by virtual particles for fast dose calculation using the exponential TLE. Virtual particles are propagated by ray-tracing in voxelized volumes and by conventional MC navigation elsewhere. Hence, the contribution of volumes such as collimators, treatment couch and holding devices can be taken into account in the dose calculation. We evaluated and analysed the seTLE method for two realistic small animal radiotherapy treatment plans. The effect of the kerma approximation, i.e. the complete deactivation of electron transport, was investigated. The efficiency of seTLE against splitting multiplicities was also studied. A benchmark with analog MC and TLE was carried out in terms of dose convergence and efficiency. The results showed that the deactivation of electrons impacts the dose at the water/bone interface in high dose regions. The maximum and mean dose differences, normalized to the dose at the isocenter, were 14% and 2%, respectively. Optimal splitting multiplicities were found to be around 300. In all situations, discrepancies in integral dose were below 0.5% and 99.8% of the voxels fulfilled a 1%/0.3 mm gamma index criterion. Efficiency gains of seTLE varied from 3.2 × 10^5 to 7.7 × 10^5 compared to analog MC and from 13 to 15 compared to conventional TLE. In conclusion, seTLE provides results similar to the TLE while increasing the efficiency by a factor between 13 and 15, which makes it particularly well-suited to typical small animal radiation therapy applications.
Sulfur poisoning of hydrocarbon oxidation by palladium. M.S. Thesis
NASA Technical Reports Server (NTRS)
Baumgartner, A. J.
1975-01-01
Using a differential bed recycle reactor, the oxidation of ethane and diethyl ketone by a Pd catalyst was studied at the 0-30 ppm level in air. In both cases first-order kinetics were observed. The ethane oxidation rate was characterized in the Arrhenius form by a pre-exponential of 1.0 × 10^8 cm/sec and an E sub a of 27 kcal/mole. The diethyl ketone oxidation rate was characterized by a pre-exponential of 5.7 x -1000 cm/sec and an E sub a of 14 kcal/mole. Poisoning of ethane oxidation was also investigated by hydrogen sulfide and to a smaller extent by the refrigerants Freon 22 and Gentron 142-B. Poisoning by Gentron 142-B was much more severe than by hydrogen sulfide. Kinetic experiments indicated that only the pre-exponential was changing.
A globally convergent MC algorithm with an adaptive learning rate.
Peng, Dezhong; Yi, Zhang; Xiang, Yong; Zhang, Haixian
2012-02-01
This brief deals with the problem of minor component analysis (MCA). Artificial neural networks can be exploited to achieve the task of MCA. Recent research works show that convergence of neural networks based MCA algorithms can be guaranteed if the learning rates are less than certain thresholds. However, the computation of these thresholds needs information about the eigenvalues of the autocorrelation matrix of data set, which is unavailable in online extraction of minor component from input data stream. In this correspondence, we introduce an adaptive learning rate into the OJAn MCA algorithm, such that its convergence condition does not depend on any unobtainable information, and can be easily satisfied in practical applications.
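The learning-rate issue can be made concrete with a simpler relative, not the OJAn rule itself: projected gradient descent on the Rayleigh quotient over the unit sphere extracts the minor component, and its fixed step size must be small relative to the eigenvalue scale of the autocorrelation matrix, which is exactly the unobtainable quantity an adaptive rate avoids. All constants below are invented:

```python
import math
import random

random.seed(2)

# a small symmetric "autocorrelation" matrix; its minor eigenvalue is near 0.44
C = [[3.0, 0.2, 0.1],
     [0.2, 2.0, 0.3],
     [0.1, 0.3, 0.5]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def normalize(v):
    nrm = math.sqrt(dot(v, v))
    return [x / nrm for x in v]

w = normalize([random.gauss(0.0, 1.0) for _ in range(3)])
eta = 0.1          # fixed learning rate; must be small vs. the eigenvalue scale
rayleigh = []
for _ in range(500):
    Cw = matvec(C, w)
    rho = dot(w, Cw)                 # Rayleigh quotient, descends toward lambda_min
    rayleigh.append(rho)
    w = normalize([wi - eta * (ci - rho * wi) for wi, ci in zip(w, Cw)])
```

The iterate settles at the smallest eigenvalue; with eta chosen too large relative to the eigenvalue spread, the same recursion diverges, which is the threshold problem the adaptive rate removes.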
Jiaxi Wang; Guolei Li; Jeremiah R. Pinto; Jiajia Liu; Wenhui Shi; Yong Liu
2015-01-01
Optimum fertilization levels are often determined solely from nursery growth responses. However, it is the performance of the seedling on the outplanting site that is the most important. For Pinus species seedlings, little information is known about the field performance of plants cultured with different nutrient rates, especially with exponential fertilization. In...
An exactly solvable, spatial model of mutation accumulation in cancer
NASA Astrophysics Data System (ADS)
Paterson, Chay; Nowak, Martin A.; Waclaw, Bartlomiej
2016-12-01
One of the hallmarks of cancer is the accumulation of driver mutations which increase the net reproductive rate of cancer cells and allow them to spread. This process has been studied in mathematical models of well mixed populations, and in computer simulations of three-dimensional spatial models. But the computational complexity of these more realistic, spatial models makes it difficult to simulate realistically large and clinically detectable solid tumours. Here we describe an exactly solvable mathematical model of a tumour featuring replication, mutation and local migration of cancer cells. The model predicts a quasi-exponential growth of large tumours, even if different fragments of the tumour grow sub-exponentially due to nutrient and space limitations. The model reproduces clinically observed tumour growth times using biologically plausible rates of cell birth, death, and migration. We also show that the expected number of accumulated driver mutations increases exponentially in time if the average fitness gain per driver is constant, and that it reaches a plateau if the gains decrease over time. We discuss the realism of the underlying assumptions and possible extensions of the model.
A new look at the convergence of a famous sequence
NASA Astrophysics Data System (ADS)
Dobrescu, Mihaela
2010-12-01
A new proof of the monotonicity of the sequence (1 + 1/n)^n is given as a special case of a large family of monotonic and bounded, hence convergent, sequences. The new proof is based on basic calculus results rather than induction, which makes it accessible to a larger audience including business and life sciences students and faculty. The slow rate of convergence of the two sequences is also discussed, and convergence bounds are found.
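Assuming the famous sequence in question is a_n = (1 + 1/n)^n converging to e (the symbol did not survive in this copy), its slow, first-order convergence is easy to exhibit numerically: the error decays like e/(2n), so each extra decimal digit of accuracy costs a tenfold increase in n.

```python
import math

def a(n):
    # the classic monotone, bounded sequence converging to e
    return (1.0 + 1.0 / n) ** n

ns = [10, 100, 1000, 10000]
errors = [math.e - a(n) for n in ns]               # positive: a_n increases toward e
scaled = [n * err for n, err in zip(ns, errors)]   # tends to e/2, about 1.3591
```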
NASA Astrophysics Data System (ADS)
Niki, Hiroshi; Harada, Kyouji; Morimoto, Munenori; Sakakihara, Michio
2004-03-01
Several preconditioned iterative methods reported in the literature have been used to improve the convergence rate of the Gauss-Seidel method. In this article, on the basis of nonnegative matrix theory, comparisons between some splittings for such preconditioned matrices are derived. Simple numerical examples are also given.
Operator induced multigrid algorithms using semirefinement
NASA Technical Reports Server (NTRS)
Decker, Naomi; Vanrosendale, John
1989-01-01
A variant of multigrid, based on zebra relaxation, and a new family of restriction/prolongation operators is described. Using zebra relaxation in combination with an operator-induced prolongation leads to fast convergence, since the coarse grid can correct all error components. The resulting algorithms are not only fast, but are also robust, in the sense that the convergence rate is insensitive to the mesh aspect ratio. This is true even though line relaxation is performed in only one direction. Multigrid becomes a direct method if an operator-induced prolongation is used, together with the induced coarse grid operators. Unfortunately, this approach leads to stencils which double in size on each coarser grid. The use of an implicit three point restriction can be used to factor these large stencils, in order to retain the usual five or nine point stencils, while still achieving fast convergence. This algorithm achieves a V-cycle convergence rate of 0.03 on Poisson's equation, using 1.5 zebra sweeps per level, while the convergence rate improves to 0.003 if optimal nine point stencils are used. Numerical results for two and three dimensional model problems are presented, together with a two level analysis explaining these results.
NASA Astrophysics Data System (ADS)
He, Xiaozhou; Wang, Yin; Tong, Penger
2018-05-01
Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs), and one does not understand why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that, because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ε. The conditional PDF G(δT|ε) of δT under a constant ε is found to be of Gaussian form, and its variance σ_T² for different values of ε follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism for the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.
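The mechanism described is the classical Gaussian scale mixture: if a zero-mean Gaussian's variance is itself exponentially distributed, the marginal is exactly a Laplace (two-sided exponential) law. A quick sampling check (parameters invented) recovers the Laplace signature, a kurtosis of 6 instead of the Gaussian 3:

```python
import math
import random

random.seed(3)

n = 200000
s = 1.0   # mean of the exponential distribution of the variance sigma_T^2
samples = []
for _ in range(n):
    var = random.expovariate(1.0 / s)                  # sigma_T^2 ~ Exponential(mean s)
    samples.append(random.gauss(0.0, math.sqrt(var)))  # deltaT | sigma_T^2 ~ Gaussian

mean = sum(samples) / n
var_hat = sum((z - mean) ** 2 for z in samples) / n
kurt = sum((z - mean) ** 4 for z in samples) / n / var_hat ** 2  # Laplace value: 6
```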
Assessments of astronaut effectiveness
NASA Technical Reports Server (NTRS)
Rose, Robert M.; Helmreich, Robert L.; Fogg, Louis; Mcfadden, Terry J.
1993-01-01
This study examined the reliability and convergent validity of three methods of peer and supervisory ratings of the effectiveness of individual NASA astronauts and their relationships with flight assignments. These techniques were found to be reliable and relatively convergent. Seniority and a peer-rated Performance and Competence factor proved to be most closely associated with flight assignments, while supervisor ratings and a peer-rated Group Living and Personality factor were found to be unrelated. Results have implications for the selection and training of astronauts.
Liang, X B; Wang, J
2000-01-01
This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special problem which can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the feasible bound region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region for any initial state, even outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
Klein, F.W.; Wright, Tim
2008-01-01
The remarkable catalog of Hawaiian earthquakes going back to the 1820s is based on missionary diaries, newspaper accounts, and instrumental records and spans the great M7.9 Kau earthquake of April 1868 and its aftershock sequence. The earthquake record since 1868 defines a smooth curve, complete to M5.2, of the rate declining into the 21st century, after five short volcanic swarms are removed. A single aftershock curve fits the earthquake record, even with numerous M6 and 7 main shocks and eruptions. The timing of some moderate earthquakes may be controlled by magmatic stresses, but their overall long-term rate reflects that of aftershocks of the Kau earthquake. The 1868 earthquake is, therefore, the largest and most controlling stress event in the 19th and 20th centuries. We fit both the modified Omori (power law) and stretched exponential (SE) functions to the earthquakes. We found that the modified Omori law is a good fit to the M ≥ 5.2 earthquake rate for the first 10 years or so and that the more rapidly declining SE function fits better thereafter, as supported by three statistical tests. The switch to exponential decay suggests that a possible change in aftershock physics may occur, from rate-and-state fault friction, with no change in the stress rate, to viscoelastic stress relaxation. The 61-year exponential decay constant is at the upper end of the range of geodetic relaxation times seen after other global earthquakes. Modeling deformation in Hawaii is beyond the scope of this paper, but a simple interpretation of the decay suggests an effective viscosity of 10^19 to 10^20 Pa s pertains in the volcanic spreading of Hawaii's flanks. The rapid decline in earthquake rate poses questions for seismic hazard estimates in an area that is cited as one of the most hazardous in the United States.
McKellar, Robin C
2008-01-15
Developing accurate mathematical models to describe the pre-exponential lag phase in food-borne pathogens presents a considerable challenge to food microbiologists. While the growth rate is influenced by current environmental conditions, the lag phase is affected in addition by the history of the inoculum. A deeper understanding of physiological changes taking place during the lag phase would improve accuracy of models, and in earlier studies a strain of Pseudomonas fluorescens containing the Tn7-luxCDABE gene cassette regulated by the rRNA promoter rrnB P2 was used to measure the influence of starvation, growth temperature and sub-lethal heating on promoter expression and subsequent growth. The present study expands the models developed earlier to include a model which describes the change from exponential to linear increase in promoter expression with time when the exponential phase of growth commences. A two-phase linear model with Poisson weighting was used to estimate the lag (LPDLin) and the rate (RLin) for this linear increase in bioluminescence. The Spearman rank correlation coefficient (r=0.830) between the LPDLin and the growth lag phase (LPDOD) was extremely significant (P
Hundreds of Genes Experienced Convergent Shifts in Selective Pressure in Marine Mammals
Chikina, Maria; Robinson, Joseph D.; Clark, Nathan L.
2016-01-01
Abstract Mammal species have made the transition to the marine environment several times, and their lineages represent one of the classical examples of convergent evolution in morphological and physiological traits. Nevertheless, the genetic mechanisms of their phenotypic transition are poorly understood, and investigations into convergence at the molecular level have been inconclusive. While past studies have searched for convergent changes at specific amino acid sites, we propose an alternative strategy to identify those genes that experienced convergent changes in their selective pressures, visible as changes in evolutionary rate specifically in the marine lineages. We present evidence of widespread convergence at the gene level by identifying parallel shifts in evolutionary rate during three independent episodes of mammalian adaptation to the marine environment. Hundreds of genes accelerated their evolutionary rates in all three marine mammal lineages during their transition to aquatic life. These marine-accelerated genes are highly enriched for pathways that control recognized functional adaptations in marine mammals, including muscle physiology, lipid-metabolism, sensory systems, and skin and connective tissue. The accelerations resulted from both adaptive evolution as seen in skin and lung genes, and loss of function as in gustatory and olfactory genes. In regard to sensory systems, this finding provides further evidence that reduced senses of taste and smell are ubiquitous in marine mammals. Our analysis demonstrates the feasibility of identifying genes underlying convergent organism-level characteristics on a genome-wide scale and without prior knowledge of adaptations, and provides a powerful approach for investigating the physiological functions of mammalian genes. PMID:27329977
An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction
Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo
2018-01-01
The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods. PMID:29342857
Analysis of Online Composite Mirror Descent Algorithm.
Lei, Yunwen; Zhou, Ding-Xuan
2017-03-01
We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
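As a concrete special case (a sketch, not the paper's general analysis), with the Euclidean mirror map ½‖·‖² and an ℓ1 regularizer, composite mirror descent reduces to proximal online gradient descent with soft-thresholding; here with polynomially decaying step sizes and the last iterate kept, all constants illustrative:

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal map of tau * ||.||_1, the sparsity-inducing regularizer.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Online composite mirror descent, Euclidean special case:
# w_{t+1} = prox_{eta_t * lam * ||.||_1}(w_t - eta_t * g_t),
# with polynomially decaying steps eta_t = eta0 * t**(-1/2).
rng = np.random.default_rng(1)
w_star = np.array([2.0, -1.0, 0.0, 0.0, 0.0])   # sparse target
w = np.zeros(5)
lam, eta0 = 0.01, 0.05
for t in range(1, 5001):
    x = rng.standard_normal(5)
    y = x @ w_star + 0.01 * rng.standard_normal()
    g = x * (x @ w - y)                  # gradient of the squared loss
    eta = eta0 * t ** -0.5
    w = soft_threshold(w - eta * g, eta * lam)
```

The last iterate approaches the sparse target, with a small bias of order lam from the regularizer, which is the regime the error analysis above addresses without averaging.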
NASA Technical Reports Server (NTRS)
Ito, Kazufumi
1987-01-01
The linear quadratic optimal control problem on an infinite time interval for linear time-invariant systems defined on Hilbert spaces is considered. The optimal control is given in feedback form in terms of the solution π of the associated algebraic Riccati equation (ARE). A Ritz-type approximation is used to obtain a sequence π^N of finite-dimensional approximations of the solution to the ARE. A sufficient condition ensuring that π^N converges strongly to π is obtained. Under this condition, a formula is derived which can be used to obtain the rate of convergence of π^N to π. The results are demonstrated for the Galerkin approximation applied to parabolic systems and for the averaging approximation applied to hereditary differential systems.
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as: progression for >15% volume increase, regression for >15% decrease, and stabilization for within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% in the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients receiving CK SRS, and it was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled.
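A minimal sketch of the kind of three-point exponential model described above, fitting V(t) = V∞ + (V0 − V∞)e^(−kt) through three equally spaced volume measurements; the numbers and parameter names are illustrative, not the study's data:

```python
import math

def three_point_exponential(v0, v1, v2, dt):
    # Closed-form fit of V(t) = v_inf + (v0 - v_inf)*exp(-k*t) through
    # three equally spaced volume measurements taken dt apart
    # (e.g. successive MRI scans).
    r = (v1 - v2) / (v0 - v1)            # equals exp(-k*dt)
    k = -math.log(r) / dt
    v_inf = (v0 * v2 - v1 * v1) / (v0 + v2 - 2.0 * v1)
    return v_inf, k

# Synthetic check: volumes generated from v_inf=2.0, v0=10.0, k=0.1.
dt = 4.0
v0, v1, v2 = (2.0 + 8.0 * math.exp(-0.1 * t) for t in (0.0, 4.0, 8.0))
v_inf, k = three_point_exponential(v0, v1, v2, dt)
```

The asymptote v_inf is the predicted final tumor volume; the fit assumes a monotone, noise-free decay, so with real scans one would instead least-squares fit all time points.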
An implicit iterative algorithm with a tuning parameter for Itô Lyapunov matrix equations
NASA Astrophysics Data System (ADS)
Zhang, Ying; Wu, Ai-Guo; Sun, Hui-Jie
2018-01-01
In this paper, an implicit iterative algorithm is proposed for solving a class of Lyapunov matrix equations arising in Itô stochastic linear systems. A tuning parameter is introduced in this algorithm, and thus the convergence rate of the algorithm can be changed. Some conditions are presented such that the developed algorithm is convergent. In addition, an explicit expression is also derived for the optimal tuning parameter, which guarantees that the obtained algorithm achieves its fastest convergence rate. Finally, numerical examples are employed to illustrate the effectiveness of the given algorithm.
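The role of the tuning parameter can be illustrated on the deterministic Lyapunov equation A'X + XA + Q = 0 (the Itô case in the paper carries an extra diffusion term and the paper's algorithm is implicit; this explicit fixed-point sketch only shows how a parameter μ sets the convergence rate):

```python
import numpy as np

def lyap_iter(A, Q, mu, iters=500):
    # Fixed-point iteration X <- X + mu*(A'X + XA + Q) for the
    # deterministic Lyapunov equation A'X + XA + Q = 0, A stable.
    # mu rescales the iteration-map spectrum, hence the convergence rate.
    X = np.zeros_like(Q)
    for _ in range(iters):
        X = X + mu * (A.T @ X + X @ A + Q)
    return X

A = np.array([[-1.0, 0.0], [0.0, -2.0]])
Q = np.eye(2)
X = lyap_iter(A, Q, mu=0.2)   # exact solution here is diag(0.5, 0.25)
```

In this example the Lyapunov operator has eigenvalues λi + λj ∈ {−2, −3, −4}, so μ = 1/3 minimizes the spectral radius of the iteration map (radius 1/3), while μ = 0.2 converges at rate 0.6 per step; an explicit optimal-μ formula of this flavor is what the paper derives for its implicit scheme.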
The convergence of health care financing structures: empirical evidence from OECD-countries.
Leiter, Andrea M; Theurl, Engelbert
2012-02-01
The convergence/divergence of health care systems between countries is an interesting facet of health care system research from a macroeconomic perspective. In this paper, we concentrate on an important dimension of every health care system, namely the convergence/divergence of health care financing (HCF). Based on data from 22 OECD countries over the period 1970-2005, we use the public financing ratio (public financing in % of total HCF) and per capita public HCF as indicators of convergence. By applying different concepts of convergence, we find that HCF is converging. This conclusion also holds when we look at smaller subgroups of countries and shorter time periods. However, we find evidence that countries do not move towards a common mean and that the rate of convergence is decreasing over time.
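One standard notion behind such tests is σ-convergence: the cross-country dispersion of a financing indicator shrinking over time. A sketch on synthetic data (deliberately built to converge toward a common mean, which the paper's actual finding rejects; all numbers are illustrative):

```python
import numpy as np

def sigma_convergence(panel):
    # panel: years x countries matrix of, e.g., public financing ratios.
    # Sigma-convergence holds if cross-country dispersion falls over time.
    sd = panel.std(axis=1)
    return sd[0] > sd[-1], sd

rng = np.random.default_rng(0)
years, countries = 36, 22
start = rng.uniform(0.4, 0.9, countries)   # initial public-share ratios
target = 0.75                              # common mean (illustrative)
panel = np.array([target + (start - target) * np.exp(-0.05 * t)
                  for t in range(years)])
converging, sd = sigma_convergence(panel)
```

Real panels would pair this with β-convergence regressions (initially low-share countries catching up), which is where the "no common mean" distinction above comes from.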
Li, Shao-Peng; Cadotte, Marc W; Meiners, Scott J; Pu, Zhichao; Fukami, Tadashi; Jiang, Lin
2016-09-01
Whether plant communities in a given region converge towards a particular stable state during succession has long been debated, but rarely tested at a sufficiently long time scale. By analysing a 50-year continuous study of post-agricultural secondary succession in New Jersey, USA, we show that the extent of community convergence varies with the spatial scale and species abundance classes. At the larger field scale, abundance-based dissimilarities among communities decreased over time, indicating convergence of dominant species, whereas incidence-based dissimilarities showed little temporal trend, indicating no sign of convergence. In contrast, plots within each field diverged in both species composition and abundance. Abundance-based successional rates decreased over time, whereas rare species and herbaceous plants showed little change in temporal turnover rates. Initial abandonment conditions only influenced community structure early in succession. Overall, our findings provide strong evidence for scale and abundance dependence of stochastic and deterministic processes over old-field succession. © 2016 John Wiley & Sons Ltd/CNRS.
A linear recurrent kernel online learning algorithm with sparse updates.
Fan, Haijin; Song, Qing
2014-02-01
In this paper, we propose a recurrent kernel algorithm with selectively sparse updates for online learning. The algorithm introduces a linear recurrent term in the estimation of the current output. This makes the past information reusable for updating of the algorithm in the form of a recurrent gradient term. To ensure that the reuse of this recurrent gradient indeed accelerates the convergence speed, a novel hybrid recurrent training is proposed to switch on or off learning the recurrent information according to the magnitude of the current training error. Furthermore, the algorithm includes a data-dependent adaptive learning rate which can provide guaranteed system weight convergence at each training iteration. The learning rate is set as zero when the training violates the derived convergence conditions, which makes the algorithm updating process sparse. Theoretical analyses of the weight convergence are presented and experimental results show the good performance of the proposed algorithm in terms of convergence speed and estimation accuracy. Copyright © 2013 Elsevier Ltd. All rights reserved.
A New Labor Theory of Value for Rational Planning Through Use of the Bourgeois Profit Rate
Weizsäcker, C. C. Von; Samuelson, Paul A.
1971-01-01
To maximize steady-state per capita consumption, goods should be valued at their “synchronized labor requirement costs”, which are shown to deviate from Marx's schemata of “values” but to coincide with bourgeois prices calculated at dated labor requirements, marked up by compound interest at a profit or interest rate equal to the system's rate of exponential growth. With capitalists saving all their incomes for future profits, workers get all there is to get. Departures from such an exogenous, or endogenous, golden-rule state are the rule in history rather than the exception. In the case of exponential labor-augmenting change, it is shown that competitive prices will equal historically embodied labor content. PMID:16591926
Thermally induced spin rate ripple on spacecraft with long radial appendages
NASA Technical Reports Server (NTRS)
Fedor, J. V.
1983-01-01
A thermally induced spin rate ripple hypothesis is proposed to explain the spin rate anomaly observed on ISEE-B. It involves the two radial 14.5 meter beryllium copper tape ribbons passing in and out of the spacecraft hub shadow. A thermal lag time constant is applied to the thermally induced ribbon displacements, which perturb the spin rate. It is inferred that the averaged thermally induced ribbon displacements are coupled to the ribbon angular motion. A possible exponential build-up of the in-plane motion of the ribbon, which in turn causes the spin rate ripple, ultimately limited by damping in the ribbon and spacecraft, is shown. The analysis indicates a qualitative increase in the oscillation period, for which the thermal lag is fundamental. The numerical parameter values required to agree with the in-orbit initial exponential build-up are found to be reasonable; those required for the ripple period are somewhat extreme.
Convergence analysis of a monotonic penalty method for American option pricing
NASA Astrophysics Data System (ADS)
Zhang, Kai; Yang, Xiaoqi; Teo, Kok Lay
2008-12-01
This paper studies the convergence of a monotonic penalty method for pricing American options. A monotonic penalty method is first proposed to solve the complementarity problem arising from the valuation of American options, which produces a nonlinear degenerate parabolic PDE with the Black-Scholes operator. Based on variational theory, the solvability and convergence properties of this penalty approach are established in a proper infinite-dimensional space. Moreover, the convergence rate of the combination of two power penalty functions is obtained.
Successful technical trading agents using genetic programming.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Othling, Andrew S.; Kelly, John A.; Pryor, Richard J.
2004-10-01
Genetic programming (GP) has proved to be a highly versatile and useful tool for identifying relationships in data for which a more precise theoretical construct is unavailable. In this project, we use a GP search to develop trading strategies for agent based economic models. These strategies use stock prices and technical indicators, such as the moving average convergence/divergence and various exponentially weighted moving averages, to generate buy and sell signals. We analyze the effect of complexity constraints on the strategies as well as the relative performance of various indicators. We also present innovations in the classical genetic programming algorithm that appear to improve convergence for this problem. Technical strategies developed by our GP algorithm can be used to control the behavior of agents in economic simulation packages, such as ASPEN-D, adding variety to the current market fundamentals approach. The exploitation of arbitrage opportunities by technical analysts may help increase the efficiency of the simulated stock market, as it does in the real world. By improving the behavior of simulated stock markets, we can better estimate the effects of shocks to the economy due to terrorism or natural disasters.
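The two indicator families the strategies draw on can be sketched in a few lines (standard textbook definitions with the usual 12/26/9 spans, not the project's code):

```python
def ema(prices, span):
    # Exponentially weighted moving average with smoothing 2/(span+1).
    alpha = 2.0 / (span + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def macd(prices, fast=12, slow=26, signal=9):
    # Moving average convergence/divergence: fast EMA minus slow EMA,
    # plus a signal line (an EMA of the MACD line itself).
    macd_line = [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
    return macd_line, ema(macd_line, signal)

prices = [100 + 0.5 * t for t in range(60)]        # steady uptrend
macd_line, signal_line = macd(prices)
```

In a steady uptrend the MACD line settles above zero and above its signal line; GP search then evolves how such signals are combined into buy/sell rules.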
NASA Astrophysics Data System (ADS)
Berk, Alexander
2013-03-01
Exact expansions for Voigt line-shape total, line-tail and spectral bin equivalent widths and for Voigt finite spectral bin single-line transmittances have been derived in terms of optical depth dependent exponentially-scaled modified Bessel functions of integer order and optical depth independent Fourier integral coefficients. The series are convergent for the full range of Voigt line-shapes, from pure Doppler to pure Lorentzian. In the Lorentz limit, the expansion reduces to the Ladenburg and Reiche function for the total equivalent width. Analytic expressions are derived for the first 8 Fourier coefficients for pure Lorentzian lines, for pure Doppler lines and for Voigt lines with at most moderate Doppler dependence. A strong-line limit sum rule on the Fourier coefficients is enforced to define an additional Fourier coefficient and to optimize convergence of the truncated expansion. The moderate Doppler dependence scenario is applicable to and has been implemented in the MODTRAN5 atmospheric band model radiative transfer software. Finite-bin transmittances computed with the truncated expansions reduce transmittance residuals compared to the former Rodgers-Williams equivalent width based approach by ∼2 orders of magnitude.
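The Lorentz-limit reduction mentioned above, the Ladenburg-Reiche function L(u) = u e^(−u)[I0(u) + I1(u)], can be evaluated directly with exponentially scaled modified Bessel functions (a sketch assuming SciPy is available; equivalent-width normalization conventions are left aside):

```python
import math
from scipy.special import ive   # exponentially scaled modified Bessel I_n

def ladenburg_reiche(u):
    # L(u) = u * exp(-u) * (I0(u) + I1(u)); ive(n, u) = exp(-u) * I_n(u),
    # so the product stays finite for strong lines (large optical depth u).
    return u * (ive(0, u) + ive(1, u))

weak = ladenburg_reiche(1e-4)                 # weak-line limit: L(u) ~ u
strong = ladenburg_reiche(1e4)                # strong-line (square-root) limit
strong_asym = math.sqrt(2.0 * 1e4 / math.pi)  # L(u) ~ sqrt(2u/pi)
```

The linear weak-line and square-root strong-line behaviors recovered here are the limits the Fourier-coefficient sum rule in the abstract is designed to respect.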
Virial Coefficients and Equations of State for Hard Polyhedron Fluids.
Irrgang, M Eric; Engel, Michael; Schultz, Andrew J; Kofke, David A; Glotzer, Sharon C
2017-10-24
Hard polyhedra are a natural extension of the hard sphere model for simple fluids, but there is no general scheme for predicting the effect of shape on thermodynamic properties, even in moderate-density fluids. Only the second virial coefficient is known analytically for general convex shapes, so higher-order equations of state have been elusive. Here we investigate high-precision state functions in the fluid phase of 14 representative polyhedra with different assembly behaviors. We discuss historic efforts in analytically approximating virial coefficients up to B4 and numerically evaluating them up to B8. Using virial coefficients as inputs, we show the convergence properties for four equations of state for hard convex bodies. In particular, the exponential approximant of Barlow et al. (J. Chem. Phys. 2012, 137, 204102) is found to be useful up to the first ordering transition for most polyhedra. The convergence behavior we explore can guide choices in expending additional resources for improved estimates. Fluids of arbitrary hard convex bodies are too complicated to be described in a general way at high densities, so the high-precision state data we provide can serve as a reference for future work in calculating state data or as a basis for thermodynamic integration.
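As an illustration of using virial coefficients as equation-of-state inputs (for hard spheres, the best-known special case, not the paper's polyhedra), a truncated virial series in packing fraction can be compared against the Carnahan-Starling closed form, which effectively resums it:

```python
def z_virial(eta, coeffs=(4.0, 10.0, 18.36)):
    # Truncated virial series for hard spheres in packing fraction eta:
    # Z = 1 + B2*eta + B3*eta^2 + B4*eta^3 (standard reduced coefficients).
    return 1.0 + sum(c * eta ** (n + 1) for n, c in enumerate(coeffs))

def z_carnahan_starling(eta):
    # Closed-form hard-sphere equation of state (resums the series).
    return (1.0 + eta + eta ** 2 - eta ** 3) / (1.0 - eta) ** 3

z_low = (z_virial(0.05), z_carnahan_starling(0.05))     # dilute: agree
z_mid = (z_virial(0.30), z_carnahan_starling(0.30))     # truncation lags
```

The growing gap at moderate density is the convergence problem the paper probes: for polyhedra no Carnahan-Starling analogue exists, so one must judge how far a truncated or resummed series can be trusted.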
Coriani, Sonia; Høst, Stinne; Jansík, Branislav; Thøgersen, Lea; Olsen, Jeppe; Jørgensen, Poul; Reine, Simen; Pawłowski, Filip; Helgaker, Trygve; Sałek, Paweł
2007-04-21
A linear-scaling implementation of Hartree-Fock and Kohn-Sham self-consistent field theories for the calculation of frequency-dependent molecular response properties and excitation energies is presented, based on a nonredundant exponential parametrization of the one-electron density matrix in the atomic-orbital basis, avoiding the use of canonical orbitals. The response equations are solved iteratively, by an atomic-orbital subspace method equivalent to that of molecular-orbital theory. Important features of the subspace method are the use of paired trial vectors (to preserve the algebraic structure of the response equations), a nondiagonal preconditioner (for rapid convergence), and the generation of good initial guesses (for robust solution). As a result, the performance of the iterative method is the same as in canonical molecular-orbital theory, with five to ten iterations needed for convergence. As in traditional direct Hartree-Fock and Kohn-Sham theories, the calculations are dominated by the construction of the effective Fock/Kohn-Sham matrix, once in each iteration. Linear complexity is achieved by using sparse-matrix algebra, as illustrated in calculations of excitation energies and frequency-dependent polarizabilities of polyalanine peptides containing up to 1400 atoms.
NASA Astrophysics Data System (ADS)
Dudar, O. I.; Dudar, E. S.
2017-11-01
The features of applying the one-dimensional (1D) finite element method (FEM) in combination with the laminar solutions method (LSM) to the calculation of underground ventilating networks are considered. In this case the processes of heat and mass transfer change the properties of the fluid (a binary vapour-air mix). Under the action of gravitational forces this leads to such phenomena as natural draft, local circulation, etc. The FEM relations considering the action of gravity, the mass conservation law, and the dependence of the vapour-air mix properties on the thermodynamic parameters are derived so as to allow one to model the mentioned phenomena. The analogy of the elastic and plastic rod deformation processes to the processes of laminar and turbulent flow in a pipe is described. Owing to this analogy, the guaranteed convergence of the elastic solutions method for materials of plastic type implies the guaranteed convergence of the LSM for any regime of turbulent flow in a rough pipe. By means of numerical experiments the convergence rate of the FEM-LSM is investigated. This convergence rate proved to be much higher than that of the Cross-Andriyashev method. Data of other authors comparing the convergence rates of the finite element method, the Newton method and the gradient method are provided. These data allow one to conclude that the FEM in combination with the LSM is one of the most effective methods for the calculation of hydraulic and ventilating networks. The FEM-LSM has been used to create the research application programme package “MineClimate”, which allows one to calculate the microclimate parameters in underground ventilating networks.
NASA Astrophysics Data System (ADS)
Vernant, P.; Bilham, R.; Szeliga, W.; Drupka, D.; Kalita, S.; Bhattacharyya, A. K.; Gaur, V. K.; Pelgay, P.; Cattin, R.; Berthet, T.
2014-08-01
GPS data reveal that the Brahmaputra Valley has broken from the Indian Plate and rotates clockwise relative to India about a point a few hundred kilometers west of the Shillong Plateau. The GPS velocity vectors define two distinct blocks separated by the Kopili fault, upon which 2-3 mm/yr of dextral slip is observed: the Shillong block between longitudes 89 and 93°E rotating clockwise at 1.15°/Myr and the Assam block from 93.5°E to 97°E rotating at ≈1.13°/Myr. These two blocks are more than 120 km wide in a north-south sense, but they extend locally a similar distance beneath the Himalaya and Tibet. A result of these rotations is that convergence across the Himalaya east of Sikkim decreases in velocity eastward from 18 to ≈12 mm/yr and convergence between the Shillong Plateau and Bangladesh across the Dauki fault increases from 3 mm/yr in the west to >8 mm/yr in the east. This fast convergence rate is inconsistent with inferred geological uplift rates on the plateau (if a 45°N dip is assumed for the Dauki fault) unless clockwise rotation of the Shillong block has increased substantially in the past 4-8 Myr. Such acceleration is consistent with the reported recent slowing in the convergence rate across the Bhutan Himalaya. The current slip potential near Bhutan, based on present-day convergence rates and assuming no great earthquake since 1713 A.D., is now ~5.4 m, similar to the slip recorded by offset alluvial terraces across the Main Himalayan Thrust and sufficient to sustain a Mw ≥ 8.0 earthquake in this area.
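Block rotation rates translate into surface velocities through the standard Euler-pole relation v = ω × r; the pole position below is an assumed illustrative value ("a few hundred kilometers west of the Shillong Plateau"), not a coordinate from the paper:

```python
import math
import numpy as np

EARTH_R_KM = 6371.0

def surface_speed_mm_yr(pole_lat, pole_lon, rate_deg_per_myr, lat, lon):
    # |v| = |omega x r| for a rigid block rotating about an Euler pole.
    def unit(lat_d, lon_d):
        la, lo = math.radians(lat_d), math.radians(lon_d)
        return np.array([math.cos(la) * math.cos(lo),
                         math.cos(la) * math.sin(lo),
                         math.sin(la)])
    omega = math.radians(rate_deg_per_myr) * unit(pole_lat, pole_lon)  # rad/Myr
    r = EARTH_R_KM * unit(lat, lon)                                    # km
    return float(np.linalg.norm(np.cross(omega, r)))  # km/Myr == mm/yr

# Illustrative pole west of the Shillong Plateau, 1.15 deg/Myr rotation,
# evaluated at a point in the Brahmaputra Valley (both positions assumed).
v = surface_speed_mm_yr(25.0, 88.0, 1.15, 26.0, 92.0)
```

Velocities grow with angular distance from the pole, which is why a ~1°/Myr rotation yields only a few mm/yr of relative motion near the pole (1 km/Myr equals 1 mm/yr, so no unit conversion is needed).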
Rapid growth of seed black holes in the early universe by supra-exponential accretion.
Alexander, Tal; Natarajan, Priyamvada
2014-09-12
Mass accretion by black holes (BHs) is typically capped at the Eddington rate, when radiation's push balances gravity's pull. However, even exponential growth at the Eddington-limited e-folding time t(E) ~ few × 0.01 billion years is too slow to grow stellar-mass BH seeds into the supermassive luminous quasars that are observed when the universe is 1 billion years old. We propose a dynamical mechanism that can trigger supra-exponential accretion in the early universe, when a BH seed is bound in a star cluster fed by the ubiquitous dense cold gas flows. The high gas opacity traps the accretion radiation, while the low-mass BH's random motions suppress the formation of a slowly draining accretion disk. Supra-exponential growth can thus explain the puzzling emergence of supermassive BHs that power luminous quasars so soon after the Big Bang. Copyright © 2014, American Association for the Advancement of Science.
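The timing argument can be checked with one line of arithmetic: Eddington-limited growth follows M(t) = M_seed·e^(t/t_E); a 10-solar-mass seed and t_E = 0.05 Gyr are illustrative round numbers within the range the abstract quotes:

```python
import math

# Eddington-limited growth: M(t) = M_seed * exp(t / t_E).
m_seed = 10.0        # solar masses (illustrative stellar-mass seed)
m_quasar = 1.0e9     # solar masses (illustrative luminous-quasar BH)
t_e = 0.05           # Gyr; an e-folding time in the quoted "few x 0.01" range
efolds = math.log(m_quasar / m_seed)
t_needed = efolds * t_e   # Gyr of uninterrupted Eddington accretion
```

About 18.4 e-folds, i.e. roughly 0.9 Gyr of continuous Eddington-rate accretion with no allowance for delays or duty cycles, essentially the entire time available; this is why supra-exponential phases are attractive.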
Cai, Zuowei; Huang, Lihong; Zhang, Lingling
2015-05-01
This paper investigates the problem of exponential synchronization of time-varying delayed neural networks with discontinuous neuron activations. Under the extended Filippov differential inclusion framework, by designing discontinuous state-feedback controller and using some analytic techniques, new testable algebraic criteria are obtained to realize two different kinds of global exponential synchronization of the drive-response system. Moreover, we give the estimated rate of exponential synchronization which depends on the delays and system parameters. The obtained results extend some previous works on synchronization of delayed neural networks not only with continuous activations but also with discontinuous activations. Finally, numerical examples are provided to show the correctness of our analysis via computer simulations. Our method and theoretical results have a leading significance in the design of synchronized neural network circuits involving discontinuous factors and time-varying delays. Copyright © 2015 Elsevier Ltd. All rights reserved.
Galland, Paul
2002-09-01
The quantitative relation between gravitropism and phototropism was analyzed for light-grown coleoptiles of Avena sativa (L.). With respect to gravitropism the coleoptiles obeyed the sine law. To study the interaction between light and gravity, coleoptiles were inclined at variable angles and irradiated for 7 h with unilateral blue light (466 nm) impinging at right angles relative to the axis of the coleoptile. The phototropic stimulus was applied from the side opposite to the direction of gravitropic bending. The fluence rate that was required to counteract the negative gravitropism increased exponentially with the sine of the inclination angle. To achieve balance, a linear increase in the gravitropic stimulus required compensation by an exponential increase in the counteracting phototropic stimulus. The establishment of photogravitropic equilibrium during continuous unilateral irradiation is thus determined by two different laws: the well-known sine law for gravitropism and a novel exponential law for phototropism described in this work.
ERIC Educational Resources Information Center
Chermahini, Soghra Akbari; Hommel, Bernhard
2010-01-01
Human creativity has been claimed to rely on the neurotransmitter dopamine, but evidence is still sparse. We studied whether individual performance (N=117) in divergent thinking (alternative uses task) and convergent thinking (remote association task) can be predicted by the individual spontaneous eye blink rate (EBR), a clinical marker of…
Burns, G Leonard; Walsh, James A; Servera, Mateu; Lorenzo-Seva, Urbano; Cardo, Esther; Rodríguez-Fornells, Antoni
2013-01-01
Exploratory structural equation modeling (SEM) was applied to a multiple indicator (26 individual symptom ratings) by multitrait (ADHD-IN, ADHD-HI and ODD factors) by multiple source (mothers, fathers and teachers) model to test the invariance, convergent and discriminant validity of the Child and Adolescent Disruptive Behavior Inventory with 872 Thai adolescents and the ADHD Rating Scale-IV and ODD scale of the Disruptive Behavior Inventory with 1,749 Spanish children. Most of the individual ADHD/ODD symptoms showed convergent and discriminant validity with the loadings and thresholds being invariant over mothers, fathers and teachers in both samples (the three latent factor means were higher for parents than teachers). The ADHD-IN, ADHD-HI and ODD latent factors demonstrated convergent and discriminant validity between mothers and fathers within the two samples. Convergent and discriminant validity between parents and teachers for the three factors was either absent (Thai sample) or only partial (Spanish sample). The application of exploratory SEM to a multiple indicator by multitrait by multisource model should prove useful for the evaluation of the construct validity of the forthcoming DSM-V ADHD/ODD rating scales.
Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems
NASA Astrophysics Data System (ADS)
Mahdi Alavi, S. M.; Saif, Mehrdad
2013-12-01
This paper focuses on the design of the standard observer in discrete-time nonlinear stochastic systems subject to random data loss. Under the assumption that the system response is incrementally bounded, two sufficient conditions are derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring a Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed using an experimental test-bed fabricated for evaluating estimation techniques over wireless networks under realistic radio channel conditions.
Efficiency of quantum vs. classical annealing in nonconvex learning problems
Zecchina, Riccardo
2018-01-01
Quantum annealers aim at solving nonconvex optimization problems by exploiting cooperative tunneling effects to escape local minima. The underlying idea consists of designing a classical energy function whose ground states are the sought optimal solutions of the original optimization problem and adding a controllable quantum transverse field to generate tunneling processes. A key challenge is to identify classes of nonconvex optimization problems for which quantum annealing remains efficient while thermal annealing fails. We show that this happens for a wide class of problems which are central to machine learning. Their energy landscapes are dominated by local minima that cause exponential slowdown of classical thermal annealers, while simulated quantum annealing converges efficiently to rare dense regions of optimal solutions. PMID:29382764
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodwin, D. L.; Kuprov, Ilya, E-mail: i.kuprov@soton.ac.uk
Quadratic convergence throughout the active space is achieved for the gradient ascent pulse engineering (GRAPE) family of quantum optimal control algorithms. We demonstrate in this communication that the Hessian of the GRAPE fidelity functional is unusually cheap, having the same asymptotic complexity scaling as the functional itself. This leads to the possibility of using very efficient numerical optimization techniques. In particular, the Newton-Raphson method with a rational function optimization (RFO) regularized Hessian is shown in this work to require fewer system trajectory evaluations than any other algorithm in the GRAPE family. This communication describes algebraic and numerical implementation aspects (matrix exponential recycling, Hessian regularization, etc.) for the RFO Newton-Raphson version of GRAPE and reports benchmarks for common spin state control problems in magnetic resonance spectroscopy.
Leader-following control of multiple nonholonomic systems over directed communication graphs
NASA Astrophysics Data System (ADS)
Dong, Wenjie; Djapic, Vladimir
2016-06-01
This paper considers the leader-following control problem for multiple nonlinear systems with directed communication topology and a leader. If the state of each system is measurable, distributed state feedback controllers are proposed using neighbours' state information, with the aid of Lyapunov techniques and properties of the Laplacian matrix, for both time-invariant and time-varying communication graphs. It is shown that the state of each system exponentially converges to the state of the leader. If the state of each system is not measurable, distributed observer-based output feedback control laws are proposed. As an application of the proposed results, formation control of wheeled mobile robots is studied. The simulation results show the effectiveness of the proposed approach.
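A sketch of the linear single-integrator analogue (the paper treats nonholonomic dynamics and output feedback, which are not reproduced here): each follower runs xdot_i = −Σ_j a_ij(x_i − x_j) − b_i(x_i − x_0) over a directed graph in which the leader is reachable from every follower, and the tracking error decays exponentially:

```python
import numpy as np

# Directed follower graph: a_ij = 1 means follower i listens to follower j.
A = np.array([[0.0, 1.0, 0.0],    # follower 0 listens to follower 1
              [0.0, 0.0, 1.0],    # follower 1 listens to follower 2
              [0.0, 0.0, 0.0]])
b = np.array([0.0, 0.0, 1.0])     # only follower 2 hears the leader
x0 = 5.0                          # static leader state
x = np.array([0.0, 1.0, 2.0])     # follower initial states
dt = 0.01
for _ in range(5000):
    # Euler step of xdot_i = -sum_j a_ij*(x_i - x_j) - b_i*(x_i - x0).
    coupling = A @ x - A.sum(axis=1) * x
    x = x + dt * (coupling - b * (x - x0))
```

The chain graph gives a directed spanning tree rooted at the leader, so the matrix L + B is positive stable and all follower states converge exponentially to the leader's state.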
Complete stability of delayed recurrent neural networks with Gaussian activation functions.
Liu, Peng; Zeng, Zhigang; Wang, Jun
2017-01-01
This paper addresses the complete stability of delayed recurrent neural networks with Gaussian activation functions. By means of the geometrical properties of the Gaussian function and algebraic properties of the nonsingular M-matrix, some sufficient conditions are obtained to ensure that an n-neuron neural network has exactly 3^k equilibrium points for some 0 ≤ k ≤ n, among which 2^k equilibrium points are locally exponentially stable and 3^k − 2^k are unstable. Moreover, it is concluded that all the states converge to one of the equilibrium points; i.e., the neural networks are completely stable. The derived conditions herein can be easily tested. Finally, a numerical example is given to illustrate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Physical and ecological controllers of the microbial responses to drying and rewetting in soil
NASA Astrophysics Data System (ADS)
Leizeaga, Ainara; Meisner, Annelein; Bååth, Erland; Rousk, Johannes
2017-04-01
Soil moisture is one of the most powerful factors that regulate microbial activity in soil. Variation in moisture leads to drying-rewetting (DRW) events, which are known to induce enormous dynamics in soil biogeochemistry; however, the microbial underpinnings are mostly unknown. Rewetting a dry soil can result in two response patterns of bacterial growth. In the Type 1 response, bacteria start growing immediately after rewetting, with rates that increase in a linear fashion to converge with those prior to the DRW within hours. This growth response coincides with respiration rates that peak immediately after rewetting and then decrease exponentially. In the Type 2 response, bacterial growth remains very low after rewetting during a lag period of up to 20 hours. Bacteria then increase their growth rates exponentially to much higher rates than those before the DRW event. This growth response coincides with respiration rates that increase to high rates immediately after rewetting, remain elevated, and sometimes even increase further in sync with the growth increase. Previous studies have shown that (i) extended drying, (ii) starving before DRW, and (iii) inhibitors combined with drought could change the bacterial response from Type 1 to Type 2. This suggested that the response of bacteria upon rewetting could be related to the harshness of the disturbance as experienced by the microbes. In the present study, we set out to test whether reduced harshness could change a Type 2 response into a Type 1 response. We hypothesized that (1) a reduced physical harshness of drying and (2) induced tolerance to drying in microbial communities could change a Type 2 response into a Type 1 growth response upon rewetting. To address this, two experiments were performed. First, soils were partially dried to different water contents and the bacterial response upon rewetting was measured.
Second, soils were exposed to repeated DRW cycles (< 9 cycles) and the bacterial response was followed after rewetting. Less harsh (partial) drying of a soil could change the growth response to rewetting. The lag period decreased with less complete drying, eventually becoming zero, marking a transition from a Type 2 to a Type 1 response. Even after a Type 1 response was induced, further reduction of harshness led to faster recovery of growth rates. Our results support the hypothesis: the physical harshness of drying can determine microbial survival and thus the type of bacterial growth response. Subjecting soil to repeated DRW cycles could also induce a change from a Type 2 to a Type 1 growth response, suggesting a community shift towards higher drought tolerance. Thus, an identical physical disturbance was less harsh for a community that had been subjected to more drying-rewetting cycles. To predict how the microbial community's control of the soil C budget of ecosystems is affected by warming-induced drought, our results demonstrate that both the physical characteristics of the disturbance and the community's tolerance to drought need to be considered.
Socio-Economic Instability and the Scaling of Energy Use with Population Size
DeLong, John P.; Burger, Oskar
2015-01-01
The size of the human population is relevant to the development of a sustainable world, yet the forces setting growth or declines in the human population are poorly understood. Generally, population growth rates depend on whether new individuals compete for the same energy (leading to Malthusian or density-dependent growth) or help to generate new energy (leading to exponential and super-exponential growth). It has been hypothesized that exponential and super-exponential growth in humans has resulted from carrying capacity, which is in part determined by energy availability, keeping pace with or exceeding the rate of population growth. We evaluated the relationship between energy use and population size for countries with long records of both and the world as a whole to assess whether energy yields are consistent with the idea of an increasing carrying capacity. We find that on average energy use has indeed kept pace with population size over long time periods. We also show, however, that the energy-population scaling exponent plummets during, and its temporal variability increases preceding, periods of social, political, technological, and environmental change. We suggest that efforts to increase the reliability of future energy yields may be essential for stabilizing both population growth and the global socio-economic system. PMID:26091499
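The energy-population scaling exponent discussed above is, in essence, the slope of a log-log regression of energy use on population size. A minimal sketch (with synthetic, hypothetical data, not the study's country records):

```python
import numpy as np

def scaling_exponent(population, energy):
    """OLS slope of log(energy) vs. log(population): beta in E ~ N**beta."""
    beta, _ = np.polyfit(np.log(population), np.log(energy), 1)
    return beta

# Synthetic illustration: energy use growing as N**1.2
N = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
E = N ** 1.2
print(round(scaling_exponent(N, E), 3))  # 1.2
```

A plummeting exponent in this framing means energy yields stop keeping pace with population growth.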
Convergence Rates of Finite Difference Stochastic Approximation Algorithms
2016-06-01
The Kiefer-Wolfowitz algorithm and the mirror descent algorithm are studied under various updating schemes using finite differences as gradient approximations. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the …
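A minimal sketch of the Kiefer-Wolfowitz finite-difference stochastic approximation named above, with a standard textbook choice of gain and difference-width sequences (an illustration of the basic scheme, not the report's accelerated variants):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x):
    # Noisy measurement of f(x) = (x - 3)^2; only noisy function values are available
    return (x - 3.0) ** 2 + 0.01 * rng.standard_normal()

def kiefer_wolfowitz(f, x0, n_iter=2000):
    """Kiefer-Wolfowitz SA: the gradient is replaced by a central finite
    difference of noisy function values, with gains a_n = 1/n and
    difference widths c_n = n**(-1/3)."""
    x = x0
    for n in range(1, n_iter + 1):
        a_n, c_n = 1.0 / n, n ** (-1.0 / 3.0)
        grad_est = (f(x + c_n) - f(x - c_n)) / (2.0 * c_n)
        x -= a_n * grad_est
    return x

x_star = kiefer_wolfowitz(noisy_f, x0=0.0)
print(abs(x_star - 3.0) < 0.5)  # True: iterates settle near the minimizer x = 3
```

The shrinking widths c_n trade bias against noise amplification, which is exactly the implementation choice the convergence analysis targets.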
Fast reconstruction of high-qubit-number quantum states via low-rate measurements
NASA Astrophysics Data System (ADS)
Li, K.; Zhang, J.; Cong, S.
2017-07-01
Due to the exponential complexity of the resources required by quantum state tomography (QST), people are interested in approaches towards identifying quantum states which require less effort and time. In this paper, we provide a tailored and efficient method for reconstructing mixed quantum states up to 12 (or even more) qubits from an incomplete set of observables subject to noise. Our method is applicable to any pure or nearly pure state ρ and can be extended to many states of interest in quantum information processing, such as a multiparticle entangled W state, Greenberger-Horne-Zeilinger states, and cluster states that are matrix product operators of low dimensions. The method applies the quantum density matrix constraints to a quantum compressive sensing optimization problem and exploits a modified quantum alternating direction multiplier method (quantum-ADMM) to accelerate the convergence. Our algorithm takes 8, 35, and 226 seconds, respectively, to reconstruct superposition-state density matrices of 10, 11, and 12 qubits with acceptable fidelity, using less than 1% of the expectation measurements. To our knowledge, it is the fastest reconstruction achieved on a normal desktop computer. We further discuss applications of this method using experimental data of mixed states obtained in an ion trap experiment of up to 8 qubits.
THE TURBULENT DYNAMO IN HIGHLY COMPRESSIBLE SUPERSONIC PLASMAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Federrath, Christoph; Schober, Jennifer; Bovino, Stefano
The turbulent dynamo may explain the origin of cosmic magnetism. While the exponential amplification of magnetic fields has been studied for incompressible gases, little is known about dynamo action in highly compressible, supersonic plasmas, such as the interstellar medium of galaxies and the early universe. Here we perform the first quantitative comparison of theoretical models of the dynamo growth rate and saturation level with three-dimensional magnetohydrodynamical simulations of supersonic turbulence with grid resolutions of up to 1024³ cells. We obtain numerical convergence and find that dynamo action occurs for both low and high magnetic Prandtl numbers Pm = ν/η = 0.1-10 (the ratio of viscous to magnetic dissipation), which had so far only been seen for Pm ≥ 1 in supersonic turbulence. We measure the critical magnetic Reynolds number, Rm_crit = 129 (+43/−31), showing that the compressible dynamo is almost as efficient as in incompressible gas. Considering the physical conditions of the present and early universe, we conclude that magnetic fields need to be taken into account during structure formation from the early to the present cosmic ages, because they suppress gas fragmentation and drive powerful jets and outflows, both greatly affecting the initial mass function of stars.
A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix
NASA Technical Reports Server (NTRS)
Shroff, Gautam
1989-01-01
A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm; certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
NASA Astrophysics Data System (ADS)
Kamimura, Atsushi; Kaneko, Kunihiko
2018-03-01
Explanation of exponential growth in self-reproduction is an important step toward elucidation of the origins of life because optimization of the growth potential across rounds of selection is necessary for Darwinian evolution. To produce another copy with approximately the same composition, the exponential growth rates for all components have to be equal. How such balanced growth is achieved, however, is not a trivial question, because this kind of growth requires orchestrated replication of the components in stochastic and nonlinear catalytic reactions. By considering a mutually catalyzing reaction in two- and three-dimensional lattices, as represented by a cellular automaton model, we show that self-reproduction with exponential growth is possible only when the replication and degradation of one molecular species is much slower than those of the others, i.e., when there is a minority molecule. Here, the synergetic effect of molecular discreteness and crowding is necessary to produce the exponential growth. Otherwise, the growth curves show superexponential growth because of nonlinearity of the catalytic reactions or subexponential growth due to replication inhibition by overcrowding of molecules. Our study emphasizes that the minority molecular species in a catalytic reaction network is necessary for exponential growth at the primitive stage of life.
Borelli, Jessica L; Palmer, Alexandra; Vanwoerden, Salome; Sharp, Carla
2017-12-13
Although convergence in parent-youth reports of adolescent psychopathology is critical for treatment planning, research documents a pervasive lack of agreement in ratings of adolescents' symptoms. Attachment insecurity (particularly disorganized attachment) and impoverished reflective functioning (RF) are 2 theoretically implicated predictors of low convergence that have not been examined in the literature. In a cross-sectional investigation of adolescents receiving inpatient psychiatric treatment, we examined whether disorganized attachment and low (adolescent and parent) RF were associated with patterns of convergence in adolescent internalizing and externalizing symptoms. Compared with organized adolescents, disorganized adolescents had lower parent-youth convergence in reports of their internalizing symptoms and higher convergence in reports of their externalizing symptoms; low adolescent self-focused RF was associated with low convergence in parent-adolescent reports of internalizing symptoms, whereas low adolescent global RF was associated with high convergence in parent-adolescent reports of externalizing symptoms. Among adolescents receiving inpatient psychiatric treatment, disorganized attachment and lower RF were associated with weaker internalizing symptom convergence and greater externalizing symptom convergence, which, if replicated, could inform assessment strategies and treatment planning in this setting.
On conforming mixed finite element methods for incompressible viscous flow problems
NASA Technical Reports Server (NTRS)
Gunzburger, M. D; Nicolaides, R. A.; Peterson, J. S.
1982-01-01
The application of conforming mixed finite element methods to obtain approximate solutions of linearized Navier-Stokes equations is examined. Attention is given to the convergence rates of various finite element approximations of the pressure and the velocity field. The optimality of the convergence rates is addressed by comparing the approximation's convergence to a smooth solution with the best approximation available in the finite element space used. Consideration is also devoted to techniques for efficient use of a Gaussian elimination algorithm to obtain a solution to a system of linear algebraic equations derived by finite element discretizations of linear partial differential equations.
Investigation of non-Gaussian effects in the Brazilian option market
NASA Astrophysics Data System (ADS)
Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.
2018-04-01
An empirical study of the Brazilian option market is presented in light of three option pricing models, namely the Black-Scholes model, the exponential model, and a model based on a power law distribution, the so-called q-Gaussian or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed. Among these cases, however, the exponential model outperforms the q-Gaussian model 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above 90%.
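For reference, the Black-Scholes benchmark against which the exponential and q-Gaussian models are compared prices a European call as S·N(d1) − K·e^(−rT)·N(d2). A self-contained sketch with hypothetical parameters (not the study's market data):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function (avoids external dependencies)
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend-paying asset."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Hypothetical parameters: spot 100, strike 100, 1 year, 5% rate, 20% volatility
print(round(bs_call(100, 100, 1.0, 0.05, 0.2), 2))  # 10.45
```

The exponential and q-Gaussian alternatives replace the Gaussian return distribution implicit in this formula with heavier-tailed ones.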
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of the Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(−M−2)), and the associated jump of the kth derivative of f is approximated to within O(N^(−M−1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
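The Gibbs phenomenon exploited in the first step above can be illustrated with a plain Fourier partial sum of a square wave: the overshoot near the jump does not vanish as N grows, which is precisely what the singular basis functions are designed to remove. A minimal sketch:

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Fourier partial sum of the 2*pi-periodic square wave sgn(sin x):
    S_N(x) = (4/pi) * sum over odd k of sin(k x)/k."""
    k = np.arange(1, 2 * n_terms, 2)  # odd harmonics 1, 3, ..., 2*n_terms - 1
    return (4.0 / np.pi) * np.sum(np.sin(np.outer(x, k)) / k, axis=1)

# Sample finely near the discontinuity at x = 0, where the overshoot peak sits
x = np.linspace(1e-4, np.pi / 2, 20000)
peak = square_wave_partial_sum(x, 50).max()
# Gibbs phenomenon: the peak approaches (2/pi)*Si(pi) ~ 1.179 rather than 1,
# no matter how many terms are kept.
print(round(peak, 3))
```

Reconstruction schemes like the one in the abstract subtract a function carrying the same singularities, so the remaining Fourier series converges rapidly.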
Exploring the Biotic Pump Hypothesis along Non-linear Transects in Tropical South America
NASA Astrophysics Data System (ADS)
Molina, R.; Bettin, D. M.; Salazar, J. F.; Villegas, J. C.
2014-12-01
Forests might actively transport atmospheric moisture from the oceans, according to the biotic pump of atmospheric moisture (BiPAM) hypothesis. The BiPAM hypothesis appears to be supported by the fact that precipitation drops exponentially with distance from ocean along non-forested land transects, but not on their forested counterparts. Yet researchers have discussed the difficulty in defining proper transects for BiPAM studies. Previous studies calculate precipitation gradients either along linear transects maximizing distance to the ocean, or along polylines following specific atmospheric pathways (e.g., aerial rivers). In this study we analyzed precipitation gradients along curvilinear streamlines of wind in tropical South America. Wind streamlines were computed using long-term quarterly averages of meridional and zonal wind components from the ERA-Interim and NCEP/NCAR reanalyses. Total precipitation along streamlines was obtained from four data sources: TRMM, UDEL, ERA-Interim, and NCEP/NCAR. Precipitation on land versus distance from the ocean was analyzed along selected streamlines for each data source. As predicted by BiPAM, precipitation gradients did not decrease exponentially along streamlines in the vicinity of the Amazon forest, but dropped rapidly as distance from the forest increased. Remarkably, precipitation along streamlines in some areas outside the Amazon forest did not decrease exponentially either. This was possibly owing to convergence of moisture conveyed by low level jets (LLJs) in those areas (e.g., streamlines driven by the Caribbean and CHOCO jets on the Pacific coast of Colombia). Significantly, BiPAM held true even along long transects displaying strong sinuosity. In fact, the general conclusions of previous studies remain valid. Yet effects of LLJs on precipitation gradients need to be thoroughly considered in future BiPAM studies.
NASA Astrophysics Data System (ADS)
Zheng, Fu; Lou, Yidong; Gu, Shengfeng; Gong, Xiaopeng; Shi, Chuang
2017-10-01
During the past decades, precise point positioning (PPP) has proven to be a well-known positioning technique for centimeter- or decimeter-level accuracy. However, it needs a long convergence time to reach high-accuracy positioning, which limits the prospects of PPP, especially in real-time applications. The PPP convergence time is expected to be reducible by introducing high-quality external information, such as ionospheric or tropospheric corrections. In this study, several methods for modeling tropospheric wet delays over wide areas are investigated. A new, improved model is developed, applicable in real-time applications in China. Based on the GPT2w model, a modified parameter for the exponential decay of zenith wet delay with respect to height is introduced in the modeling of the real-time tropospheric delay. The accuracy of this tropospheric model and of the GPT2w model in different seasons is evaluated with cross-validation; the root mean square of the zenith troposphere delay (ZTD) is 1.2 and 3.6 cm on average, respectively. Moreover, this new model proves better than tropospheric modeling based on water-vapor scale height; it can accurately express tropospheric delays up to 10 km altitude, which potentially benefits many real-time applications. With the high-accuracy ZTD model, the augmented PPP convergence performance for the BeiDou navigation satellite system (BDS) and GPS is evaluated. It shows that the contribution of the high-quality ZTD model to PPP convergence performance depends on the constellation geometry. As the BDS constellation geometry is poorer than that of GPS, the improvement for BDS PPP is more significant than that for GPS PPP. Compared with standard real-time PPP, the convergence time is reduced by 2-7 and 20-50% for the augmented BDS PPP, while GPS PPP improves only by about 6 and 18% (on average), in horizontal and vertical directions, respectively. When GPS and BDS are combined, the geometry is greatly improved and is good enough to yield a reliable PPP solution, so the augmented PPP improves only insignificantly compared with standard PPP.
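The modified decay parameter described above rests on an exponential model of zenith wet delay versus height. A minimal sketch (the scale height used here is an illustrative value, not the paper's estimated parameter):

```python
import math

def zenith_wet_delay(h, zwd_ref, h_ref, decay_height):
    """Exponential decay of zenith wet delay with height, GPT2w-style:
    ZWD(h) = ZWD_ref * exp(-(h - h_ref) / H). Heights in metres; the decay
    scale H is the parameter such models tune."""
    return zwd_ref * math.exp(-(h - h_ref) / decay_height)

# Hypothetical: 0.20 m of wet delay at sea level, evaluated at 5 km altitude
# with an illustrative 2000 m decay scale
print(round(zenith_wet_delay(5000.0, 0.20, 0.0, 2000.0), 4))  # 0.0164
```

Getting this height dependence right is what lets the model remain accurate up to the 10 km altitudes mentioned in the abstract.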
Calculation of Rate Spectra from Noisy Time Series Data
Voelz, Vincent A.; Pande, Vijay S.
2011-01-01
As the resolution of experiments to measure folding kinetics continues to improve, it has become imperative to avoid bias that may come with fitting data to a predetermined mechanistic model. Towards this end, we present a rate spectrum approach to analyze timescales present in kinetic data. Computing rate spectra of noisy time series data via numerical discrete inverse Laplace transform is an ill-conditioned inverse problem, so a regularization procedure must be used to perform the calculation. Here, we show the results of different regularization procedures applied to noisy multi-exponential and stretched exponential time series, as well as data from time-resolved folding kinetics experiments. In each case, the rate spectrum method recapitulates the relevant distribution of timescales present in the data, with different priors on the rate amplitudes naturally corresponding to common biases toward simple phenomenological models. These results suggest an attractive alternative to the “Occam’s razor” philosophy of simply choosing models with the fewest number of relaxation rates. PMID:22095854
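A rate spectrum of the kind described above can be sketched as a regularized discrete inverse Laplace transform: expand the signal in a grid of candidate decay rates and stabilize the ill-conditioned least-squares problem with a regularizer, here Tikhonov (one of several priors such analyses compare; synthetic data and an illustrative regularization weight):

```python
import numpy as np

# Synthetic noisy decay with a single true rate of 2.0 (stand-in for kinetic data)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 200)
y = np.exp(-2.0 * t) + 0.01 * rng.standard_normal(t.size)

# Discrete inverse Laplace transform: y ~ A @ w with A[i, j] = exp(-k_j * t_i).
# The problem is ill-conditioned, so regularization is required; Tikhonov
# minimizes ||A w - y||^2 + lam*||w||^2, solved as an augmented LS system.
rates = np.logspace(-1, 1, 60)
A = np.exp(-np.outer(t, rates))
lam = 1e-2
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(rates.size)])
y_aug = np.concatenate([y, np.zeros(rates.size)])
w, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)  # w is the rate spectrum

mse = np.mean((A @ w - y) ** 2)
print(mse < 1e-3)  # the regularized spectrum still reproduces the data: True
```

Different priors on w (smoothness, sparsity, nonnegativity) correspond to the different biases toward simple phenomenological models discussed in the abstract.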
ERIC Educational Resources Information Center
Xie, Tongwei
2011-01-01
Purpose: This article aims to analyze inter-provincial disparities of rural education and the convergence rate, and to discuss the effects of compulsory education reform after 2001. Design/methodology/approach: The article estimates the rural average education years and education Gini coefficients of China's 31 provinces (municipalities) beside…
Numerical Computation of Subsonic Conical Diffuser Flows with Nonuniform Turbulent Inlet Conditions
1977-09-01
The discretized equations can be solved in several ways; for simplicity, a standard Gauss-Seidel point iteration method is used to obtain the solution. Factors affecting the rate of convergence of the point iteration method, and the advantages of the Gauss-Seidel point iteration method, are discussed.
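A minimal sketch of the Gauss-Seidel point iteration named above, applied to a small diagonally dominant system (a hypothetical example, not the report's diffuser equations):

```python
import numpy as np

def gauss_seidel(A, b, n_iter=100):
    """Gauss-Seidel point iteration: sweep through the unknowns, using each
    updated value immediately within the same sweep. Converges for
    diagonally dominant (or symmetric positive definite) A."""
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for _ in range(n_iter):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
    return x

# Diagonally dominant test system (hypothetical, for illustration)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([3.0, 2.0, 3.0])
x = gauss_seidel(A, b)
print(np.allclose(A @ x, b))  # True
```

Using updated values immediately (rather than from the previous sweep, as in Jacobi iteration) is what typically gives Gauss-Seidel its faster convergence rate.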
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, which requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides an extensive characterization of these biases for exponential fitting.
The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to non-linear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting non-linear models using least squares fitting (simple searches obtain approximately 10,000 references; this doesn't include those who use it but don't know they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; on the other hand, the downward gradient methods have a much wider domain of convergence, but converge extremely slowly near the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on.
Only those that are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. There have been ways found to use successive non-linear least squares fitting to obtain similarly unbiased results, but this procedure is justified by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE estimator for Poisson deviates that has convergence domains and rates comparable to the non-linear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure not to minimize the least squares measure, but the MLE for Poisson deviates.
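The idea of reusing the L-M machinery with the Poisson MLE measure can be sketched as follows, here via a Fisher-scoring Hessian with L-M-style damping on a synthetic photon-counting decay (an illustration of the approach, not the authors' exact implementation):

```python
import numpy as np

# Synthetic photon-counting data: Poisson counts from an exponential decay
rng = np.random.default_rng(2)
t = np.linspace(0.0, 5.0, 50)
counts = rng.poisson(1000.0 * np.exp(-1.3 * t))

def model_and_jac(theta):
    alpha, k = theta                  # model m(t) = exp(alpha - k*t); amplitude = e**alpha
    m = np.exp(alpha - k * t)
    J = np.column_stack([m, -t * m])  # dm/dalpha, dm/dk
    return m, J

def lm_poisson_mle(theta, n_iter=50, lam=1e-3):
    """L-M-style iteration minimizing the Poisson negative log-likelihood
    sum(m - n*log m) instead of least squares. The curvature matrix is the
    Fisher information J^T diag(1/m) J, with L-M diagonal damping."""
    for _ in range(n_iter):
        m, J = model_and_jac(theta)
        g = J.T @ (1.0 - counts / m)   # gradient of the Poisson NLL
        H = J.T @ (J / m[:, None])     # Fisher information (Gauss-Newton analogue)
        theta = theta + np.linalg.solve(H + lam * np.diag(np.diag(H)), -g)
    return theta

alpha_hat, k_hat = lm_poisson_mle(np.array([np.log(500.0), 0.5]))
print(round(k_hat, 2))  # close to the true decay rate 1.3
```

Swapping the least-squares weights for the Poisson ones is the only structural change to the usual L-M loop, which is why the convergence behavior carries over.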
Depressurization and two-phase flow of water containing high levels of dissolved nitrogen gas
NASA Technical Reports Server (NTRS)
Simoneau, R. J.
1981-01-01
Depressurization of water containing various concentrations of dissolved nitrogen gas was studied. In a nonflow depressurization experiment, water with very high nitrogen content was depressurized at rates from 0.09 to 0.50 MPa per second, and a metastable behavior that was a strong function of the depressurization rate was observed. Flow experiments were performed in an axisymmetric converging-diverging nozzle, a two-dimensional converging nozzle with glass sidewalls, and a sharp-edged orifice. The converging-diverging nozzle exhibited choked-flow behavior even at nitrogen concentration levels as low as 4 percent of the saturation level. The flow rates were independent of concentration level. Flow in the two-dimensional converging visual nozzle appeared to have a sufficient pressure drop at the throat to cause nitrogen to come out of solution, but choking occurred further downstream. The orifice-flow motion pictures showed considerable oscillation downstream of the orifice and parallel to the flow. Nitrogen bubbles appeared in the flow at back pressures as high as 3.28 MPa, and the level at which bubbles were no longer visible was a function of nitrogen concentration.
A Critical Study of Agglomerated Multigrid Methods for Diffusion
NASA Technical Reports Server (NTRS)
Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.
2011-01-01
Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Convergence rates of multigrid cycles are verified with quantitative analysis methods in which parts of the two-grid cycle are replaced by their idealized counterparts.
Simultaneous quaternion estimation (QUEST) and bias determination
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1989-01-01
Tests of a new method for the simultaneous estimation of spacecraft attitude and sensor biases, based on a quaternion estimation algorithm minimizing Wahba's loss function are presented. The new method is compared with a conventional batch least-squares differential correction algorithm. The estimates are based on data from strapdown gyros and star trackers, simulated with varying levels of Gaussian noise for both inertially-fixed and Earth-pointing reference attitudes. Both algorithms solve for the spacecraft attitude and the gyro drift rate biases. They converge to the same estimates at the same rate for inertially-fixed attitude, but the new algorithm converges more slowly than the differential correction for Earth-pointing attitude. The slower convergence of the new method for non-zero attitude rates is believed to be due to the use of an inadequate approximation for a partial derivative matrix. The new method requires about twice the computational effort of the differential correction. Improving the approximation for the partial derivative matrix in the new method is expected to improve its convergence at the cost of increased computational effort.
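Quaternion estimation from vector observations by minimizing Wahba's loss function is classically done with Davenport's q-method: the optimal quaternion is the dominant eigenvector of a 4×4 matrix built from the observations. A minimal sketch (a textbook baseline, not the paper's combined attitude-plus-bias estimator):

```python
import numpy as np

def quat_to_attitude(q):
    """Attitude matrix A(q), q = [vector; scalar], mapping reference to body frame."""
    rho, q4 = q[:3], q[3]
    rx = np.array([[0.0, -rho[2], rho[1]],
                   [rho[2], 0.0, -rho[0]],
                   [-rho[1], rho[0], 0.0]])
    return (q4**2 - rho @ rho) * np.eye(3) + 2 * np.outer(rho, rho) - 2 * q4 * rx

def q_method(body, ref, w):
    """Davenport's q-method: the quaternion minimizing Wahba's loss is the
    eigenvector of Davenport's K matrix with the largest eigenvalue."""
    B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, body, ref))
    sigma, S = np.trace(B), B + B.T
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3], K[:3, 3], K[3, :3], K[3, 3] = S - sigma * np.eye(3), z, z, sigma
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, np.argmax(vals)]

# Recover a known 90-degree rotation about z from two vector observations
A_true = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
ref = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
body = [A_true @ r for r in ref]
q = q_method(body, ref, w=[1.0, 1.0])
print(np.allclose(quat_to_attitude(q), A_true))  # True (q and -q give the same attitude)
```

Augmenting this state with gyro drift-rate biases, as the abstract describes, is what complicates the partial-derivative bookkeeping discussed there.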
NASA Astrophysics Data System (ADS)
Yao, Deyin; Lu, Renquan; Xu, Yong; Ren, Hongru
2017-10-01
In this paper, the sliding mode control problem of Markov jump systems (MJSs) with unmeasured state, partly unknown transition rates, and random sensor delays is investigated. In practical engineering control, exact information on the transition rates is hard to obtain, and the measurement channel is subject to random sensor delay. A Luenberger observer is designed to estimate the unmeasured system state, and an integral sliding mode surface is constructed to ensure the exponential stability of the MJSs. An observer-based sliding mode controller is proposed to drive the system state onto the sliding mode surface and render the sliding mode dynamics exponentially mean-square stable with an H∞ performance index. Finally, simulation results are provided to illustrate the effectiveness of the proposed results.
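The Luenberger observer at the core of such designs estimates the unmeasured state by feeding back the output estimation error. A minimal continuous-time sketch for a hypothetical two-state plant (no Markov jumps or sensor delays, which the paper additionally handles):

```python
import numpy as np

# Plant x' = A x, measurement y = C x (hypothetical stable 2-state example)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[8.0], [4.0]])  # observer gain placing error poles at -5 and -6

# Euler integration of plant and observer x_hat' = A x_hat + L (y - C x_hat)
dt, n_steps = 1e-3, 3000
x = np.array([1.0, 0.0])
x_hat = np.array([0.0, 1.0])  # deliberately wrong initial estimate
for _ in range(n_steps):
    y = C @ x
    x_new = x + dt * (A @ x)
    x_hat = x_hat + dt * (A @ x_hat + (L @ (y - C @ x_hat)))
    x = x_new

print(np.linalg.norm(x - x_hat) < 1e-3)  # estimation error has decayed: True
```

The error dynamics e' = (A − LC)e decay at a rate set by the chosen poles; the estimated state then feeds the sliding mode controller in place of the unmeasured one.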
Shiota, T; Jones, M; Teien, D E; Yamada, I; Passafini, A; Ge, S; Sahn, D J
1995-08-01
The aim of the present study was to investigate dynamic changes in the mitral regurgitant orifice using electromagnetic flow probes and flowmeters and the color Doppler flow convergence method. Methods for determining mitral regurgitant orifice areas have been described using flow convergence imaging with a hemispheric isovelocity surface assumption. However, the shape of flow convergence isovelocity surfaces depends on many factors that change during regurgitation. In seven sheep with surgically created mitral regurgitation, 18 hemodynamic states were studied. The aliasing distances of flow convergence were measured at 10 sequential points using two ranges of aliasing velocities (0.20 to 0.32 and 0.56 to 0.72 m/s), and instantaneous flow rates were calculated using the hemispheric assumption. Instantaneous regurgitant areas were determined from the regurgitant flow rates obtained from both electromagnetic flowmeters and flow convergence divided by the corresponding continuous wave velocities. The regurgitant orifice sizes obtained using the electromagnetic flow method usually increased to maximal size in early to midsystole and then decreased in late systole. Patterns of dynamic changes in orifice area obtained by flow convergence were not the same as those delineated by the electromagnetic flow method. Time-averaged regurgitant orifice areas obtained by flow convergence using lower aliasing velocities overestimated the areas obtained by the electromagnetic flow method ([mean +/- SD] 0.27 +/- 0.14 vs. 0.12 +/- 0.06 cm2, p < 0.001), whereas flow convergence, using higher aliasing velocities, estimated the reference areas more reliably (0.15 +/- 0.06 cm2). The electromagnetic flow method studies uniformly demonstrated dynamic change in mitral regurgitant orifice area and suggested limitations of the flow convergence method.
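The hemispheric flow-convergence calculation described above amounts to two lines: flow rate Q = 2πr²·v_a through a hemispheric isovelocity shell of radius r, and orifice area Q divided by the continuous wave jet velocity. A sketch with hypothetical values (not the study's measurements):

```python
import math

def hemispheric_flow_rate(r, aliasing_velocity):
    """Flow-convergence flow rate Q = 2*pi*r^2*v_a (m^3/s), assuming the
    isovelocity surface at radius r (m) is a hemisphere."""
    return 2.0 * math.pi * r**2 * aliasing_velocity

def regurgitant_orifice_area(flow_rate, cw_velocity):
    """Effective orifice area (m^2) = instantaneous flow rate / CW jet velocity."""
    return flow_rate / cw_velocity

# Hypothetical values: r = 0.5 cm, v_a = 0.6 m/s, CW peak velocity = 5 m/s
q = hemispheric_flow_rate(0.005, 0.6)
area_cm2 = regurgitant_orifice_area(q, 5.0) * 1e4
print(round(area_cm2, 3))  # 0.188
```

The r² dependence is why deviations of the isovelocity surface from a true hemisphere, which the study documents, translate directly into over- or underestimated orifice areas.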
Trends and determinants of weight gains among OECD countries: an ecological study.
Nghiem, S; Vu, X-B; Barnett, A
2018-06-01
Obesity has become a global issue with abundant evidence to indicate that the prevalence of obesity in many nations has increased over time. The literature also reports a strong association between obesity and economic development, but the trend that obesity growth rates may converge over time has not been examined. We propose a conceptual framework and conduct an ecological analysis on the relationship between economic development and weight gain. We also test the hypothesis that weight gain converges among countries over time and examine determinants of weight gains. This is a longitudinal study of 34 Organisation for Economic Cooperation and Development (OECD) countries in the years 1980-2008 using publicly available data. We apply a dynamic economic growth model to test the hypothesis that the rate of weight gains across countries may converge over time. We also investigate the determinants of weight gains using a longitudinal regression tree analysis. We do not find evidence that the growth rates of body weight across countries converged for all countries. However, there were groups of countries in which the growth rates of body weight converge, with five groups for males and seven groups for females. The predicted growth rates of body weight peak when gross domestic product (GDP) per capita reaches US$47,000 for males and US$37,000 for females in OECD countries. National levels of consumption of sugar, fat and alcohol were the most important contributors to national weight gains. National weight gains follow an inverse U-shape curve with economic development. Excessive calorie intake is the main contributor to weight gains. Copyright © 2018 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Xing, Yanyuan; Yan, Yubin
2018-03-01
Gao et al. [11] (2014) introduced a numerical scheme to approximate the Caputo fractional derivative with the convergence rate O(k^{3-α}), 0 < α < 1, where k is the time step size, by directly approximating the integer-order derivative with finite difference quotients in the definition of the Caputo fractional derivative; see also Lv and Xu [20] (2016). Under the assumption that the solution of the time fractional partial differential equation is sufficiently smooth, Lv and Xu [20] (2016) proved by the energy method that the corresponding numerical method for solving the time fractional partial differential equation has the convergence rate O(k^{3-α}), 0 < α < 1, uniformly with respect to the time variable t. However, in general the solution of the time fractional partial differential equation has low regularity, and in this case the numerical method fails to attain the convergence rate O(k^{3-α}), 0 < α < 1, uniformly with respect to the time variable t. In this paper, we first obtain an approximation scheme for the Riemann-Liouville fractional derivative with the convergence rate O(k^{3-α}), 0 < α < 1, similar to that in Gao et al. [11] (2014), by approximating the Hadamard finite-part integral with piecewise quadratic interpolation polynomials. Based on this scheme, we introduce a time discretization scheme to approximate the time fractional partial differential equation and show, using Laplace transform methods, that the time discretization scheme has the convergence rate O(k^{3-α}), 0 < α < 1, for any fixed t_n > 0 for smooth and nonsmooth data in both homogeneous and inhomogeneous cases. Numerical examples are given to show that the theoretical results are consistent with the numerical results.
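For orientation, the idea of approximating a fractional derivative through interpolation of the integrand can be illustrated with the classical L1 scheme, which uses piecewise linear (rather than the piecewise quadratic interpolation above) and therefore only attains O(k^{2-α}). This is a simpler stand-in for the family of schemes discussed, not the O(k^{3-α}) scheme analyzed in the paper.

```python
import math

# Classical L1 approximation of the Caputo derivative (piecewise linear
# interpolation, convergence rate O(k^{2-alpha})), verified against the exact
# Caputo derivative of u(t) = t^2, which is 2 t^{2-alpha} / Gamma(3-alpha).
def l1_caputo(u, alpha, k, n):
    """L1 approximation of the Caputo derivative of samples u at t_n = n*k."""
    acc = 0.0
    for j in range(n):
        b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)   # L1 weights
        acc += b * (u[n - j] - u[n - j - 1])
    return acc * k ** (-alpha) / math.gamma(2 - alpha)

def l1_error(alpha=0.5, n=200):
    """Error at t = 1 for the smooth test function u(t) = t^2."""
    k = 1.0 / n
    u = [(i * k) ** 2 for i in range(n + 1)]
    exact = 2.0 / math.gamma(3 - alpha)
    return abs(l1_caputo(u, alpha, k, n) - exact)
```

Halving the step size should reduce the error by roughly 2^{2-α} (about 2.83 for α = 0.5), which distinguishes this scheme from the higher-order one above.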
Li, Xiangrong; Zhao, Xupei; Duan, Xiabin; Wang, Xiaoliang
2015-01-01
It is generally acknowledged that the conjugate gradient (CG) method achieves global convergence--with at most a linear convergence rate--because CG formulas are generated by linear approximations of the objective functions. The quadratically convergent results are very limited. We introduce a new PRP method in which the restart strategy is also used. Moreover, the method we developed includes not only n-step quadratic convergence but also both the function value information and gradient value information. In this paper, we will show that the new PRP method (with either the Armijo line search or the Wolfe line search) is both linearly and quadratically convergent. The numerical experiments demonstrate that the new PRP algorithm is competitive with the normal CG method.
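For reference, the standard PRP method with an Armijo backtracking line search and a steepest-descent restart safeguard can be sketched as below; the test problem and the restart rule are illustrative assumptions, and the paper's specific modified PRP formula is not reproduced.

```python
import numpy as np

# Standard Polak-Ribiere-Polyak (PRP) conjugate gradient with Armijo line
# search, applied to a convex quadratic f(x) = 0.5 x^T Q x - b^T x.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b

def prp_cg(x, iters=100):
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-10:
            break
        t = 1.0                                   # Armijo backtracking line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)      # PRP formula
        d = -g_new + beta * d
        if d @ g_new >= 0:                        # restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x
```

The restart line is a common safeguard and loosely mirrors the restart strategy mentioned in the abstract, though the paper's restart condition may differ.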
Relative Motion of the Nazca (farallon) and South American Plates Since Late Cretaceous Time
NASA Astrophysics Data System (ADS)
Pardo-Casas, Federico; Molnar, Peter
1987-06-01
By combining reconstructions of the South American and African plates, the African and Antarctic plates, the Antarctic and Pacific plates, and the Pacific and Nazca plates, we calculated the relative positions and history of convergence of the Nazca and South American plates. Despite variations in convergence rates along the Andes, periods of rapid convergence (averaging more than 100 mm/a) between the times of anomalies 21 (49.5 Ma) and 18 (42 Ma) and since anomaly 7 (26 Ma) coincide with two phases of relatively intense tectonic activity in the Peruvian Andes, known as the late Eocene Incaic and Mio-Pliocene Quechua phases. The periods of relatively slow convergence (50 to 55 ± 30 mm/a at the latitude of Peru and less farther south) between the times of anomalies 30-31 (68.5 Ma) and 21 and between those of anomalies 13 (36 Ma) and 7 correlate with periods during which tectonic activity was relatively quiescent. Thus these reconstructions provide quantitative evidence for a correlation of the intensity of tectonic activity in the overriding plate at subduction zones with variations in the convergence rate.
NASA Astrophysics Data System (ADS)
Malekan, Mohammad; Barros, Felicio Bruzzi
2016-11-01
Using the locally enriched strategy to enrich a small/local part of the problem by the generalized/extended finite element method (G/XFEM) leads to a non-optimal convergence rate and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be chosen from polynomial, singular, branch or numerical types. The so-called stable version of the G/XFEM method provides a well-conditioned approach when only singular functions are used in the blending elements. This paper combines numerical enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment, along with a well-conditioned approach, stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering the convergence rate in strain energy, the growth rate of the condition number, and the computational cost. Numerical experiments show that using geometrical enrichment along with stable G/XFEM for the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, the results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and a poor convergence rate.
Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time
NASA Astrophysics Data System (ADS)
Himeoka, Yusuke; Kaneko, Kunihiko
2017-04-01
The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws for describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, for which quantitative laws or theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase that consist of autocatalytic chemical components, including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation follows the square root of the starvation time and is inversely related to the maximal growth rate. This is in agreement with experimental observations, in which the length of time of cell starvation is memorized in the slow accumulation of molecules. Moreover, the lag time distributed among cells is skewed with a long time tail. If the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.
Salas-Gonzalez, D; Górriz, J M; Ramírez, J; Padilla, P; Illán, I A
2013-01-01
A procedure to improve the convergence rate for affine registration methods of medical brain images when the images differ greatly from the template is presented. The methodology is based on a histogram matching of the source images with respect to the reference brain template before proceeding with the affine registration. The preprocessed source brain images are spatially normalized to a template using a general affine model with 12 parameters. A sum of squared differences between the source images and the template is considered as objective function, and a Gauss-Newton optimization algorithm is used to find the minimum of the cost function. Using histogram equalization as a preprocessing step improves the convergence rate in the affine registration algorithm of brain images as we show in this work using SPECT and PET brain images.
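The histogram-matching preprocessing step can be sketched with a standard CDF-mapping implementation; this is a generic version that assumes nothing about the SPECT/PET data or the registration code used in the study.

```python
import numpy as np

# Histogram matching: remap source-image intensities so their empirical CDF
# matches the template's CDF.  In the paper's pipeline this preprocessing is
# followed by a 12-parameter affine registration driven by Gauss-Newton
# minimization of a sum-of-squared-differences cost (not reproduced here).
def match_histogram(source, template):
    """Return source remapped so its intensity histogram matches template's."""
    src = source.ravel()
    s_values, s_idx, s_counts = np.unique(src, return_inverse=True,
                                          return_counts=True)
    t_values, t_counts = np.unique(template.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    t_cdf = np.cumsum(t_counts) / template.size
    # for each source quantile, pick the template intensity at the same quantile
    mapped = np.interp(s_cdf, t_cdf, t_values)
    return mapped[s_idx].reshape(source.shape)
```

Bringing the source intensities onto the template's scale in this way reduces the initial SSD mismatch, which is the plausible mechanism behind the faster Gauss-Newton convergence reported above.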
A statistical approach to quasi-extinction forecasting.
Holmes, Elizabeth Eli; Sabo, John L; Viscido, Steven Vincent; Fagan, William Fredric
2007-12-01
Forecasting population decline to a certain critical threshold (the quasi-extinction risk) is one of the central objectives of population viability analysis (PVA), and such predictions figure prominently in the decisions of major conservation organizations. In this paper, we argue that accurate forecasting of a population's quasi-extinction risk does not necessarily require knowledge of the underlying biological mechanisms. Because of the stochastic and multiplicative nature of population growth, the ensemble behaviour of population trajectories converges to common statistical forms across a wide variety of stochastic population processes. This paper provides a theoretical basis for this argument. We show that the quasi-extinction surfaces of a variety of complex stochastic population processes (including age-structured, density-dependent and spatially structured populations) can be modelled by a simple stochastic approximation: the stochastic exponential growth process overlaid with Gaussian errors. Using simulated and real data, we show that this model can be estimated with 20-30 years of data and can provide relatively unbiased quasi-extinction risk estimates with confidence intervals considerably smaller than (0,1). This was found to be true even for simulated data derived from some of the noisiest population processes (density-dependent feedback, species interactions and strong age-structure cycling). A key advantage of statistical models is that their parameters and the uncertainty of those parameters can be estimated from time series data using standard statistical methods. In contrast, for most species of conservation concern, biologically realistic models must often be specified rather than estimated because of the limited data available for all the various parameters. 
Biologically realistic models will always have a prominent place in PVA for evaluating specific management options which affect a single segment of a population, a single demographic rate, or different geographic areas. However, for forecasting quasi-extinction risk, statistical models that are based on the convergent statistical properties of population processes offer many advantages over biologically realistic models.
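The diffusion-approximation approach described above can be sketched as follows. The simulation parameters and threshold are illustrative, and the quasi-extinction formula exp(-2μd/σ²) for positive estimated drift is the standard first-passage result for Brownian motion with drift, not a re-derivation from the paper.

```python
import numpy as np

# Diffusion-approximation PVA sketch: model log abundance as a random walk with
# drift (stochastic exponential growth plus Gaussian noise), estimate mu and
# sigma^2 from a 30-year census, and compute the probability of ever falling
# below a quasi-extinction threshold, where d is the current log distance to
# the threshold.
rng = np.random.default_rng(42)

def simulate_counts(n_years=30, mu=0.02, sigma=0.1, log_n0=5.0):
    """Synthetic census: exponential growth with multiplicative noise."""
    steps = rng.normal(mu, sigma, n_years)
    return np.exp(log_n0 + np.cumsum(steps))

def quasi_extinction_prob(counts, threshold):
    diffs = np.diff(np.log(counts))
    mu_hat = diffs.mean()                 # estimated drift of log abundance
    var_hat = diffs.var(ddof=1)           # estimated process variance
    d = np.log(counts[-1] / threshold)    # log distance to the threshold
    if mu_hat <= 0:
        return 1.0                        # declining population: certain passage
    return float(np.exp(-2.0 * mu_hat * d / var_hat))
```

The appeal, as the abstract argues, is that both parameters and their uncertainty come straight from the census time series.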
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patrício, João, E-mail: joao.patricio@chalmers.se; Kalmykova, Yuliya; Berg, Per E.O.
2015-05-15
Highlights: • The developed MFA method was validated against national statistics. • An exponential increase in EEE sales leads to an increase in integrated battery consumption. • Digital convergence is likely a cause of the decline in primary battery consumption. • Factors for estimating integrated batteries in EEE are provided. • Sweden reached the collection rates defined by the European Union. - Abstract: In this article, a new method based on Material Flow Accounting is proposed to study detailed material flows in battery consumption that can be replicated for other countries. The method uses regularly available statistics on the import, industrial production and export of batteries and battery-containing electric and electronic equipment (EEE). To enable use of the method by scholars without access to such data, empirical results and their trends over time for the occurrence of different battery types among the EEE types are provided. The information provided by the method can be used to identify drivers of battery consumption and to study the dynamic behavior of battery flows due to technology development, policies, consumer behavior and infrastructure. The method is exemplified by a study of battery flows in Sweden for the years 1996-2013. The batteries were accounted for, both in units and in weight, as primary and secondary batteries; loose and integrated; by electrochemical composition; and by share of battery use between different types of EEE. Results show that, despite a fivefold increase in the consumption of rechargeable batteries, they account for only about 14% of the total use of portable batteries. The recent increase in digital convergence has resulted in a sharp decline in the consumption of primary batteries, which has now stabilized at a fairly low level. Conversely, the consumption of integrated batteries has increased sharply. 
In 2013, 61% of the total weight of batteries sold in Sweden was collected, and for the particular case of alkaline manganese dioxide batteries, the value reached 74%.
NASA Astrophysics Data System (ADS)
Sagiya, T.
2004-12-01
Starting on June 26, 2000, unprecedented seismic activity occurred around the Miyake-jima, Kohzu-shima, and Nii-jima Islands in the northern Izu islands. This seismic swarm activity was initiated by volcanic magma intrusion beneath the Miyake-jima volcano. An intrusion of massive (about 1 km3) magma caused the seismic swarm activity and significant crustal deformation in the surrounding area within about 200 km of the source region. After the seismic swarm activity calmed down, we detected a change in crustal displacement rates in the southern Kanto region from daily coordinate solutions of the continuous GPS network. Interestingly, the change appears mostly in the E-W components. Comparison of GPS velocity data for two time periods (1996-2000 and 2001-2002) indicates that the westward displacement rate decreased by about 25% (from 23 mm/yr to 17 mm/yr) at Tateyama, the southern tip of the Boso Peninsula. On the other hand, we do not see significant changes in the N-S and vertical components. Continuous monitoring of crustal displacements with GPS has revealed that the post-swarm deformation is now returning to the pre-swarm steady state. That is, the time series of the E-W component show transient curves converging to the original steady state. The transient curve can be equally well reproduced by an exponential decay or a logarithmic function. The relaxation time for the exponential curve is estimated at about 3 years. One possible explanation for this transient deformation is viscoelastic relaxation. Since the Izu Islands are situated on the oceanic Philippine Sea plate, the upper mantle with a low viscosity would respond to the huge stress change caused by the magma intrusion. The other possibility is a change of frictional properties on the plate interface between the Philippine Sea and the Pacific plates. Under the southern Kanto area, the subducted Philippine Sea slab leans on the subducted Pacific slab. 
Interaction between these two oceanic plates is still not well understood. But the massive dyke intrusion strongly pushed the subducted Philippine Sea slab, changing the frictional state at the bottom of the Philippine Sea plate. Since the motion of the Pacific plate subduction is nearly westward, this idea can explain the observation that only the E-W components are affected.
Simulation evaluation of a speed-guidance law for Harrier approach transitions
NASA Technical Reports Server (NTRS)
Merrick, Vernon K.; Moralez, Ernesto; Stortz, Michael W.; Hardy, Gordon H.; Gerdes, Ronald M.
1991-01-01
An exponential-deceleration speed guidance law is formulated which mimics the technique currently used by Harrier pilots to perform decelerating approaches to a hover. This guidance law was tested, along with an existing two-step constant-deceleration speed guidance law, using a fixed-base piloted simulator programmed to represent a YAV-8B Harrier. Decelerating approaches to a hover at a predetermined station-keeping point were performed along a straight (-3 deg glideslope) path in headwinds up to 40 knots and turbulence up to 6 ft/sec. Visibility was fixed at one-quarter nautical mile with a 100-ft cloud ceiling. Three Harrier pilots participated in the experiment. Handling qualities with the aircraft equipped with the standard YAV-8B rate-damped attitude stability augmentation system were adequate (level 2) using either speed guidance law. However, the exponential-deceleration speed guidance law was rated superior to the constant-deceleration speed guidance law by a Cooper-Harper handling qualities rating of about one unit, independent of the level of wind and turbulence. Replacing the attitude control system of the YAV-8B with a high-fidelity model-following attitude flight controller increased the approach accuracy and reduced the pilot workload. With one minor exception, the handling qualities for the approach were rated satisfactory (level 1). It is concluded that the exponential-deceleration speed guidance law is the most cost effective.
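An exponential-deceleration law can be illustrated with a point-mass sketch in which commanded speed is proportional to range-to-go, so that speed and distance both decay with time constant τ. The gains and initial conditions below are assumptions for illustration, not the YAV-8B guidance-law parameters.

```python
# Exponential-deceleration speed schedule sketch: commanding V = x / tau makes
# both range-to-go and speed decay exponentially, so deceleration is always
# proportional to current speed, blending smoothly into the hover.
def simulate_approach(x0=2000.0, tau=20.0, dt=0.1, t_end=120.0):
    """Integrate range-to-go x under speed command V = x/tau; returns histories."""
    xs, vs = [], []
    x = x0
    t = 0.0
    while t < t_end:
        v = x / tau          # guidance law: speed proportional to distance to go
        x -= v * dt          # close on the hover point at the commanded speed
        xs.append(x)
        vs.append(v)
        t += dt
    return xs, vs
```

By contrast, a two-step constant-deceleration profile has discontinuities in commanded deceleration, which is one hedged reading of why pilots rated the exponential law about one Cooper-Harper unit better.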
The Mass-dependent Star Formation Histories of Disk Galaxies: Infall Model Versus Observations
NASA Astrophysics Data System (ADS)
Chang, R. X.; Hou, J. L.; Shen, S. Y.; Shu, C. G.
2010-10-01
We introduce a simple model to explore the star formation histories of disk galaxies. We assume that the disk originates and grows by continuous gas infall. The gas infall rate is parameterized by a Gaussian formula with one free parameter, the infall-peak time tp. The Kennicutt star formation law is adopted to describe how much cold gas turns into stars. The gas outflow process is also considered in our model. We find that, at a given galactic stellar mass M*, a model adopting a late infall-peak time tp results in blue colors, low metallicity, a high specific star formation rate (SFR), and a high gas fraction, while the gas outflow rate mainly influences the gas-phase metallicity and the star formation efficiency mainly influences the gas fraction. Motivated by the local observed scaling relations, we "construct" a mass-dependent model by assuming that low-mass galaxies have a later infall-peak time tp and a larger gas outflow rate than massive systems. It is shown that this model agrees not only with the local observations, but also with the observed correlation between specific SFR and galactic stellar mass (SFR/M* versus M*) at intermediate redshifts z < 1. A comparison between the Gaussian-infall model and the exponential-infall model is also presented. It shows that the exponential-infall model predicts a higher SFR at early stages and a lower SFR at later times than the Gaussian-infall model. Our results suggest that the Gaussian infall rate may be more reasonable in describing the gas cooling process than the exponential infall rate, especially for low-mass systems.
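The model ingredients described above can be sketched as a toy one-zone calculation combining Gaussian infall, a Kennicutt-type power law, and outflow proportional to the SFR. Every coefficient below is an illustrative assumption, not a fitted value from the paper.

```python
import math

# Toy one-zone disk model: gas infall with a Gaussian rate peaking at t_p,
# star formation following a Kennicutt-type power law (SFR ~ M_gas^1.4), and
# an outflow proportional to the SFR.  All units are arbitrary.
def evolve_disk(t_p=5.0, sigma=2.0, eta=0.5, k_sf=0.05, dt=0.01, t_end=13.0):
    """Euler integration; returns (M_gas, M_star) at t_end."""
    m_gas, m_star = 0.0, 0.0
    t = 0.0
    while t < t_end:
        infall = math.exp(-0.5 * ((t - t_p) / sigma) ** 2)  # Gaussian infall rate
        sfr = k_sf * m_gas ** 1.4                           # Kennicutt-type law
        m_gas += (infall - sfr - eta * sfr) * dt            # outflow = eta * SFR
        m_star += sfr * dt
        t += dt
    return m_gas, m_star
```

Consistent with the abstract, a later infall peak leaves the system gas-rich and under-evolved (fewer stars formed by the present day), which is the mechanism behind the blue colors and high specific SFR of the late-tp models.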
Park, Jinkyu; McCormick, Sean P.; Chakrabarti, Mrinmoy; Lindahl, Paul A.
2014-01-01
Fermenting cells growing exponentially on rich (YPAD) medium transitioned to a slow-growing state as glucose levels declined and their metabolism shifted to respiration. During exponential growth, Fe import and cell growth rates were matched, affording an approximately invariant cellular Fe concentration. During the transitionary period, the high-affinity Fe import rate declined slower than the cell growth rate declined, causing Fe to accumulate, initially as FeIII oxyhydroxide nanoparticles but eventually as mitochondrial and vacuolar Fe. Once in slow-growth mode, Fe import and cell growth rates were again matched, and the cellular Fe concentration was again approximately invariant. Fermenting cells grown on minimal medium (MM) grew more slowly during exponential phase and transitioned to a true stationary state as glucose levels declined. The Fe concentration of MM cells that just entered stationary state was similar to that of YPAD cells, but MM cells continued to accumulate Fe in stationary state. Fe initially accumulated as nanoparticles and high-spin FeII species, but vacuolar FeIII also eventually accumulated. Surprisingly, Fe-packed 5-day-old MM cells suffered no more ROS damage than younger cells, suggesting that Fe concentration alone does not accurately predict the extent of ROS damage. The mode and rate of growth at the time of harvesting dramatically affected cellular Fe content. A mathematical model of Fe metabolism in a growing cell was developed. The model included Fe import via a regulated high-affinity pathway and an unregulated low-affinity pathway. Fe import from the cytosol into vacuoles and mitochondria, and nanoparticle formation were also included. The model captured essential trafficking behavior, demonstrating that cells regulate Fe import in accordance with their overall growth rate and that they misregulate Fe import when nanoparticles accumulate. 
The lack of regulation of Fe in yeast is perhaps unique compared to the tight regulation of other cellular metabolites. This phenomenon likely derives from the unique chemistry associated with Fe nanoparticle formation. PMID:24344915
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Abstract Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as progression for a >15% volume increase, regression for a >15% volume decrease, and stabilization for changes within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% over the 36-month (range 18–87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5%, respectively, at 4 months after CK SRS (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients receiving CK SRS, and it was not predictive of eventual volume regression or progression. 
A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled. PMID:28121913
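A three-point exponential fit of the kind mentioned above has a closed form when the three volumes are equally spaced in time. The sketch below uses synthetic numbers, not patient data, and assumes the model V(t) = V_inf + (V0 - V_inf) * r^(t/T).

```python
# Three-point exponential decay fit: given volumes at three equally spaced
# follow-ups t = 0, T, 2T, the model V(t) = V_inf + (V0 - V_inf) * r**(t/T)
# is determined exactly by the closed form below.
def three_point_exponential(v0, v1, v2):
    """Return (v_inf, r): asymptotic volume and per-interval decay factor."""
    denom = v0 + v2 - 2.0 * v1
    if denom == 0.0:
        raise ValueError("volumes are collinear; no unique exponential fit")
    v_inf = (v0 * v2 - v1 * v1) / denom
    r = (v2 - v1) / (v1 - v0)
    return v_inf, r

def predict(v0, v_inf, r, n_intervals):
    """Extrapolate the fitted exponential n_intervals ahead of t = 0."""
    return v_inf + (v0 - v_inf) * r ** n_intervals
```

With 0 < r < 1 the extrapolated curve converges to v_inf, giving an estimated eventual volume and time course for tumors that are ultimately controlled.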
NASA Astrophysics Data System (ADS)
Barnes, Philip M.; de Lépinay, Bernard Mercier
1997-11-01
Analysis of seismic reflection profiles, swath bathymetry, side-scan sonar imagery, and sediment samples reveal the three-dimensional structure, morphology, and stratigraphic evolution of the central to southern Hikurangi margin accretionary wedge, which is developing in response to thick trench fill sediment and oblique convergence between the Australian and Pacific plates. A seismic stratigraphy of the trench fill turbidites and frontal part of the wedge is constrained by seismic correlations to an already established stratigraphic succession nearby, by coccolith and foraminifera biostratigraphy of three core and dredge samples, and by estimates of stratigraphic thicknesses and rates of accumulation of compacted sediment. Structural and stratigraphic analyses of the frontal part of the wedge yield quantitative data on the timing of inception of thrust faults and folds, on the growth and mechanics of frontal accretion under variable convergence obliquity, and on the amounts and rates of horizontal shortening. The data place constraints on the partitioning of geological strain across the entire southern Hikurangi margin. The principal deformation front at the toe of the wedge is discontinuous and represented by right-stepping thrust faulted and folded ridges up to 1 km high, which develop initially from discontinuous protothrusts. In the central part of the margin near 41°S, where the convergence obliquity is 50°, orthogonal convergence rate is slow (27 mm/yr), and about 75% of the total 4 km of sediment on the Pacific Plate is accreted frontally, the seismically resolvable structures within 30 km of the deformation front accommodate about 6 km of horizontal shortening. At least 80% of this shortening has occurred within the last 0.4±0.1 m.y. at an average rate of 12±3 mm/yr. This rate indicates that the frontal 30 km of the wedge accounts for about 33-55% of the predicted orthogonal contraction across the entire plate boundary zone. 
Despite plate convergence obliquity of 50°, rapid frontal accretion has occurred during the late Quaternary with the principal deformation front migrating seaward up to 50 km within the last 0.5 m.y. (i.e., at a rate of 100 km/m.y.). The structural response to this accretion rate has been a reduction in wedge taper and, consequently, internal deformation behind the present deformation front. Near the southwestern termination of the wedge, where there is an along-the-margin transition to continental transpressional tectonics, the convergence obliquity increases to >56°, and the orthogonal convergence rate decreases to 22 mm/yr, the wedge narrows to 13 km and is characterized simply by two frontal backthrusts and landward-verging folds. These structures have accommodated not more than 0.5 km of horizontal shortening at a rate of < 1 mm/yr, which represents < 5% of the predicted orthogonal shortening across the entire plate boundary in southern North Island. The landward-vergent structural domain may represent a transition zone from rapid frontal accretion associated with low basal friction and high pore pressure ratio in the central part of the margin, to the northern South Island region where the upper and lower plates are locked or at least very strongly coupled.
Artificial dissipation and central difference schemes for the Euler and Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, Eli
1987-01-01
An artificial dissipation model, including boundary treatment, that is employed in many central difference schemes for solving the Euler and Navier-Stokes equations is discussed. Modifications of this model such as the eigenvalue scaling suggested by upwind differencing are examined. Multistage time stepping schemes with and without a multigrid method are used to investigate the effects of changes in the dissipation model on accuracy and convergence. Improved accuracy for inviscid and viscous airfoil flow is obtained with the modified eigenvalue scaling. Slower convergence rates are experienced with the multigrid method using such scaling. The rate of convergence is improved by applying a dissipation scaling function that depends on mesh cell aspect ratio.
An improved VSS NLMS algorithm for active noise cancellation
NASA Astrophysics Data System (ADS)
Sun, Yunzhuo; Wang, Mingjiang; Han, Yufei; Zhang, Congyan
2017-08-01
In this paper, an improved variable step size NLMS algorithm is proposed. NLMS has a fast convergence rate and low steady-state error compared with other traditional adaptive filtering algorithms. However, there is a trade-off between convergence speed and steady-state error that limits the performance of the NLMS algorithm. We propose a new variable step size NLMS algorithm that dynamically changes the step size according to the current error and the iteration count. The proposed algorithm has a simple formulation and easily set parameters, and it effectively resolves the trade-off in NLMS. The simulation results show that the proposed algorithm simultaneously achieves good tracking ability, a fast convergence rate and low steady-state error.
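The NLMS update and the variable-step-size idea can be sketched as follows. The normalized update is standard NLMS; the rule for shrinking μ as the error shrinks is a generic illustration of the idea, not the specific formula proposed in this paper.

```python
import numpy as np

# NLMS system-identification sketch with a simple variable step size: the
# normalized update  w += (mu / (eps + ||x||^2)) * e * x  is standard NLMS,
# and mu is shrunk as the a-priori error shrinks (clamped to [0.05, 1.0]),
# so adaptation is fast while the error is large and gentle near convergence.
rng = np.random.default_rng(7)

def nlms_identify(n_taps=8, n_samples=2000, eps=1e-8):
    h_true = rng.normal(size=n_taps)              # unknown FIR system
    x = rng.normal(size=n_samples)                # white input signal
    w = np.zeros(n_taps)
    for n in range(n_taps, n_samples):
        x_vec = x[n - n_taps:n][::-1]             # most recent samples first
        d = h_true @ x_vec                        # desired (noise-free) output
        e = d - w @ x_vec                         # a-priori error
        mu = min(1.0, max(0.05, abs(e)))          # variable step size
        w += (mu / (eps + x_vec @ x_vec)) * e * x_vec
    return float(np.linalg.norm(w - h_true))
```

With a fixed μ, a large value speeds convergence but raises steady-state misadjustment and a small value does the opposite; letting μ track the error is one way to get both, which is the trade-off the abstract describes.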
Application of an enhanced fuzzy algorithm for MR brain tumor image segmentation
NASA Astrophysics Data System (ADS)
Hemanth, D. Jude; Vijila, C. Kezi Selva; Anitha, J.
2010-02-01
Image segmentation is one of the significant digital image processing techniques commonly used in the medical field. One specific application is tumor detection in abnormal Magnetic Resonance (MR) brain images. Fuzzy approaches are widely preferred for tumor segmentation and generally yield superior results in terms of accuracy. But most fuzzy algorithms suffer from the drawback of a slow convergence rate, which makes the system impractical. In this work, the application of a modified Fuzzy C-means (FCM) algorithm to tackle the convergence problem is explored in the context of brain image segmentation. This modified FCM algorithm employs the concept of quantization to improve the convergence rate besides yielding excellent segmentation efficiency. The algorithm is evaluated on real abnormal MR brain images collected from radiologists. A comprehensive feature vector is extracted from these images and used for the segmentation technique. An extensive feature selection process is performed, which reduces the convergence time and improves the segmentation efficiency. After segmentation, the tumor portion is extracted from the segmented image. A comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior performance for the modified FCM algorithm. Thus, this work highlights the application of the modified algorithm for brain tumor detection in abnormal MR brain images.
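The quantization idea can be sketched as follows: instead of iterating fuzzy C-means over every pixel, cluster the distinct quantized gray levels weighted by their histogram counts, shrinking each iteration from O(pixels) to O(levels). This generic histogram-based FCM illustrates the speed-up principle; it is not the paper's specific modified algorithm.

```python
import numpy as np

# Histogram-based fuzzy C-means on quantized gray levels.  Each distinct uint8
# level is one "sample", weighted by how many pixels carry it, so the inner
# loop touches at most 256 values regardless of image size.
def histogram_fcm(image, n_clusters=3, m=2.0, n_iter=50):
    levels, counts = np.unique(np.asarray(image, dtype=np.uint8).ravel(),
                               return_counts=True)
    levels = levels.astype(float)
    # deterministic spread of initial centers across the observed gray range
    centers = np.quantile(levels, np.linspace(0.0, 1.0, n_clusters + 2)[1:-1])
    for _ in range(n_iter):
        dist = np.abs(levels[None, :] - centers[:, None]) + 1e-9    # (C, L)
        u = dist ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0, keepdims=True)                           # fuzzy memberships
        w = (u ** m) * counts[None, :]                              # histogram weights
        centers = (w * levels[None, :]).sum(axis=1) / w.sum(axis=1)
    return np.sort(centers)
```

On MR brain data the resulting cluster centers (e.g. background, normal tissue, tumor-like intensities) would then seed the pixel-level labeling step.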
Non-exponential kinetics of unfolding under a constant force.
Bell, Samuel; Terentjev, Eugene M
2016-11-14
We examine the population dynamics of naturally folded globular polymers, with a super-hydrophobic "core" inserted at a prescribed point in the polymer chain, unfolding under an application of external force, as in AFM force-clamp spectroscopy. This acts as a crude model for a large class of folded biomolecules with hydrophobic or hydrogen-bonded cores. We find that the introduction of super-hydrophobic units leads to a stochastic variation in the unfolding rate, even when the positions of the added monomers are fixed. This leads to the average non-exponential population dynamics, which is consistent with a variety of experimental data and does not require any intrinsic quenched disorder that was traditionally thought to be at the origin of non-exponential relaxation laws.
NASA Astrophysics Data System (ADS)
Zaigham Zia, Q. M.; Ullah, Ikram; Waqas, M.; Alsaedi, A.; Hayat, T.
2018-03-01
This research elaborates the Soret-Dufour characteristics of mixed convective radiated Casson liquid flow over an exponentially heated surface. Novel features of an exponential space-dependent heat source are introduced. Appropriate variables are implemented to convert the partial differential framework into sets of ordinary differential expressions. A homotopic scheme is employed to construct analytic solutions. The behavior of various embedded variables on the velocity, temperature and concentration distributions is plotted graphically and analyzed in detail. In addition, skin friction coefficients and heat and mass transfer rates are computed and interpreted. The results signify the pronounced characteristics of temperature corresponding to the convective and radiation variables. The concentration responds oppositely to the Soret and Dufour variables.
NASA Technical Reports Server (NTRS)
Raj, S. V.; Pharr, G. M.
1989-01-01
Creep tests conducted on NaCl single crystals in the temperature range from 373 to 1023 K show that true steady state creep is obtained only above 873 K when the ratio of the applied stress to the shear modulus is less than or equal to 0.0001. Under other stress and temperature conditions, corresponding to both power law and exponential creep, the creep rate decreases monotonically with increasing strain. The transition from power law to exponential creep is shown to be associated with increases in the dislocation density, the cell boundary width, and the aspect ratio of the subgrains along the primary slip planes. The relation between dislocation structure and creep behavior is also assessed.
NASA Astrophysics Data System (ADS)
Gireesha, B. J.; Kumar, P. B. Sampath; Mahanthesh, B.; Shehzad, S. A.; Abbasi, F. M.
2018-05-01
The nonlinear convective flow of kerosene-alumina nanoliquid subjected to an exponential space-dependent heat source and temperature-dependent viscosity is investigated here. This study focuses on augmentation of the heat transport rate in a liquid propellant rocket engine, where the kerosene-alumina nanoliquid is considered as the regenerative coolant. Aspects of radiation and viscous dissipation are also covered. The relevant nonlinear system is solved numerically via an RK-based shooting scheme. Diverse flow fields are computed and examined for distinct governing variables. We found that the nanoliquid's temperature increases due to the space-dependent heat source and radiation aspects, and the heat transfer rate is higher for variable viscosity than for constant viscosity.
Scaling analysis and instantons for thermally assisted tunneling and quantum Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Jiang, Zhang; Smelyanskiy, Vadim N.; Isakov, Sergei V.; Boixo, Sergio; Mazzola, Guglielmo; Troyer, Matthias; Neven, Hartmut
2017-01-01
We develop an instantonic calculus to derive an analytical expression for the thermally assisted tunneling decay rate of a metastable state in a fully connected quantum spin model. The tunneling decay problem can be mapped onto the Kramers escape problem of a classical random dynamical field. This dynamical field is simulated efficiently by path-integral quantum Monte Carlo (QMC). We show analytically that the exponential scaling with the number of spins of the thermally assisted quantum tunneling rate and the escape rate of the QMC process are identical. We relate this effect to the existence of a dominant instantonic tunneling path. The instanton trajectory is described by nonlinear dynamical mean-field theory equations for a single-site magnetization vector, which we solve exactly. Finally, we derive scaling relations for the "spiky" barrier shape when the spin tunneling and QMC rates scale polynomially with the number of spins N while a purely classical over-the-barrier activation rate scales exponentially with N.
Exponential quantum spreading in a class of kicked rotor systems near high-order resonances
NASA Astrophysics Data System (ADS)
Wang, Hailong; Wang, Jiao; Guarneri, Italo; Casati, Giulio; Gong, Jiangbin
2013-11-01
Long-lasting exponential quantum spreading was recently found in a simple but very rich dynamical model, namely, an on-resonance double-kicked rotor model [J. Wang, I. Guarneri, G. Casati, and J. B. Gong, Phys. Rev. Lett. 107, 234104 (2011)]. The underlying mechanism, unrelated to the chaotic motion in the classical limit but resting on quasi-integrable motion in a pseudoclassical limit, is identified for one special case. By presenting a detailed study of the same model, this work offers a framework to explain long-lasting exponential quantum spreading under much more general conditions. In particular, we adopt the so-called "spinor" representation to treat the kicked-rotor dynamics under high-order resonance conditions and then exploit the Born-Oppenheimer approximation to understand the dynamical evolution. It is found that the existence of a flat band (or an effectively flat band) is one important feature behind why and how the exponential dynamics emerges. It is also found that a quantitative prediction of the exponential spreading rate based on an interesting and simple pseudoclassical map may be inaccurate. In addition to general interests regarding the question of how exponential behavior in quantum systems may persist for a long time scale, our results should motivate further studies toward a better understanding of high-order resonance behavior in δ-kicked quantum systems.
Constant growth rate can be supported by decreasing energy flux and increasing aerobic glycolysis.
Slavov, Nikolai; Budnik, Bogdan A; Schwab, David; Airoldi, Edoardo M; van Oudenaarden, Alexander
2014-05-08
Fermenting glucose in the presence of enough oxygen to support respiration, known as aerobic glycolysis, is believed to maximize growth rate. We observed increasing aerobic glycolysis during exponential growth, suggesting additional physiological roles for aerobic glycolysis. We investigated such roles in yeast batch cultures by quantifying O2 consumption, CO2 production, amino acids, mRNAs, proteins, posttranslational modifications, and stress sensitivity in the course of nine doublings at constant rate. During this course, the cells support a constant biomass-production rate with decreasing rates of respiration and ATP production but also decrease their stress resistance. As the respiration rate decreases, so do the levels of enzymes catalyzing rate-determining reactions of the tricarboxylic-acid cycle (providing NADH for respiration) and of mitochondrial folate-mediated NADPH production (required for oxidative defense). The findings demonstrate that exponential growth can represent not a single metabolic/physiological state but a continuum of changing states and that aerobic glycolysis can reduce the energy demands associated with respiratory metabolism and stress survival. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ozawa, T.; Miyagi, Y.
2017-12-01
Shinmoe-dake, located in SW Japan, erupted in January 2011 and lava accumulated in the crater (e.g., Ozawa and Kozono, EPS, 2013). The last Vulcanian eruption occurred in September 2011, and no eruption has occurred since. Miyagi et al. (GRL, 2014) analyzed TerraSAR-X and Radarsat-2 SAR data acquired after the last eruption and found continuous inflation in the crater. The inflation decayed with time but had not terminated by May 2013. Since the time series of the inflation volume change rate fit well to an exponential function with a constant term, we suggested that lava extrusion had continued over the long term due to deflation of a shallow magma source and to magma supply from a deeper source. To investigate the subsequent deformation, we applied InSAR to Sentinel-1 and ALOS-2 SAR data. The inflation decayed further and had almost terminated by the end of 2016, meaning that this deformation continued for more than five years after the last eruption. We found that the time series of the inflation volume change rate fits a double-exponential function better than a single-exponential function with a constant term. The exponential component with the short time constant settled almost completely within one year of the last eruption. Although an InSAR result from TerraSAR-X data of November 2011 and May 2013 indicated deflation of a shallow source under the crater, such deformation has not been observed in recent SAR data, suggesting that this component was due to deflation of a shallow magma source with excess pressure. In this study, we found that the long-term component may also have decayed exponentially; this factor may be deflation of a deep source or delayed vesiculation.
NASA Astrophysics Data System (ADS)
Fox, J. B.; Thayer, D. W.; Phillips, J. G.
The effect of low-dose γ-irradiation on the thiamin content of ground pork was studied in the range of 0-14 kGy at 2°C and at radiation doses from 0.5 to 7 kGy at temperatures of -20, -10, 0, 10 and 20°C. The detailed study at 2°C showed that the loss of thiamin was exponential down to 0 kGy. An exponential expression was derived for the effect of radiation dose and irradiation temperature on thiamin loss and compared with a previously derived general linear expression. Both models depicted the data accurately, but the exponential expression showed a significant decrease in the rate of loss between 0 and -10°C. This is the range over which water in meat freezes, the decrease being due to the immobilization of reactive radiolytic products of water in ice crystals.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many cases and has a simple statistical form. Its defining characteristic is a constant hazard rate; the exponential distribution is the special case of the Weibull distribution with shape parameter one. In this paper our effort is to introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and to present the corresponding analytic methods. The cases are limited to models with independent causes of failure, and a non-informative prior distribution is used in our analysis. The model description covers the likelihood function, followed by the posterior function and the point and interval estimates of the hazard function and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
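For independent exponential causes of failure, the overall survival and the crude probability of failure from a specific cause have simple closed forms; a small illustrative sketch (not the paper's Bayesian machinery) is:

```python
import math

def reliability(t, lam):
    """Exponential survival: R(t) = exp(-lambda * t); the hazard is the constant lambda."""
    return math.exp(-lam * t)

def overall_reliability(t, lams):
    """Independent exponential competing risks: the rates add, so
    R_all(t) = exp(-(lam_1 + ... + lam_J) * t)."""
    return math.exp(-sum(lams) * t)

def crude_prob(t, j, lams):
    """Crude probability of failing from cause j by time t in the
    presence of all other causes: (lam_j / lam_tot) * (1 - exp(-lam_tot * t))."""
    tot = sum(lams)
    return lams[j] / tot * (1 - math.exp(-tot * t))
```

The crude probabilities over all causes sum to the overall failure probability 1 - R_all(t), which is a useful sanity check on any competing-risks computation.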
On the Convergence of an Implicitly Restarted Arnoldi Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehoucq, Richard B.
We show that Sorensen's [35] implicitly restarted Arnoldi method (including its block extension) is simultaneous iteration with an implicit projection step to accelerate convergence to the invariant subspace of interest. By using the geometric convergence theory for simultaneous iteration due to Watkins and Elsner [43], we prove that an implicitly restarted Arnoldi method can achieve a super-linear rate of convergence to the dominant invariant subspace of a matrix. Moreover, we show how an IRAM computes a nested sequence of approximations for the partial Schur decomposition associated with the dominant invariant subspace of a matrix.
Control of Growth Rate by Initial Substrate Concentration at Values Below Maximum Rate
Gaudy, Anthony F.; Obayashi, Alan; Gaudy, Elizabeth T.
1971-01-01
The hyperbolic relationship between specific growth rate, μ, and substrate concentration, proposed by Monod and used since as the basis for the theory of steady-state growth in continuous-flow systems, was tested experimentally in batch cultures. Use of a Flavobacterium sp. exhibiting a high saturation constant for growth in glucose minimal medium allowed direct measurement of growth rate and substrate concentration throughout the growth cycle in medium containing a rate-limiting initial concentration of glucose. Specific growth rates were also measured for a wide range of initial glucose concentrations. A plot of specific growth rate versus initial substrate concentration was found to fit the hyperbolic equation. However, the instantaneous relationship between specific growth rate and substrate concentration during growth, which is stated by the equation, was not observed. Well defined exponential growth phases were developed at initial substrate concentrations below that required for support of the maximum exponential growth rate and a constant doubling time was maintained until 50% of the substrate had been used. It is suggested that the external substrate concentration initially present “sets” the specific growth rate by establishing a steady-state internal concentration of substrate, possibly through control of the number of permeation sites. PMID:5137579
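The hyperbolic Monod relation μ = μ_max · S / (K_s + S) and a simple batch-growth integration can be sketched as follows (parameter values are illustrative, not from the study):

```python
def monod_mu(S, mu_max, Ks):
    """Monod hyperbolic relation between specific growth rate and substrate concentration."""
    return mu_max * S / (Ks + S)

def batch_growth(X0, S0, mu_max, Ks, yield_coeff, dt=0.01, t_end=10.0):
    """Euler integration of batch growth: dX/dt = mu(S) * X, dS/dt = -mu(S) * X / Y."""
    X, S, t = X0, S0, 0.0
    out = [(t, X, S)]
    while t < t_end and S > 0:
        mu = monod_mu(S, mu_max, Ks)
        dX = mu * X * dt
        X += dX
        S = max(S - dX / yield_coeff, 0.0)  # substrate consumed per unit biomass
        t += dt
        out.append((t, X, S))
    return out
```

While S remains far above K_s, μ stays near μ_max and growth is effectively exponential with a constant doubling time, which is consistent with the observation reported above.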
Li, Xiao-Jian; Yang, Guang-Hong
2018-01-01
This paper is concerned with the adaptive decentralized fault-tolerant tracking control problem for a class of uncertain interconnected nonlinear systems with unknown strong interconnections. An algebraic graph theory result is introduced to address the considered interconnections. In addition, to achieve the desirable tracking performance, a neural-network-based robust adaptive decentralized fault-tolerant control (FTC) scheme is given to compensate the actuator faults and system uncertainties. Furthermore, via the Lyapunov analysis method, it is proven that all the signals of the resulting closed-loop system are semiglobally bounded, and the tracking errors of each subsystem exponentially converge to a compact set, whose radius is adjustable by choosing different controller design parameters. Finally, the effectiveness and advantages of the proposed FTC approach are illustrated with two simulated examples.
Dissociative recombination of O2(+), NO(+) and N2(+)
NASA Technical Reports Server (NTRS)
Guberman, S. L.
1983-01-01
A new L(2) approach for the calculation of the threshold molecular capture width needed for the determination of DR cross sections was developed. The widths are calculated with Fermi's golden rule by substituting Rydberg orbitals for the free electron continuum coulomb orbital. It is shown that the calculated width converges exponentially as the effective principal quantum number of the Rydberg orbital increases. The threshold capture width is then easily obtained. Since atmospheric recombination involves very low energy electrons, the threshold capture widths are essential to the calculation of DR cross sections for the atmospheric species studied here. The approach described makes use of bound state computer codes already in use. A program that collects width matrix elements over CI wavefunctions for the initial and final states is described.
Human motion planning based on recursive dynamics and optimal control techniques
NASA Technical Reports Server (NTRS)
Lo, Janzen; Huang, Gang; Metaxas, Dimitris
2002-01-01
This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.
Modified Newton-Raphson GRAPE methods for optimal control of spin systems
NASA Astrophysics Data System (ADS)
Goodwin, D. L.; Kuprov, Ilya
2016-05-01
Quadratic convergence throughout the active space is achieved for the gradient ascent pulse engineering (GRAPE) family of quantum optimal control algorithms. We demonstrate in this communication that the Hessian of the GRAPE fidelity functional is unusually cheap, having the same asymptotic complexity scaling as the functional itself. This leads to the possibility of using very efficient numerical optimization techniques. In particular, the Newton-Raphson method with a rational function optimization (RFO) regularized Hessian is shown in this work to require fewer system trajectory evaluations than any other algorithm in the GRAPE family. This communication describes algebraic and numerical implementation aspects (matrix exponential recycling, Hessian regularization, etc.) for the RFO Newton-Raphson version of GRAPE and reports benchmarks for common spin state control problems in magnetic resonance spectroscopy.
A fast reconstruction algorithm for fluorescence optical diffusion tomography based on preiteration.
Song, Xiaolei; Xiong, Xiaoyun; Bai, Jing
2007-01-01
Fluorescence optical diffusion tomography in the near-infrared (NIR) band is considered one of the most promising approaches to noninvasive molecular imaging. Many reconstruction approaches rely on iterative methods for data inversion, but these are time-consuming and far from meeting real-time imaging demands. In this work, a fast preiteration algorithm based on the generalized inverse matrix is proposed. The method needs only one matrix-vector multiplication online, by pushing the iteration process offline. In the preiteration process, a second-order iterative format is employed to exponentially accelerate convergence. Simulations based on an analytical diffusion model show that the distribution of fluorescent yield can be well estimated by this algorithm and that the reconstruction speed is remarkably increased.
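The abstract does not name its second-order iterative format; a common choice is the Schulz iteration X ← X(2I - AX), whose error contracts quadratically, i.e. convergence is exponentially accelerated. A minimal sketch under that assumption:

```python
def matmul(A, B):
    """Dense matrix product for small nested-list matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def schulz_inverse(A, iters=25):
    """Second-order iteration X <- X(2I - AX), converging quadratically to the
    inverse of A (illustrative; the paper's exact offline scheme is not given)."""
    n = len(A)
    # X0 = A^T / (||A||_1 * ||A||_inf) guarantees ||I - A X0|| < 1
    norm1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))
    norminf = max(sum(abs(v) for v in row) for row in A)
    X = [[A[j][i] / (norm1 * norminf) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        AX = matmul(A, X)
        M = [[(2.0 if i == j else 0.0) - AX[i][j] for j in range(n)] for i in range(n)]
        X = matmul(X, M)
    return X
```

Once the (generalized) inverse is computed offline this way, the online reconstruction reduces to one matrix-vector product, matching the structure the abstract describes.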
An algorithm for testing the efficient market hypothesis.
Boboc, Ioana-Andreea; Dinică, Mihai-Cristian
2013-01-01
The objective of this research is to examine the efficiency of EUR/USD market through the application of a trading system. The system uses a genetic algorithm based on technical analysis indicators such as Exponential Moving Average (EMA), Moving Average Convergence Divergence (MACD), Relative Strength Index (RSI) and Filter that gives buying and selling recommendations to investors. The algorithm optimizes the strategies by dynamically searching for parameters that improve profitability in the training period. The best sets of rules are then applied on the testing period. The results show inconsistency in finding a set of trading rules that performs well in both periods. Strategies that achieve very good returns in the training period show difficulty in returning positive results in the testing period, this being consistent with the efficient market hypothesis (EMH).
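The technical indicators named in the abstract (EMA, MACD, RSI) have standard definitions; a minimal sketch with conventional default spans (assumed defaults, not the parameters the genetic algorithm searches over) is:

```python
def ema(prices, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    alpha = 2.0 / (span + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def macd(prices, fast=12, slow=26, signal=9):
    """MACD line (fast EMA minus slow EMA) and its signal-line EMA."""
    line = [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
    return line, ema(line, signal)

def rsi(prices, period=14):
    """Relative Strength Index over the first `period` price changes."""
    deltas = [b - a for a, b in zip(prices, prices[1:])]
    gains = [max(d, 0.0) for d in deltas[:period]]
    losses = [max(-d, 0.0) for d in deltas[:period]]
    avg_gain, avg_loss = sum(gains) / period, sum(losses) / period
    if avg_loss == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)
```

A trading rule then typically fires on crossings of the MACD line and its signal line, or on RSI overbought/oversold thresholds; the genetic algorithm's role in the study is to search over such parameters.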
NASA Technical Reports Server (NTRS)
Smith, R. C.; Bowers, K. L.
1991-01-01
A fully Sinc-Galerkin method for recovering the spatially varying stiffness and damping parameters in Euler-Bernoulli beam models is presented. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which converges exponentially and is valid on the infinite time interval. Hence the method avoids the time-stepping which is characteristic of many of the forward schemes which are used in parameter recovery algorithms. Tikhonov regularization is used to stabilize the resulting inverse problem, and the L-curve method for determining an appropriate value of the regularization parameter is briefly discussed. Numerical examples are given which demonstrate the applicability of the method for both individual and simultaneous recovery of the material parameters.
A new formation control of multiple underactuated surface vessels
NASA Astrophysics Data System (ADS)
Xie, Wenjing; Ma, Baoli; Fernando, Tyrone; Iu, Herbert Ho-Ching
2018-05-01
This work investigates a new formation control problem for multiple underactuated surface vessels. The controller design is based on the input-output linearisation technique, graph theory, consensus ideas and some nonlinear tools. The proposed smooth time-varying distributed control law guarantees that the multiple underactuated surface vessels globally exponentially converge to a desired geometric shape centred at the initial average position of the vessels. Furthermore, the stability analysis of the zero dynamics proves that the orientations of the vessels tend to constants that depend on the initial values, while the velocities and control inputs of the vessels decay to zero. All results are obtained under the communication scenario of a static directed balanced graph containing a spanning tree. The effectiveness of the proposed distributed control scheme is demonstrated with a simulation example.
Tang, Hong; Ruan, Chengjie; Qiu, Tianshuang; Park, Yongwan; Xiao, Shouzhong
2013-08-01
The relationships between the amplitude of the first heart sound (S1) and the rising rate of left ventricular pressure (LVP) concluded in previous studies were not consistent. Some researchers believed the relationship was positively linear; others stated the relationship was only positively correlated. To further investigate this relationship, this study simultaneously sampled the external phonocardiogram, electrocardiogram, and intracardiac pressure in the left ventricle in three anesthetized dogs, while invoking wide hemodynamic changes using various doses of epinephrine. The relationship between the maximum amplitude of S1 and the maximum rising rate of LVP and the relationship between the amplitude of dominant peaks/valleys and the corresponding rising rate of LVP were examined by linear, quadratic, cubic, and exponential models. The results showed that the relationships are best fit by nonlinear exponential models.
Decomposition rates for hand-piled fuels
Clinton S. Wright; Alexander M. Evans; Joseph C. Restaino
2017-01-01
Hand-constructed piles in eastern Washington and north-central New Mexico were weighed periodically between October 2011 and June 2015 to develop decay-rate constants that are useful for estimating the rate of piled biomass loss over time. Decay-rate constants (k) were determined by fitting negative exponential curves to time series of pile weight for each site. Piles...
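Fitting a negative exponential decay curve W(t) = W0 · exp(-k t) to a time series of pile weights reduces to linear least squares on the log-transformed weights; an illustrative sketch (variable names assumed, not from the study):

```python
import math

def fit_decay_constant(times, weights):
    """Least-squares fit of W(t) = W0 * exp(-k t) via regression on log(weight).
    Returns the decay-rate constant k and the fitted initial weight W0."""
    ys = [math.log(w) for w in weights]
    n = len(times)
    tbar, ybar = sum(times) / n, sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    return -slope, math.exp(ybar - slope * tbar)
```

With k in hand, the fraction of piled biomass remaining after t years is simply exp(-k t).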
Arima model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study compares the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison focuses on the ability of both methods to forecast with different numbers of data sources and different lengths of forecasting period. For this purpose, three time series are used: the price of crude palm oil (RM/tonne), the exchange rate of the Malaysian Ringgit (RM) against the Great Britain Pound (GBP), and the price of SMR 20 rubber (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction errors using Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE) and Mean Absolute Deviation (MAD). The study shows that the ARIMA model produces better predictions for long-term forecasting with limited data sources, but not for time series with a narrow range from one point to the next, as in the exchange rate series. Conversely, the Exponential Smoothing Method produces better forecasts for the exchange rate, which has a narrow range between points, but not for longer forecasting periods.
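Simple exponential smoothing and the three error measures used for the comparison can be sketched as follows (α is an assumed smoothing parameter, not the study's fitted value):

```python
def ses_forecasts(series, alpha=0.3):
    """One-step-ahead forecasts from simple exponential smoothing;
    returned element i is the forecast of series[i + 1]."""
    level = series[0]
    preds = []
    for y in series[1:]:
        preds.append(level)                    # forecast made before observing y
        level = alpha * y + (1 - alpha) * level
    return preds

def mse(actual, pred):
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(pred)

def mape(actual, pred):
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(pred)

def mad(actual, pred):
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(pred)
```

ARIMA fitting is considerably more involved (differencing plus AR/MA parameter estimation), which is why library implementations are normally used for that side of the comparison.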
Molecular evolution and phylodynamics of hepatitis B virus infection circulating in Iran.
Mozhgani, Sayed-Hamidreza; Malekpour, Seyed Amir; Norouzi, Mehdi; Ramezani, Fatemeh; Rezaee, Seyed Abdolrahim; Poortahmasebi, Vahdat; Sadeghi, Mehdi; Alavian, Seyed Moayed; Zarei-Ghobadi, Mohadeseh; Ghaziasadi, Azam; Karimzadeh, Hadi; Malekzadeh, Reza; Ziaee, Masood; Abedi, Farshid; Ataei, Behrooz; Yaran, Majid; Sayad, Babak; Jahantigh, Hamid Reza; Somi, Mohammad Hossein; Sarizadeh, Gholamreza; Sanei-Moghaddam, Ismail; Mansour-Ghanaei, Fariborz; Keyvani, Hossein; Kalantari, Ebrahim; Fakhari, Zahra; Geravand, Babak; Jazayeri, Seyed Mohammad
2018-06-01
Previous local and national Iranian publications indicate that all Iranian hepatitis B virus (HBV) strains belong to HBV genotype D. The aim of this study was to analyze the evolutionary history of HBV infection in Iran for the first time, based on an intensive phylodynamic study. The evolutionary parameters, time to most recent common ancestor (tMRCA), and population dynamics of infection were investigated using Bayesian Markov chain Monte Carlo (BMCMC). The effective sample size (ESS) and sampling convergence were monitored. After sampling from the posterior distribution of the nucleotide substitution rate and other evolutionary parameters, point estimates (medians) of these parameters were obtained. All Iranian HBV isolates were of genotype D, subtype ayw2. HBV in Iran is regarded as having first emerged on the eastern border before moving westward to Isfahan province, and from there to the south and west of the country. The tMRCA of HBV in Iran was estimated to be around 1894, with a 95% credible interval between the years 1701 and 1957. The effective number of infections increased exponentially from around 1925 to 1960. Conversely, from around 1992 onwards, the effective number of HBV infections has decreased at a very high rate. Phylodynamic inference clearly demonstrates a unique homogeneous pattern of HBV genotype D compatible with a steady configuration of the decreased effective number of infections in recent years, possibly due to the implementation of blood donation screening and vaccination programs. Adequate molecular epidemiology databases for HBV are crucial for infection prevention and treatment programs.
NASA Astrophysics Data System (ADS)
Buxton, T. H.
2015-12-01
Salmon spawning in streams involves the female salmon digging a pit in the bed where she deposits eggs for fertilization before covering them with gravel excavated from the next pit upstream. Sequences of pit excavation and filling winnow fines, loosen sediment, and move bed material into a tailspill mound resembling the shape of a dune. Research suggests salmonid nests (redds) destabilize streambeds by reducing friction between loosened grains and converging flow that elevates shear stress on redd topography. However, bed stability may be enhanced by form drag from redds in clusters that lower shear stress on the granular bed, but this effect will vary with the proportion of the bed surface that is occupied by redds (P). I used simulated redds and water-worked ("unspawned") beds in a laboratory flume to evaluate these competing influences on grain stability and bedload transport rates with P=0.12, 0.34, and 0.41. Results indicate that competence (largest-grain) and reference transport rate estimates of critical conditions for particle entrainment inversely relate to P. Bedload transport increased as exponential functions of P and excess boundary shear stress. Therefore, redd form drag did not overcome the destabilizing effects of spawning. Instead, grain mobility and bedload transport increased with P because larger areas of the bed were composed of relatively loose, unstable grains and redd topography that experienced elevated shear stress. Consequently, the presence of redds in fish-bearing streams likely reduces the effects of sedimentation from landscape disturbance on stream habitats that salmon use for reproduction.
Fourier analysis of the SOR iteration
NASA Technical Reports Server (NTRS)
Leveque, R. J.; Trefethen, L. N.
1986-01-01
The SOR iteration for solving linear systems of equations depends upon an overrelaxation factor omega. It is shown that for the standard model problem of Poisson's equation on a rectangle, the optimal omega and corresponding convergence rate can be rigorously obtained by Fourier analysis. The trick is to tilt the space-time grid so that the SOR stencil becomes symmetrical. The tilted grid also gives insight into the relation between convergence rates of several variants.
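For the Poisson model problem the Fourier/symmetrization analysis yields the classical closed form ω_opt = 2/(1 + sin πh), with asymptotic convergence factor ω_opt − 1; a small numerical sketch of those standard formulas (my own illustration, not code from the paper):

```python
import math

def optimal_sor(n):
    """Optimal over-relaxation factor and asymptotic convergence factor
    for the 5-point Poisson problem on an n-by-n interior grid, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    rho_jacobi = math.cos(math.pi * h)                    # Jacobi spectral radius
    omega = 2.0 / (1.0 + math.sqrt(1.0 - rho_jacobi ** 2))
    return omega, omega - 1.0

omega, rho_sor = optimal_sor(63)   # h = 1/64
```

Here sqrt(1 − cos² πh) = sin πh, so ω_opt ≈ 1.906 and ρ_SOR ≈ 0.906 for h = 1/64, versus a Gauss-Seidel factor of cos² πh ≈ 0.998.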
Dudas, Robert A; Colbert, Jorie M; Goldstein, Seth; Barone, Michael A
2012-01-01
Medical knowledge is one of six core competencies in medicine, and medical student assessments should be valid and reliable. We assessed the relationship between faculty and resident global assessments of pediatric medical students' knowledge and performance on a standardized test of medical knowledge. This was a retrospective cross-sectional study of medical students on a pediatric clerkship in academic year 2008-2009 at one academic health center. Faculty and residents rated students' clinical knowledge on a 5-point Likert scale. The inter-rater reliability of clinical knowledge ratings was assessed by calculating the intra-class correlation coefficient (ICC) for residents' ratings, faculty ratings, and both rating types combined. Convergent validity between clinical knowledge ratings and scores on the National Board of Medical Examiners (NBME) clinical subject examination in pediatrics was assessed with the Pearson product-moment correlation coefficient and the coefficient of determination. There was moderate agreement for global clinical knowledge ratings by faculty and moderate agreement for ratings by residents. The agreement was also moderate when faculty and resident ratings were combined. Global ratings of clinical knowledge had high convergent validity with pediatric examination scores when students were rated by both residents and faculty. Our findings provide evidence for convergent validity of global assessment of medical students' clinical knowledge with NBME subject examination scores in pediatrics. Copyright © 2012 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
Super-convergence of Discontinuous Galerkin Method Applied to the Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Atkins, Harold L.
2009-01-01
The practical benefits of the hyper-accuracy properties of the discontinuous Galerkin method are examined. In particular, we demonstrate that some flow attributes exhibit super-convergence even in the absence of any post-processing technique. Theoretical analysis suggests that flow features that are dominated by global propagation speeds and decay or growth rates should be super-convergent. Several discrete forms of the discontinuous Galerkin method are applied to the simulation of unsteady viscous flow over a two-dimensional cylinder. Convergence of the period of the naturally occurring oscillation is examined and shown to converge at a rate of 2p+1, where p is the polynomial degree of the discontinuous Galerkin basis. Comparisons are made between the different discretizations and with theoretical analysis.
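A super-convergence rate like 2p+1 is usually confirmed by computing an observed order of accuracy from errors on two successively refined grids; a generic sketch (the error values below are hypothetical, not the paper's data):

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed convergence order from errors measured on two grids
    related by the given refinement ratio."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

# For a p = 2 basis, halving the mesh should shrink a super-convergent
# quantity by about 2**(2p+1) = 32
order = observed_order(1.0e-3, 1.0e-3 / 32.0)
```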
Gerencsér, Máté; Jentzen, Arnulf; Salimova, Diyora
2017-11-01
In a recent article (Jentzen et al. 2016 Commun. Math. Sci. 14, 1477-1500 (doi:10.4310/CMS.2016.v14.n6.a1)), it has been established that, for every arbitrarily slow convergence speed and every natural number d ∈ {4,5,…}, there exist d-dimensional stochastic differential equations with infinitely often differentiable and globally bounded coefficients such that no approximation method based on finitely many observations of the driving Brownian motion can converge in absolute mean to the solution faster than the given speed of convergence. In this paper, we strengthen the above result by proving that this slow convergence phenomenon also arises in two (d = 2) and three (d = 3) space dimensions.
Pinkel, D.
1987-11-30
An obstruction across the flow chamber creates a one-dimensional convergence of the sheath fluid. A passageway in the obstruction directs flat cells close to the region of one-dimensional convergence in the sheath fluid, providing proper orientation of flat cells at high flow rates. 6 figs.
Multigrid Strategies for Viscous Flow Solvers on Anisotropic Unstructured Meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1998-01-01
Unstructured multigrid techniques for relieving the stiffness associated with high-Reynolds number viscous flow simulations on extremely stretched grids are investigated. One approach consists of employing a semi-coarsening or directional-coarsening technique, based on the directions of strong coupling within the mesh, in order to construct more optimal coarse grid levels. An alternate approach is developed which employs directional implicit smoothing with regular fully coarsened multigrid levels. The directional implicit smoothing is obtained by constructing implicit lines in the unstructured mesh based on the directions of strong coupling. Both approaches yield large increases in convergence rates over the traditional explicit full-coarsening multigrid algorithm. However, maximum benefits are achieved by combining the two approaches in a coupled manner into a single algorithm. An order of magnitude increase in convergence rate over the traditional explicit full-coarsening algorithm is demonstrated, and convergence rates for high-Reynolds number viscous flows which are independent of the grid aspect ratio are obtained. Further acceleration is provided by incorporating low-Mach-number preconditioning techniques, and a Newton-GMRES strategy which employs the multigrid scheme as a preconditioner. The compounding effects of these various techniques on speed of convergence are documented through several example test cases.
Nubia-Arabia-Eurasia plate motions and the dynamics of Mediterranean and Middle East tectonics
NASA Astrophysics Data System (ADS)
Reilinger, Robert; McClusky, Simon
2011-09-01
We use geodetic and plate tectonic observations to constrain the tectonic evolution of the Nubia-Arabia-Eurasia plate system. Two phases of slowing of Nubia-Eurasia convergence, each of which resulted in an ~50 per cent decrease in the rate of convergence, coincided with the initiation of Nubia-Arabia continental rifting along the Red Sea and Somalia-Arabia rifting along the Gulf of Aden at 24 ± 4 Ma, and the initiation of oceanic rifting along the full extent of the Gulf of Aden at 11 ± 2 Ma. In addition, both the northern and southern Red Sea (Nubia-Arabia plate boundary) underwent changes in the configuration of extension at 11 ± 2 Ma, including the transfer of extension from the Suez Rift to the Gulf of Aqaba/Dead Sea fault system in the north, and from the central Red Sea Basin (Bab al Mandab) to the Afar volcanic zone in the south. While Nubia-Eurasia convergence slowed, the rate of Arabia-Eurasia convergence remained constant within the resolution of our observations, and is indistinguishable from the present-day global positioning system rate. The timing of the initial slowing of Nubia-Eurasia convergence (24 ± 4 Ma) corresponds to the initiation of extensional tectonics in the Mediterranean Basin, and the second phase of slowing to changes in the character of Mediterranean extension reported at ~11 Ma. These observations are consistent with the hypothesis that changes in Nubia-Eurasia convergence, and associated Nubia-Arabia divergence, are the fundamental cause of both Mediterranean and Middle East post-Late Oligocene tectonics. We speculate about the implications of these kinematic relationships for the dynamics of Nubia-Arabia-Eurasia plate interactions, and favour the interpretation that slowing of Nubia-Eurasia convergence, and the resulting tectonic changes in the Mediterranean Basin and Middle East, resulted from a decrease in slab pull transmitted from the subducted Arabian lithosphere across the evolving Nubia-Arabia plate boundary.
Quaternary tectonic evolution of the Pamir-Tian Shan convergence zone, Northwest China
NASA Astrophysics Data System (ADS)
Thompson Jobe, Jessica Ann; Li, Tao; Chen, Jie; Burbank, Douglas W.; Bufe, Aaron
2017-12-01
The Pamir-Tian Shan collision zone in the western Tarim Basin, northwest China, formed from rapid and ongoing convergence in response to the Indo-Eurasian collision. The arid landscape preserves suites of fluvial terraces crossing structures active since the late Neogene that create fault and fold scarps recording Quaternary deformation. Using geologic and geomorphic mapping, differential GPS surveys of deformed terraces, and optically stimulated luminescence dating, we create a synthesis of the active structures that delineate the timing, rate, and migration of Quaternary deformation during ongoing convergence. New deformation rates on eight faults and folds, when combined with previous studies, highlight the spatial and temporal patterns of deformation within the Pamir-Tian Shan convergence zone during the Quaternary. Terraces spanning 130 to 8 ka record deformation rates between 0.1 and 5.6 mm/yr on individual structures. In the westernmost Tarim Basin, where the Pamir and Tian Shan are already juxtaposed, the fastest rates occur on actively deforming structures at the interface of the Pamir-Tian Shan orogens. Farther east, as the separation between the Pamir-Tian Shan orogens increases, the deformation has not been concentrated on a single structure, but rather has been concurrently distributed across a zone of faults and folds in the Kashi-Atushi fold-and-thrust belt and along the NE Pamir margin, where shortening rates vary on individual structures during the Quaternary. Although numerous structures accommodate the shortening and the locus of deformation shifts during the Quaternary, the total shortening across the western Tarim Basin has remained steady and approximately matches the current geodetic rate of 6-9 mm/yr.
Strain accumulation across the Prince William Sound asperity, Southcentral Alaska
NASA Astrophysics Data System (ADS)
Savage, J. C.; Svarc, J. L.; Lisowski, M.
2015-03-01
The surface velocities predicted by the conventional subduction model are compared to velocities measured in a GPS array (surveyed in 1993, 1995, 1997, 2000, and 2004) spanning the Prince William Sound asperity. The observed velocities in the comparison have been corrected to remove the contributions from postseismic (1964 Alaska earthquake) mantle relaxation. Except at the most seaward monument (located on Middleton Island at the seaward edge of the continental shelf, just 50 km landward of the deformation front in the Aleutian Trench), the corrected velocities qualitatively agree with those predicted by an improved, two-dimensional, back-slip subduction model in which the locked megathrust coincides with the plate interface identified by seismic refraction surveys, and the back-slip rate is equal to the plate convergence rate. A better fit to the corrected velocities is furnished by either a back-slip rate 20% greater than the plate convergence rate or a 30% shallower megathrust. The shallow megathrust in the latter fit may be an artifact of the uniform half-space Earth model used in the inversion. Back slip at the plate convergence rate on the megathrust mapped by refraction surveys would fit the data as well if the rigidity of the underthrust plate were twice that of the overlying plate, a rigidity contrast higher than expected. The anomalous motion at Middleton Island is attributed to continuous slip at nearly the plate convergence rate on a postulated listric fault that splays off the megathrust at a depth of about 12 km and outcrops on the continental slope south-southeast of Middleton Island.
Convergence behavior of the random phase approximation renormalized correlation energy
NASA Astrophysics Data System (ADS)
Bates, Jefferson E.; Sensenig, Jonathon; Ruzsinszky, Adrienn
2017-05-01
Based on the random phase approximation (RPA), RPA renormalization [J. E. Bates and F. Furche, J. Chem. Phys. 139, 171103 (2013), 10.1063/1.4827254] is a robust many-body perturbation theory that works for molecules and materials because it does not diverge as the Kohn-Sham gap approaches zero. Additionally, RPA renormalization enables the simultaneous calculation of RPA and beyond-RPA correlation energies since the total correlation energy is the sum of a series of independent contributions. The first-order approximation (RPAr1) yields the dominant beyond-RPA contribution to the correlation energy for a given exchange-correlation kernel, but systematically underestimates the total beyond-RPA correction. For both the homogeneous electron gas model and real systems, we demonstrate numerically that RPA renormalization beyond first order converges monotonically to the infinite-order beyond-RPA correlation energy for several model exchange-correlation kernels and that the rate of convergence is principally determined by the choice of the kernel and spin polarization of the ground state. The monotonic convergence is rationalized from an analysis of the RPA renormalized correlation energy corrections, assuming the exchange-correlation kernel and response functions satisfy some reasonable conditions. For spin-unpolarized atoms, molecules, and bulk solids, we find that RPA renormalization is typically converged to 1 meV error or less by fourth order regardless of the band gap or dimensionality. Most spin-polarized systems converge at a slightly slower rate, with errors on the order of 10 meV at fourth order and typically requiring up to sixth order to reach 1 meV error or less. Slowest to converge, however, open-shell atoms present the most challenging case and require many higher orders to converge.
Estimates of projection overlap and zones of convergence within frontal-striatal circuits.
Averbeck, Bruno B; Lehman, Julia; Jacobson, Moriah; Haber, Suzanne N
2014-07-16
Frontal-striatal circuits underlie important decision processes, and pathology in these circuits is implicated in many psychiatric disorders. Studies have shown a topographic organization of cortical projections into the striatum. However, work has also shown that there is considerable overlap in the striatal projection zones of nearby cortical regions. To characterize this in detail, we quantified the complete striatal projection zones from 34 cortical injection locations in rhesus monkeys. We first fit a statistical model that showed that the projection zone of a cortical injection site could be predicted with considerable accuracy using a cross-validated model estimated on only the other injection sites. We then examined the fraction of overlap in striatal projection zones as a function of distance between cortical injection sites, and found that there was a highly regular relationship. Specifically, nearby cortical locations had as much as 80% overlap, and the amount of overlap decayed exponentially as a function of distance between the cortical injection sites. Finally, we found that some portions of the striatum received inputs from all the prefrontal regions, making these striatal zones candidates as information-processing hubs. Thus, the striatum is a site of convergence that allows integration of information spread across diverse prefrontal cortical areas. Copyright © 2014 the authors.
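The distance dependence reported above, with overlap decaying exponentially, can be summarized as overlap(d) ≈ a·exp(−b·d) and fit log-linearly; a sketch on synthetic numbers (the parameter values are illustrative, not the measured ones):

```python
import math

def fit_exponential_decay(distances, overlaps):
    """Least-squares fit of overlap = a * exp(-b * distance),
    done as linear regression on log(overlap)."""
    ys = [math.log(o) for o in overlaps]
    n = len(distances)
    mx = sum(distances) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in distances)
    sxy = sum((x - mx) * (y - my) for x, y in zip(distances, ys))
    b = -sxy / sxx                     # decay constant
    a = math.exp(my + b * mx)          # amplitude at distance 0
    return a, b
```

Noise-free synthetic data generated with a = 0.8 and b = 0.5 are recovered exactly by this fit.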
Subcritical Multiplicative Chaos for Regularized Counting Statistics from Random Matrix Theory
NASA Astrophysics Data System (ADS)
Lambert, Gaultier; Ostrovsky, Dmitry; Simm, Nick
2018-05-01
For an N × N Haar distributed random unitary matrix U_N, we consider the random field defined by counting the number of eigenvalues of U_N in a mesoscopic arc centered at the point u on the unit circle. We prove that after regularizing at a small scale ε_N > 0, the renormalized exponential of this field converges as N → ∞ to a Gaussian multiplicative chaos measure in the whole subcritical phase. We discuss implications of this result for obtaining a lower bound on the maximum of the field. We also show that the moments of the total mass converge to a Selberg-like integral and by taking a further limit as the size of the arc diverges, we establish part of the conjectures in Ostrovsky (Nonlinearity 29(2):426-464, 2016). By an analogous construction, we prove that the multiplicative chaos measure coming from the sine process has the same distribution, which strongly suggests that this limiting object should be universal. Our approach to the L¹-phase is based on a generalization of the construction in Berestycki (Electron. Commun. Probab. 22(27):12, 2017) to random fields which are only asymptotically Gaussian. In particular, our method could have applications to other random fields coming from either random matrix theory or a different context.
Large-scale exact diagonalizations reveal low-momentum scales of nuclei
NASA Astrophysics Data System (ADS)
Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.
2018-03-01
Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10^10 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for ⁶Li in model spaces up to N_max = 22 and to reveal the ⁴He + d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that are not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
NASA Astrophysics Data System (ADS)
Ganguly, S.; Lubetzky, E.; Martinelli, F.
2015-05-01
The East process is a one-dimensional kinetically constrained interacting particle system, introduced in the physics literature in the early 1990s to model liquid-glass transitions. Spectral gap estimates of Aldous and Diaconis in 2002 imply that its mixing time on L sites has order L. We complement that result and show cutoff with an -window. The main ingredient is an analysis of the front of the process (its rightmost zero in the setup where zeros facilitate updates to their right). One expects the front to advance as a biased random walk, whose normal fluctuations would imply cutoff with an -window. The law of the process behind the front plays a crucial role: Blondel showed that it converges to an invariant measure ν, on which very little is known. Here we obtain quantitative bounds on the speed of convergence to ν, finding that it is exponentially fast. We then derive that the increments of the front behave as a stationary mixing sequence of random variables, and a Stein-method-based argument of Bolthausen ('82) implies a CLT for the location of the front, yielding the cutoff result. Finally, we supplement these results by a study of analogous kinetically constrained models on trees, again establishing cutoff, yet this time with an O(1)-window.
Discrete-Time Deterministic $Q$-Learning: A Novel Convergence Analysis.
Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo
2017-05-01
In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed Q-learning algorithm, the iterative Q function is updated for all states and controls, instead of for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, and the convergence criterion on the learning rates required by traditional Q-learning algorithms is simplified. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties for the undiscounted case of the deterministic Q-learning algorithm are first developed. Then, considering the discount factor, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and to compute the iterative control law, respectively, facilitating the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
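The full-space update described above (every state and control revised in each iteration) can be sketched on a toy deterministic MDP; the 3-state chain below is a hypothetical example of mine, not one from the paper:

```python
def q_iteration(states, actions, step, cost, gamma=0.9, sweeps=100):
    """Synchronous deterministic Q-learning: every (state, control) pair
    is updated in each iteration, rather than one pair at a time."""
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(sweeps):
        Q = {(s, a): cost(s, a) + gamma * min(Q[step(s, a), b] for b in actions)
             for s in states for a in actions}
    return Q

# Hypothetical 3-state chain; state 2 is an absorbing, zero-cost goal
states, actions = (0, 1, 2), ("L", "R")
step = lambda s, a: s if s == 2 else (max(s - 1, 0) if a == "L" else s + 1)
cost = lambda s, a: 0.0 if s == 2 else 1.0
Q = q_iteration(states, actions, step, cost)
```

With discount factor 0.9 the optimal cost-to-go from state 0 settles at 1 + 0.9 × 1 = 1.9.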
NASA Technical Reports Server (NTRS)
Macfarlane, J. J.
1992-01-01
We investigate the convergence properties of Lambda-acceleration methods for non-LTE radiative transfer problems in planar and spherical geometry. Matrix elements of the 'exact' Lambda-operator are used to accelerate convergence to a solution in which both the radiative transfer and atomic rate equations are simultaneously satisfied. Convergence properties of two-level and multilevel atomic systems are investigated for methods using: (1) the complete Lambda-operator, and (2) the diagonal of the Lambda-operator. We find that the convergence properties of the method utilizing the complete Lambda-operator are significantly better than those of the diagonal Lambda-operator method, often reducing the number of iterations needed for convergence by a factor of between two and seven. However, the overall computational time required for large-scale calculations - that is, those with many atomic levels and spatial zones - is typically a factor of a few larger for the complete Lambda-operator method, suggesting that the approach is best applied to problems in which convergence is especially difficult.
Three-dimensional unstructured grid Euler computations using a fully-implicit, upwind method
NASA Technical Reports Server (NTRS)
Whitaker, David L.
1993-01-01
A method has been developed to solve the Euler equations on a three-dimensional unstructured grid composed of tetrahedra. The method uses an upwind flow solver with a linearized, backward-Euler time integration scheme. Each time step results in a sparse linear system of equations which is solved by an iterative, sparse matrix solver. Local-time stepping, switched evolution relaxation (SER), preconditioning and reuse of the Jacobian are employed to accelerate the convergence rate. Implicit boundary conditions were found to be extremely important for fast convergence. Numerical experiments have shown that convergence rates comparable to that of a multigrid, central-difference scheme are achievable on the same mesh. Results are presented for several grids about an ONERA M6 wing.
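Among the acceleration devices listed, switched evolution relaxation (SER) has a particularly compact form: the CFL number is scaled by the ratio of successive nonlinear residual norms (a common variant, assumed here rather than taken from the paper):

```python
def ser_cfl(cfl, res_prev, res_now, cfl_max=1.0e6):
    """Switched evolution relaxation: as the nonlinear residual drops,
    the CFL number grows, driving the scheme toward a Newton-like iteration."""
    return min(cfl * res_prev / res_now, cfl_max)
```

A residual drop from 1e-2 to 1e-3 thus multiplies the CFL number by ten, so the pseudo-time step grows rapidly as convergence nears.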
Shock tube measurements of specific reaction rates in branched chain CH4-CO-O2 system
NASA Technical Reports Server (NTRS)
Brabbs, T. A.; Brokaw, R. S.
1974-01-01
Rate constants of two elementary bimolecular reactions involved in the oxidation of methane were determined by monitoring the exponential growth of CO flame band emission behind incident shocks in three suitably chosen gas mixtures.
A hybrid MD-kMC algorithm for folding proteins in explicit solvent.
Peter, Emanuel Karl; Shea, Joan-Emma
2014-04-14
We present a novel hybrid MD-kMC algorithm that is capable of efficiently folding proteins in explicit solvent. We apply this algorithm to the folding of a small protein, Trp-cage. Different kMC move sets that capture different possible rate-limiting steps are implemented. The first uses secondary structure formation as the relevant rate event (a combination of dihedral rotations and hydrogen-bond formation and breakage). The second uses tertiary structure formation events through formation of contacts via translational moves. Both methods fold the protein, but via different mechanisms and with different folding kinetics. The first method leads to folding via a structured helical state, with kinetics fit by a single exponential. The second method leads to folding via a collapsed loop, with kinetics poorly fit by single or double exponentials. In both cases, folding times are faster than experimentally reported values. The secondary and tertiary move sets are integrated in a third MD-kMC implementation, which now leads to folding of the protein via both pathways, with single- and double-exponential fits to the rates, and to folding rates in good agreement with experimental values. The competition between secondary and tertiary structure leads to a longer search for the helix-rich intermediate in the case of the first pathway, and to the emergence of a kinetically trapped, long-lived molten-globule collapsed state in the case of the second pathway. The algorithm presented not only captures experimentally observed folding intermediates and kinetics, but yields insights into the relative roles of local and global interactions in determining folding mechanisms and rates.
Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick
2012-01-01
Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4×10⁻⁴ for the exponential distribution and 2.3×10⁻⁴ for the mixed-exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5×10⁻⁴, but the mixed-exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (>≈5 km³ in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes >≈5 km³, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate the annual probability of a large eruption at 4.6×10⁻⁴. For erupted volumes ≥10 km³, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate the annual probability of a large eruption in the next year at 1.4×10⁻⁵.
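The annual probabilities quoted above follow from the exponential model, under which P(event within Δt) = 1 − e^(−Δt/τ) for a mean recurrence interval τ; a short sketch (the τ value below is back-computed from the quoted 1.4×10⁻⁴ rate and is illustrative only):

```python
import math

def event_probability(mean_recurrence_years, horizon_years=1.0):
    """Probability of at least one event within the horizon, assuming
    exponentially distributed inter-event times with the given mean."""
    return 1.0 - math.exp(-horizon_years / mean_recurrence_years)

# A mean recurrence of ~7000 yr reproduces the ~1.4e-4 annual probability
p = event_probability(7000.0)
```

For small Δt/τ this is ≈ Δt/τ, which is why an annual probability and an annual rate are nearly interchangeable at these values.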
Distributed support vector machine in master-slave mode.
Chen, Qingguo; Cao, Feilong
2018-05-01
It is well known that the support vector machine (SVM) is an effective learning algorithm. The alternating direction method of multipliers (ADMM) has emerged as a powerful technique for solving distributed optimisation models. This paper proposes a distributed SVM algorithm in a master-slave mode (MS-DSVM), which integrates a distributed SVM and ADMM acting in a master-slave configuration in which the master node and slave nodes are connected so that results can be broadcast. The distributed SVM is regarded as a regularised optimisation problem and modelled as a series of convex optimisation sub-problems that are solved by ADMM. Additionally, the over-relaxation technique is utilised to accelerate the convergence rate of the proposed MS-DSVM. Our theoretical analysis demonstrates that the proposed MS-DSVM has linear convergence, the fastest convergence rate attained by existing standard distributed ADMM algorithms. Numerical examples demonstrate that the convergence and accuracy of the proposed MS-DSVM are superior to those of existing methods under the ADMM framework. Copyright © 2018 Elsevier Ltd. All rights reserved.
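The over-relaxation referred to above replaces the x-update output with αx + (1 − α)z, α ∈ (1, 2), before the z- and dual updates; a scalar toy sketch on min (x − 3)² + |x| (my own illustration, not the paper's SVM formulation):

```python
def admm_overrelaxed(alpha=1.8, rho=1.0, iters=300):
    """Over-relaxed ADMM for min_x (x - 3)**2 + |x|, split as
    f(x) = (x - 3)**2 and g(z) = |z| with the constraint x = z."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (6.0 + rho * (z - u)) / (2.0 + rho)   # exact minimization of f-term
        xh = alpha * x + (1.0 - alpha) * z        # over-relaxation step
        v = xh + u
        # soft threshold at 1/rho (proximal operator of |z|/rho)
        z = max(v - 1.0 / rho, 0.0) if v > 0 else min(v + 1.0 / rho, 0.0)
        u += xh - z                               # dual update
    return z

solution = admm_overrelaxed()   # optimality: 2*(x - 3) + sign(x) = 0 -> x = 2.5
```

With α = 1.8 the iterates reach the minimizer noticeably sooner than an unrelaxed α = 1 run of the same loop.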
Convergence among cave catfishes: long-branch attraction and a Bayesian relative rates test.
Wilcox, T P; García de León, F J; Hendrickson, D A; Hillis, D M
2004-06-01
Convergence has long been of interest to evolutionary biologists. Cave organisms appear to be ideal candidates for studying convergence in morphological, physiological, and developmental traits. Here we report apparent convergence in two cave-catfishes that were described on morphological grounds as congeners: Prietella phreatophila and Prietella lundbergi. We collected mitochondrial DNA sequence data from 10 species of catfishes, representing five of the seven genera in Ictaluridae, as well as seven species from a broad range of siluriform outgroups. Analysis of the sequence data under parsimony supports a monophyletic Prietella. However, both maximum-likelihood and Bayesian analyses support polyphyly of the genus, with P. lundbergi sister to Ictalurus and P. phreatophila sister to Ameiurus. The topological difference between parsimony and the other methods appears to result from long-branch attraction between the Prietella species. Similarly, the sequence data do not support several other relationships within Ictaluridae supported by morphology. We develop a new Bayesian method for examining variation in molecular rates of evolution across a phylogeny.
Active Control of Wind Tunnel Noise
NASA Technical Reports Server (NTRS)
Hollis, Patrick (Principal Investigator)
1991-01-01
The need for an adaptive active control system was realized, since a wind tunnel is subjected to variations in air velocity, temperature, and turbulence, as well as to nonlinear effects. Among adaptive algorithms, the Least Mean Squares (LMS) algorithm, the simplest, has been used in Active Noise Control (ANC) systems by several researchers. However, Eriksson (1985) reported instability in an ANC system with an ER filter for random noise input. The Recursive Least Squares (RLS) algorithm, although computationally more complex than the LMS algorithm, has better convergence and stability properties. The ANC system in the present work was simulated using an FIR filter with an RLS algorithm for different inputs and for a number of plant models. Simulation results for the ANC system with acoustic feedback showed better robustness with the RLS algorithm than with the LMS algorithm for all types of inputs, and overall attenuation in the frequency domain was better with the RLS adaptive algorithm. Simulations with a more realistic plant model and an RLS adaptive algorithm showed a slower convergence rate than when the acoustic plant was treated as a pure delay; however, the attenuation properties of the simulated system with the modified plant were satisfactory. The effect of filter length on the rate of convergence and attenuation was also studied: the rate of convergence decreases with increasing filter length, whereas the attenuation increases. The final design of the ANC system was simulated and found to have a reasonable convergence rate and good attenuation properties for an input containing discrete frequencies and random noise.
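The LMS-versus-RLS comparison at the heart of this report is easy to reproduce on a toy system-identification task. The sketch below is a generic adaptive-filter demo, not the report's ANC simulation: both algorithms identify a hypothetical 4-tap FIR plant from white-noise input, and RLS converges in far fewer samples than LMS, at the cost of an O(M^2) covariance update per sample.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unknown FIR "plant" to identify (a stand-in for the plant in an ANC loop).
h_true = np.array([0.5, -0.3, 0.2, 0.1])
M = len(h_true)
N = 3000
x = rng.normal(size=N)                 # white-noise reference input
d = np.convolve(x, h_true)[:N]         # plant output (noiseless)

def lms(x, d, M, mu=0.01):
    """Least Mean Squares: stochastic-gradient weight update."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M, len(x)):
        u = x[n - M + 1:n + 1][::-1]   # most recent M input samples
        e[n] = d[n] - w @ u
        w += 2 * mu * e[n] * u
    return w, e

def rls(x, d, M, lam=0.999, delta=100.0):
    """Recursive Least Squares: exact least-squares solution updated recursively."""
    w = np.zeros(M)
    P = delta * np.eye(M)              # inverse input-correlation estimate
    e = np.zeros(len(x))
    for n in range(M, len(x)):
        u = x[n - M + 1:n + 1][::-1]
        k = P @ u / (lam + u @ P @ u)  # gain vector
        e[n] = d[n] - w @ u
        w += k * e[n]
        P = (P - np.outer(k, u @ P)) / lam
    return w, e
```

Both filters recover h_true here; the difference shows up in the transient: the RLS error collapses within a few times M samples, while the LMS error decays over hundreds of samples at this step size.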
Escape rate for nonequilibrium processes dominated by strong non-detailed balance force
NASA Astrophysics Data System (ADS)
Tang, Ying; Xu, Song; Ao, Ping
2018-02-01
Quantifying the escape rate from a metastable state is essential for understanding a wide range of dynamical processes. Kramers' classical rate formula is the product of an exponential function of the potential barrier height and a prefactor related to the friction coefficient. Although many applications of the rate formula focus on the exponential term, the prefactor can have a significant effect on the escape rate in certain parameter regions, such as the overdamped and underdamped limits. There has been continuing interest in understanding the effect of non-detailed balance on the escape rate; however, how the prefactor behaves under a strong non-detailed-balance force remains elusive. In this work, we find that the escape rate formula has a vanishing prefactor with decreasing friction strength in the strong non-detailed-balance limit. We obtain analytical solutions for specific examples and provide a derivation for more general cases. We further verify the result by simulations and propose a testable experimental system: a charged Brownian particle in an electromagnetic field. Our study demonstrates that special care is required in estimating the effect of the prefactor on the escape rate when the non-detailed-balance force dominates.
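The structure referred to here is, in the overdamped regime, k = (omega_a * omega_b / (2*pi*gamma)) * exp(-dU / (k_B*T)): an exponential barrier term times a friction-dependent prefactor. A minimal sketch of that detailed-balance baseline (units with k_B = 1; the well and barrier curvatures are illustrative placeholders, and none of this encodes the paper's non-detailed-balance result):

```python
import math

def kramers_rate(barrier, temperature, gamma, omega_well=1.0, omega_barrier=1.0):
    """Overdamped Kramers escape rate: prefactor * exp(-barrier / T).

    The prefactor omega_well*omega_barrier / (2*pi*gamma) carries the
    friction dependence discussed in the abstract; the exponential carries
    the barrier-height dependence. Units are chosen so that k_B = 1.
    """
    prefactor = omega_well * omega_barrier / (2.0 * math.pi * gamma)
    return prefactor * math.exp(-barrier / temperature)
```

The three qualitative dependencies are then immediate: the rate grows with temperature, shrinks with barrier height, and (in this overdamped form) shrinks with friction.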
Nonlinear analogue of the May−Wigner instability transition
Fyodorov, Yan V.; Khoruzhenko, Boris A.
2016-01-01
We study a system of N≫1 degrees of freedom coupled via a smooth homogeneous Gaussian vector field with both gradient and divergence-free components. In the absence of coupling, the system is exponentially relaxing to an equilibrium with rate μ. We show that, while increasing the ratio of the coupling strength to the relaxation rate, the system experiences an abrupt transition from a topologically trivial phase portrait with a single equilibrium into a topologically nontrivial regime characterized by an exponential number of equilibria, the vast majority of which are expected to be unstable. It is suggested that this picture provides a global view on the nature of the May−Wigner instability transition originally discovered by local linear stability analysis. PMID:27274077
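The classical linear May-Wigner transition that this work generalises can be checked numerically: a system relaxing at rate μ stays linearly stable while the random-coupling strength σ is below μ, and loses stability once σ exceeds it, since the circular law places the rightmost eigenvalue of the Jacobian near σ - μ. A sketch of that linear baseline (N, μ, σ are illustrative choices, and this does not reproduce the paper's nonlinear phase-portrait analysis):

```python
import numpy as np

rng = np.random.default_rng(2)

def max_real_eigenvalue(N, mu, sigma):
    """Rightmost eigenvalue of a linearly relaxing system (-mu*I) perturbed
    by Gaussian random coupling of strength sigma: J = -mu*I + (sigma/sqrt(N))*G.
    By the circular law this is approximately sigma - mu for large N."""
    G = rng.normal(size=(N, N))
    J = -mu * np.eye(N) + (sigma / np.sqrt(N)) * G
    return np.linalg.eigvals(J).real.max()
```

Weak coupling leaves the equilibrium stable; strong coupling pushes eigenvalues into the right half-plane, the linear signature of the instability transition.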
The release of alginate lyase from growing Pseudomonas syringae pathovar phaseolicola
NASA Technical Reports Server (NTRS)
Ott, C. M.; Day, D. F.; Koenig, D. W.; Pierson, D. L.
2001-01-01
Pseudomonas syringae pathovar phaseolicola, which produces alginate during stationary growth phase, displayed elevated extracellular alginate lyase activity during both mid-exponential and late-stationary growth phases of batch growth. Intracellular activity remained below 22% of the total activity during exponential growth, suggesting that alginate lyase has an extracellular function for this organism. Extracellular enzyme activity in continuous cultures, grown in either nutrient broth or glucose-simple salts medium, peaked at 60% of the washout rate, although nutrient broth-grown cultures displayed more than twice the activity per gram of cell mass. These results imply that growth rate, nutritional composition, or both initiate a release of alginate lyase from viable P. syringae pv. phaseolicola, which could modify its entrapping biofilm.
Convergence of Defect-Correction and Multigrid Iterations for Inviscid Flows
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2011-01-01
Convergence of multigrid and defect-correction iterations is comprehensively studied within different incompressible and compressible inviscid regimes on high-density grids. Good smoothing properties of the defect-correction relaxation have been shown using both a modified Fourier analysis and a more general idealized-coarse-grid analysis. Single-grid defect-correction iterations alone converge slowly on grids of medium density; the convergence is especially slow for near-sonic flows and for very low compressible Mach numbers. Additionally, the fast asymptotic convergence seen on medium-density grids deteriorates on high-density grids, where certain downstream-boundary modes are damped very slowly. The multigrid scheme accelerates the slow defect-correction iterations to the extent determined by the coarse-grid correction. The two-level asymptotic convergence rates are stable and significantly below one in most regimes, but convergence remains slow for near-sonic and very-low-Mach compressible flows. The multigrid solver has been applied to the NACA 0012 airfoil and to different flow regimes, such as near-tangency and stagnation, and certain convergence difficulties were encountered within stagnation regions. Nonetheless, for the airfoil flow with a sharp trailing edge, residuals converged quickly for subcritical flow on a sequence of grids; for supercritical flow, residuals converged more slowly on some intermediate grids than on the finest grid or the two coarsest grids.
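The coarse-grid-correction mechanism discussed above can be illustrated in the simplest possible setting: a two-level cycle for the 1-D Poisson equation with a weighted-Jacobi smoother. This is a generic textbook sketch, not the paper's defect-correction multigrid for inviscid flow; it shows the grid-independent per-cycle error reduction that a well-matched smoother plus coarse-grid correction delivers.

```python
import numpy as np

def poisson_matrix(n, h):
    """1-D Poisson operator -u'' on n interior points, Dirichlet BCs."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def weighted_jacobi(A, u, f, sweeps, omega=2.0 / 3.0):
    """Damped Jacobi smoother: strongly damps oscillatory error components."""
    D = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / D
    return u

def two_grid_cycle(u, f, n):
    """Pre-smooth, restrict residual, solve coarse problem, correct, post-smooth."""
    h = 1.0 / (n + 1)
    A = poisson_matrix(n, h)
    u = weighted_jacobi(A, u, f, sweeps=3)                # pre-smoothing
    r = f - A @ u                                         # fine-grid residual
    nc = (n - 1) // 2                                     # coarse interior points
    rc = np.array([0.25 * r[2*j] + 0.5 * r[2*j + 1] + 0.25 * r[2*j + 2]
                   for j in range(nc)])                   # full-weighting restriction
    ec = np.linalg.solve(poisson_matrix(nc, 2 * h), rc)   # exact coarse solve
    e = np.zeros(n)                                       # linear interpolation
    for j in range(nc):
        e[2*j + 1] = ec[j]
    for i in range(0, n, 2):
        left = ec[i // 2 - 1] if i // 2 - 1 >= 0 else 0.0
        right = ec[i // 2] if i // 2 < nc else 0.0
        e[i] = 0.5 * (left + right)
    u = u + e                                             # coarse-grid correction
    return weighted_jacobi(A, u, f, sweeps=3)             # post-smoothing
```

One such cycle typically cuts the algebraic error by an order of magnitude, which is the acceleration the abstract attributes to the coarse-grid correction.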
Exponentially damped Lévy flights, multiscaling, and exchange rates
NASA Astrophysics Data System (ADS)
Matsushita, Raul; Gleria, Iram; Figueiredo, Annibal; Rathie, Pushpa; Da Silva, Sergio
2004-02-01
We employ our previously suggested exponentially damped Lévy flight (Physica A 326 (2003) 544) to study the multiscaling properties of 30 daily exchange rates against the US dollar, together with a fictitious euro-dollar rate (Physica A 286 (2000) 353). Though multiscaling is absent in theory from both stable Lévy processes and abruptly truncated Lévy flights, it is characteristic of smoothly truncated Lévy flights (Phys. Lett. A 266 (2000) 282; Eur. Phys. J. B 4 (1998) 143). We have already defined a class of "quasi-stable" processes in connection with the finding that single scaling is pervasive among the dollar prices of foreign currencies (Physica A 323 (2003) 601). Here we show that the same holds for multiscaling. Our findings incidentally reinforce the case for the real-world relevance of Lévy flights in modeling financial prices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gyrya, Vitaliy; Mourad, Hashem Mohamed
We present a family of C1-continuous high-order Virtual Element Methods for the Poisson-Kirchhoff plate-bending problem. The convergence of the methods is tested on a variety of meshes, including rectangular, quadrilateral, and meshes obtained by edge removal (i.e., highly irregular meshes). The convergence rates are presented for all of these tests.
Hierarchically clustered adaptive quantization CMAC and its learning convergence.
Teddy, S D; Lai, E M K; Quek, C
2007-11-01
The cerebellar model articulation controller (CMAC) neural network (NN) is a well-established computational model of the human cerebellum. Nevertheless, there are two major drawbacks associated with the uniform quantization scheme of the CMAC network: (1) a constant output resolution across the entire input space and (2) the generalization-accuracy dilemma. Moreover, the size of the CMAC network is an exponential function of the number of inputs. Depending on the characteristics of the training data, only a small percentage of the entire set of CMAC memory cells is utilized; the efficient utilization of CMAC memory is therefore a crucial issue. One approach is to quantize the input space nonuniformly, but for existing nonuniformly quantized CMAC systems there is a tradeoff between memory efficiency and computational complexity. Inspired by the underlying organizational mechanism of the human brain, this paper presents a novel CMAC architecture named hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC). HCAQ-CMAC employs hierarchical clustering for the nonuniform quantization of the input space: it identifies significant input segments and subsequently allocates more memory cells to these regions. The stability of the HCAQ-CMAC network is theoretically guaranteed by the proof of its learning convergence. The performance of the proposed network is benchmarked against the original CMAC network, as well as two other existing CMAC variants, on two real-life applications: automated control of car maneuvers and modeling of human blood glucose dynamics. The experimental results demonstrate that the HCAQ-CMAC network offers an efficient memory allocation scheme and improves the generalization and accuracy of the network output, achieving better or comparable performance with smaller memory usage.
Index Terms: Cerebellar model articulation controller (CMAC), hierarchical clustering, hierarchically clustered adaptive quantization CMAC (HCAQ-CMAC), learning convergence, nonuniform quantization.
Tweedie convergence: a mathematical basis for Taylor's power law, 1/f noise, and multifractality.
Kendal, Wayne S; Jørgensen, Bent
2011-12-01
Plants and animals of a given species tend to cluster within their habitats in accordance with a power function between their mean density and the variance. This relationship, Taylor's power law, has been variously explained by ecologists in terms of animal behavior, interspecies interactions, demographic effects, and so on, all without consensus. Taylor's law also manifests within a wide range of other biological and physical processes, where it is sometimes referred to as fluctuation scaling and attributed to effects of the second law of thermodynamics. 1/f noise refers to power spectra that have an approximately inverse dependence on frequency. Like Taylor's law, these spectra arise from a wide range of biological and physical processes, without general agreement as to cause. One contemporary paradigm for 1/f noise has been based on the physics of self-organized criticality. We show here that Taylor's law (when derived from sequential data using the method of expanding bins) implies 1/f noise, and that both phenomena can be explained by a central limit-like effect that establishes the class of Tweedie exponential dispersion models as foci for this convergence. These Tweedie models are probabilistic models characterized by closure under additive and reproductive convolution as well as under scale transformation, and consequently manifest a variance-to-mean power function. We provide examples of Taylor's law, 1/f noise, and multifractality within the eigenvalue deviations of the Gaussian unitary and orthogonal ensembles, and show that these deviations conform to the Tweedie compound Poisson distribution. The Tweedie convergence theorem provides a unified mathematical explanation for the origin of Taylor's law and 1/f noise, applicable to a wide range of biological, physical, and mathematical processes, as well as to multifractality.
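Taylor's power law, variance ≈ a·mean^b, is straightforward to probe numerically. The sketch below checks the simplest Tweedie case, the Poisson distribution, for which the predicted exponent is b = 1; it illustrates the law itself, not the paper's convergence theorem.

```python
import numpy as np

rng = np.random.default_rng(3)

def taylor_exponent(means, n_plots=2000):
    """Fit log(variance) = log(a) + b*log(mean) across simulated populations.

    Each population's counts per plot are Poisson with the given mean, so the
    fitted Taylor exponent b should be close to 1 (variance equals mean)."""
    m, v = [], []
    for mu in means:
        counts = rng.poisson(mu, size=n_plots)
        m.append(counts.mean())
        v.append(counts.var())
    b, _ = np.polyfit(np.log(m), np.log(v), 1)
    return b
```

Replacing the Poisson draws with a clustered (e.g. negative binomial) distribution pushes the fitted exponent above 1, the regime typically reported for field populations.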
Inter-rater agreement of comorbid DSM-IV personality disorders in substance abusers.
Hesse, Morten; Thylstrup, Birgitte
2008-05-17
Little is known about the inter-rater agreement of personality disorder assessments in clinical settings. Clinicians rated 75 patients with substance use disorders on the DSM-IV criteria of personality disorders, presented in random order, and on rating scales representing the severity of each. Agreement was moderate (r = 0.55-0.67) for cluster B disorders rated with DSM-IV criteria, and discriminant validity was moderate for eight of the ten personality disorders. Convergent validity of the rating scales was only moderate for antisocial and narcissistic personality disorder. Dimensional ratings may be used in research studies and clinical practice with some caution, and may be collected as one of several sources of information to describe the personality of a patient.
Diffusion and Mixing in Globular Clusters
NASA Astrophysics Data System (ADS)
Meiron, Yohai; Kocsis, Bence
2018-03-01
Collisional relaxation describes the stochastic process with which a self-gravitating system near equilibrium evolves in phase-space due to the fluctuating gravitational field of the system. The characteristic timescale of this process is called the relaxation time. In this paper, we highlight the difference between two measures of the relaxation time in globular clusters: (1) the diffusion time with which the isolating integrals of motion (i.e., energy E and angular momentum magnitude L) of individual stars change stochastically and (2) the asymptotic timescale required for a family of orbits to mix in the cluster. More specifically, the former corresponds to the instantaneous rate of change of a star’s E or L, while the latter corresponds to the timescale for the stars to statistically forget their initial conditions. We show that the diffusion timescales of E and L vary systematically around the commonly used half-mass relaxation time in different regions of the cluster by a factor of ∼10 and ∼100, respectively, for more than 20% of the stars. We define the mixedness of an orbital family at any given time as the correlation coefficient between its E or L probability distribution functions and those of the whole cluster. Using Monte Carlo simulations, we find that mixedness converges asymptotically exponentially with a decay timescale that is ∼10 times the half-mass relaxation time.
NASA Astrophysics Data System (ADS)
Herda, Maxime; Rodrigues, L. Miguel
2018-03-01
The present contribution investigates the dynamics generated by the two-dimensional Vlasov-Poisson-Fokker-Planck equation for charged particles in a steady inhomogeneous background of opposite charges. We provide global-in-time estimates that are uniform with respect to initial data taken in a bounded set of a weighted L^2 space, and in which the dependencies on the mean free path τ and the Debye length δ are made explicit. In our analysis the mean free path covers the full range of possible values: from the regime of evanescent collisions τ → ∞ to the strongly collisional regime τ → 0. As a counterpart, the largeness of the Debye length, which enforces a weakly nonlinear regime, is used to close our nonlinear estimates. Accordingly, we pay special attention to relaxing as much as possible the τ-dependent constraint on δ that ensures exponential decay, with explicit τ-dependent rates, towards the stationary solution. In the strongly collisional limit τ → 0, we also examine all possible asymptotic regimes selected by a choice of observation time scale. Here also, our emphasis is on strong convergence, uniform with respect to time and to initial data in bounded sets of an L^2 space. Our proofs rely on a detailed study of the nonlinear elliptic equation defining stationary solutions and on a careful tracking and optimization of the parameter dependencies of hypocoercive/hypoelliptic estimates.
Second-order variational equations for N-body simulations
NASA Astrophysics Data System (ADS)
Rein, Hanno; Tamayo, Daniel
2016-07-01
First-order variational equations are widely used in N-body simulations to study how nearby trajectories diverge from one another. These allow for efficient and reliable determinations of chaos indicators such as the Maximal Lyapunov characteristic Exponent (MLE) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). In this paper we lay out the theoretical framework to extend the idea of variational equations to higher order. We explicitly derive the differential equations that govern the evolution of second-order variations in the N-body problem. Going to second order opens the door to new applications, including optimization algorithms that require the first and second derivatives of the solution, like the classical Newton's method. Typically, these methods have faster convergence rates than derivative-free methods. Derivatives are also required for Riemann manifold Langevin and Hamiltonian Monte Carlo methods which provide significantly shorter correlation times than standard methods. Such improved optimization methods can be applied to anything from radial-velocity/transit-timing-variation fitting to spacecraft trajectory optimization to asteroid deflection. We provide an implementation of first- and second-order variational equations for the publicly available REBOUND integrator package. Our implementation allows the simultaneous integration of any number of first- and second-order variational equations with the high-accuracy IAS15 integrator. We also provide routines to generate consistent and accurate initial conditions without the need for finite differencing.
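The claim that derivative-based methods typically converge faster than derivative-free ones is easy to see on a toy root-finding problem: Newton's method converges quadratically (the number of correct digits roughly doubles per step), while bisection gains only one bit per step. A minimal comparison, unrelated to the REBOUND implementation itself:

```python
def newton_sqrt(a, x0=1.0, tol=1e-12):
    """Newton's method on f(x) = x^2 - a: x <- x - f(x)/f'(x)."""
    x, steps = x0, 0
    while abs(x * x - a) > tol:
        x -= (x * x - a) / (2.0 * x)   # uses the derivative f'(x) = 2x
        steps += 1
    return x, steps

def bisect_sqrt(a, lo=0.0, hi=2.0, tol=1e-12):
    """Derivative-free bisection: halve the bracketing interval each step."""
    steps = 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid * mid < a:
            lo = mid
        else:
            hi = mid
        steps += 1
    return 0.5 * (lo + hi), steps
```

For sqrt(2), Newton reaches machine precision in about five steps from x0 = 1, whereas bisection needs roughly forty halvings of the interval [0, 2] to match it.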
Cai, Qing-Bo; Xu, Xiao-Wei; Zhou, Guorong
2017-01-01
In this paper, we construct a bivariate tensor-product generalization of Kantorovich-type Bernstein-Stancu-Schurer operators based on the concept of [Formula: see text]-integers. We obtain moments and central moments of these operators, give the rate of convergence using the complete modulus of continuity for the bivariate case, and establish a convergence theorem for Lipschitz continuous functions. We also give some graphs and numerical examples to illustrate the convergence properties of these operators for certain functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, E.W.
A class of Projected Discrete-Ordinates (PDO) methods is described for obtaining iterative solutions of discrete-ordinates problems with convergence rates comparable to those observed using Diffusion Synthetic Acceleration (DSA). The spatially discretized PDO solutions are generally not equal to the DSA solutions, but unlike DSA, which requires great care in the use of spatial discretizations to preserve stability, the PDO solutions remain stable and rapidly convergent with essentially arbitrary spatial discretizations. Numerical results are presented which illustrate the rapid convergence and the accuracy of solutions obtained using PDO methods with commonplace differencing methods.
Convergence of strain energy release rate components for edge-delaminated composite laminates
NASA Technical Reports Server (NTRS)
Raju, I. S.; Crews, J. H., Jr.; Aminpour, M. A.
1987-01-01
Strain energy release rates for edge-delaminated composite laminates were obtained using a quasi-three-dimensional finite element analysis. The problem of edge delamination at the -35/90 interfaces of an 8-ply composite laminate subjected to uniform axial strain was studied. The individual components of the strain energy release rate did not converge as the delamination-tip elements were made smaller. In contrast, the total strain energy release rate converged, remained unchanged as the delamination-tip elements were refined, and agreed with the value calculated using classical laminated plate theory. Studies of the near-field solutions for a delamination at an interface between two dissimilar isotropic or orthotropic plates showed that the imaginary part of the singularity is the cause of the nonconvergent behavior of the individual components. To evaluate the accuracy of the results, an 8-ply laminate with the delamination modeled in the thin resin layer that exists between the -35 and 90 plies was analyzed. Because the delamination then lies in a homogeneous isotropic material, the oscillatory component of the singularity vanishes.
Reynolds and Prandtl number scaling of viscous heating in isotropic turbulence
NASA Astrophysics Data System (ADS)
Pushkarev, Andrey; Balarac, Guillaume; Bos, Wouter J. T.
2017-08-01
Viscous heating is investigated using high-resolution direct numerical simulations. Scaling relations are derived and verified for different values of the Reynolds and Prandtl numbers. The scaling of the heat fluctuations is shown to depend on Lagrangian correlation times and on the scaling of dissipation-rate fluctuations. The convergence of the temperature spectrum to asymptotic scaling is observed to be slow, due to the broadband character of the temperature production spectrum and the slow convergence of the dissipation-rate spectrum to its asymptotic form.
NASA Technical Reports Server (NTRS)
Chang, Ching L.; Jiang, Bo-Nan
1990-01-01
A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formulation. The 2D Stokes problem is analyzed to define the product space and its inner product, and a priori estimates are derived for the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.
Temperature-dependent rate models of vascular cambium cell mortality
Matthew B. Dickinson; Edward A. Johnson
2004-01-01
We use two rate-process models to describe cell mortality at elevated temperatures as a means of understanding vascular cambium cell death during surface fires. In the models, cell death is caused by irreversible damage to cellular molecules that occurs at rates that increase exponentially with temperature. The models differ in whether cells show cumulative effects of...
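The rate-process idea described here, a death rate that increases exponentially with temperature, can be sketched with a first-order survival model, k(T) = A·exp(-E/RT) and surviving fraction exp(-k·t). The parameter values below are illustrative placeholders, not the paper's fitted constants.

```python
import math

def survival_fraction(temp_C, minutes, A=1e30, E=2e5, R=8.314):
    """First-order rate-process model of heat-induced cell mortality.

    Cells die at rate k(T) = A*exp(-E/(R*T)) (an Arrhenius-type law), so the
    fraction surviving after time t is exp(-k*t). A (1/min) and E (J/mol) are
    illustrative, chosen only to put the steep transition near 60 degrees C.
    """
    T = temp_C + 273.15
    k = A * math.exp(-E / (R * T))   # per-minute death rate
    return math.exp(-k * minutes)
```

With these placeholder constants a ten-minute exposure is nearly survivable at 50 degrees C but lethal to most cells at 70 degrees C, which captures the exponential temperature sensitivity the models rely on.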
ERIC Educational Resources Information Center
Monahan, Carlyn J.; Muchinsky, Paul M.
1985-01-01
The degree of convergent validity among four methods of identifying vocational preferences is assessed via the decision theoretic paradigm. Vocational preferences identified by Holland's Vocational Preference Inventory (VPI), a rating procedure, and ranking were compared with preferences identified from a policy-capturing model developed from an…
The Early Development Instrument: An Examination of Convergent and Discriminant Validity
ERIC Educational Resources Information Center
Hymel, Shelley; LeMare, Lucy; McKee, William
2011-01-01
The convergent and discriminant validity of the Early Development Instrument (EDI), a teacher-rated assessment of children's "school readiness", was investigated in a multicultural sample of 267 kindergarteners (53% male). Teachers' evaluations on the EDI, both overall and in five domains (physical health/well-being, social competence,…
A multilevel approach to examining cephalopod growth using Octopus pallidus as a model.
Semmens, Jayson; Doubleday, Zoë; Hoyle, Kate; Pecl, Gretta
2011-08-15
Many aspects of octopus growth dynamics are poorly understood, particularly in relation to sub-adult or adult growth, muscle fibre dynamics and repro-somatic investment. The growth of five-month-old Octopus pallidus cultured in the laboratory was investigated under three temperature regimes over a 12-week period: seasonally increasing temperatures (14-18°C); seasonally decreasing temperatures (18-14°C); and a constant temperature midway between seasonal peaks (16°C). Differences in somatic growth at the whole-animal level, muscle tissue structure and rate of gonad development were investigated. Continuous exponential growth was observed, both at a group and at an individual level, and there was no detectable effect of temperature on whole-animal growth rate. Juvenile growth rate (from 1 to 156 days) was also monitored prior to the controlled experiment; exponential growth was observed, but at a significantly faster rate than in the older experimental animals, suggesting that O. pallidus exhibits a double-exponential, two-phase growth pattern. There was considerable variability in size-at-age even between individuals growing under identical thermal regimes. Animals exposed to seasonally decreasing temperatures exhibited a higher rate of gonad development than animals exposed to increasing temperatures; however, this did not coincide with a detectable decline in somatic growth rate or mantle condition. The ongoing production of new mitochondria-poor and mitochondria-rich muscle fibres (hyperplasia) was observed, indicated by a decreased or stable mean muscle fibre diameter concurrent with an increase in whole-body size. Animals from both seasonal temperature regimes demonstrated higher rates of new mitochondria-rich fibre generation than those from the constant temperature regime, but this difference was not reflected in a difference in growth rate at the whole-body level.
This is the first study to record ongoing hyperplasia in the muscle tissue of an octopus species, and provides further insight into the complex growth dynamics of octopus.
NASA Astrophysics Data System (ADS)
Ismail, A.; Hassan, Noor I.
2013-09-01
Cancer is one of the principal causes of death in Malaysia. This study was performed to determine the pattern of cancer death rates at a public hospital in Malaysia over an 11-year period (2001 to 2011), to determine the best-fitted univariate model for forecasting the rate of cancer deaths, and to forecast the rates for the next two years (2012 to 2013). The medical records of patients with cancer who died at this hospital over the 11-year period were reviewed, a total of 663 cases. The cancers were classified according to the 10th Revision of the International Classification of Diseases (ICD-10). Data collected included the socio-demographic background of patients, such as registration number, age, gender, ethnicity, ward and diagnosis. Data entry and analysis were accomplished using SPSS 19.0 and Minitab 16.0. The five univariate models used were the Naïve with Trend Model, the Average Percent Change Model (ACPM), Single Exponential Smoothing, Double Exponential Smoothing and Holt's Method. Over the 11 years, Malay patients had the highest percentage of cancer deaths (88.10%) compared with other ethnic groups, with more males (51.30%) than females. Lung and breast cancer accounted for the most cancer deaths in each gender. About 29.60% of the patients who died of cancer were aged 61 years and above. The best univariate model for forecasting the rate of cancer deaths was the Single Exponential Smoothing technique with an alpha of 0.10. The forecast of the rate of cancer deaths is horizontal, or flat: the forecasted mortality trend remains at 6.84% from January 2012 to December 2013. Government and private sectors and non-governmental organizations need to highlight cancer issues, especially lung and breast cancers, to the public through campaigns using mass media, electronic media, posters and pamphlets in an attempt to decrease the rate of cancer deaths in Malaysia.
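Single exponential smoothing with alpha = 0.10, the technique selected above, reduces to a one-line level update, and its forecast is flat at the last smoothed level, which is exactly the "horizontal" forecast the study reports. A generic sketch (not the study's Minitab fit or its data):

```python
def single_exponential_smoothing(series, alpha=0.10):
    """Level update l_t = alpha*y_t + (1 - alpha)*l_{t-1}.

    The h-step-ahead forecast of single exponential smoothing is constant at
    the final level, so all future periods receive the same forecast value.
    """
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level   # forecast for every future period
```

Because the model carries no trend term, forecasting 24 months ahead (January 2012 to December 2013) simply repeats this single value, matching the flat 6.84% trend described in the abstract.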
NASA Astrophysics Data System (ADS)
Street, Rachel A.; Duckham, S. Craig; Hewitt, C. Nicholas
1996-10-01
Isoprene and monoterpene emission rates were measured from Sitka spruce (Picea sitchensis Bong.) with a dynamic flow-through branch enclosure, both in the laboratory and in the field in the United Kingdom. In the laboratory, emission rates of isoprene comprised over 94% of the identified VOC species and were exponentially related to temperature over a period of one day; this exponential relationship broke down at ~33°C. Field measurements were taken on five sampling days in 1992 and 1993 in Grizedale Forest, Cumbria. Total emission rates were in the range 36-3771 ng g(-1) h(-1). Relative emissions were more variable than suggested by laboratory measurements, with monoterpenes contributing at least 64% of the total emissions in most cases. There was significant variation in the basal emission rate both across the growing season and between different ages of vegetation, the causes of which are as yet unknown. Total emission rates in July 1993 were estimated to be between 0.01 and 0.27% of assimilated carbon.
Sleep, John; Irving, Malcolm; Burton, Kevin
2005-03-15
The time course of isometric force development following photolytic release of ATP in the presence of Ca(2+) was characterized in single skinned fibres from rabbit psoas muscle. Pre-photolysis force was minimized using apyrase to remove contaminating ATP and ADP. After the initial force rise induced by ATP release, a rapid shortening ramp terminated by a step stretch to the original length was imposed, and the time course of the subsequent force redevelopment was again characterized. Force development after ATP release was accurately described by a lag phase followed by one or two exponential components. At 20 degrees C, the lag was 5.6 +/- 0.4 ms (s.e.m., n = 11), and the force rise was well fitted by a single exponential with rate constant 71 +/- 4 s(-1). Force redevelopment after shortening-restretch began from about half the plateau force level, and its single-exponential rate constant was 68 +/- 3 s(-1), very similar to that following ATP release. When fibres were activated by the addition of Ca(2+) in ATP-containing solution, force developed more slowly, and the rate constant for force redevelopment following shortening-restretch reached a maximum value of 38 +/- 4 s(-1) (n = 6) after about 6 s of activation. This lower value may be associated with progressive sarcomere disorder at elevated temperature. Force development following ATP release was much slower at 5 degrees C than at 20 degrees C. The rate constant of a single-exponential fit to the force rise was 4.3 +/- 0.4 s(-1) (n = 22), and this was again similar to that after shortening-restretch in the same activation at this temperature, 3.8 +/- 0.2 s(-1). We conclude that force development after ATP release and shortening-restretch are controlled by the same steps in the actin-myosin ATPase cycle. 
The present results and much previous work on mechanical-chemical coupling in muscle can be explained by a kinetic scheme in which force is generated by a rapid conformational change bracketed by two biochemical steps with similar rate constants, ATP hydrolysis and the release of inorganic phosphate, both of which combine to control the rate of force development.
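The single-exponential description used throughout, a lag followed by F(t) = F_max·(1 - exp(-k·(t - lag))), can be illustrated with synthetic data at the reported 20 degrees C values (k = 71 s^-1, lag = 5.6 ms); the rate constant is then recovered from the slope of ln(F_max - F). This is a generic fitting sketch, not the study's analysis pipeline.

```python
import numpy as np

# Synthetic single-exponential force rise with the reported 20 C parameters.
k_true, lag, F_max = 71.0, 0.0056, 1.0          # s^-1, s, normalised force
t = np.linspace(lag, 0.1, 200)                  # time points after the lag
F = F_max * (1.0 - np.exp(-k_true * (t - lag)))

# ln(F_max - F) = ln(F_max) - k*(t - lag), so -slope of a linear fit gives k.
mask = F < 0.99 * F_max                         # avoid log of values near zero
slope, _ = np.polyfit(t[mask], np.log(F_max - F[mask]), 1)
k_est = -slope
```

On noisy experimental records a nonlinear least-squares fit of the full model is the more robust choice; the log-linear trick above is only exact for clean data.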
Automatic selection of arterial input function using tri-exponential models
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David
2009-02-01
Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, which employs knowledge about artery structure and the temporal properties of dynamic signal intensity in DCE-MRI. The second stage is AIF model fitting and selection: a tri-exponential model is fitted to every candidate AIF using the Levenberg-Marquardt method, and the best-fitted AIF is selected. Our method has been applied to DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rate in artery segmentation for 19 cases was 89.6% +/- 15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2 = 0.946, P(T<=t) = 0.09). Our imaging-based tri-exponential AIF model demonstrated a significant improvement over a previously proposed bi-exponential model.
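The tri-exponential model has the form C(t) = a1·exp(-m1·t) + a2·exp(-m2·t) + a3·exp(-m3·t). The paper fits all six parameters with Levenberg-Marquardt; as a dependency-free sketch, the code below fixes illustrative decay rates and recovers only the amplitudes, which enter the model linearly and so can be found by ordinary least squares. All numbers are made up for the demo.

```python
import numpy as np

# Tri-exponential AIF model with assumed (illustrative) decay rates m.
m = np.array([0.1, 1.0, 10.0])            # decay rates, 1/min (placeholders)
a_true = np.array([3.0, 5.0, 8.0])        # amplitudes to recover
t = np.linspace(0.0, 10.0, 100)           # sampling times, min

basis = np.exp(-np.outer(t, m))           # design matrix, one column per exponential
C = basis @ a_true                        # noiseless synthetic AIF samples

# For fixed decay rates the model is linear in the amplitudes.
a_est, *_ = np.linalg.lstsq(basis, C, rcond=None)
```

Fitting the decay rates as well makes the problem nonlinear, which is where an iterative scheme such as Levenberg-Marquardt, as used in the paper, comes in.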
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1986-01-01
The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODEs) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated step-size-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
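The accuracy argument above can be made concrete on the simplest stiff test equation y' = −λy. A minimal sketch, not the EPISODE/LSODE machinery: a classical (polynomial-type) explicit Euler step versus an exponential-fitted step, with a step size deliberately chosen outside the explicit method's stability limit h < 2/λ.

```python
import math

# Stiff linear decay y' = -lam * y, y(0) = 1; step h violates explicit
# Euler's stability bound (lam * h = 5 > 2).
lam, h, steps = 50.0, 0.1, 20
y_exact = math.exp(-lam * h * steps)

y_euler = 1.0        # classical polynomial-interpolant step
y_expfit = 1.0       # exponential-fitted step (exact for linear decay)
for _ in range(steps):
    y_euler += h * (-lam * y_euler)     # amplified by |1 - lam*h| = 4 per step
    y_expfit *= math.exp(-lam * h)      # matches the exact decay to rounding
```

The exponential-fitted step reproduces the asymptotic exponential decay for the same per-step work, while the explicit polynomial step diverges; this is the behavior the report exploits for induction and equilibration phases.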
Sparsistency and Rates of Convergence in Large Covariance Matrix Estimation.
Lam, Clifford; Fan, Jianqing
2009-01-01
This paper studies the sparsistency and rates of convergence for estimating sparse covariance and precision matrices based on penalized likelihood with nonconvex penalty functions. Here, sparsistency refers to the property that all parameters that are zero are actually estimated as zero with probability tending to one. Depending on the application, sparsity may occur in the covariance matrix, its inverse or its Cholesky decomposition. We study these three sparsity exploration problems under a unified framework with a general penalty function. We show that the rates of convergence for these problems under the Frobenius norm are of order (s_n log p_n / n)^(1/2), where s_n is the number of nonzero elements, p_n is the size of the covariance matrix and n is the sample size. This explicitly spells out that the contribution of high dimensionality is merely a logarithmic factor. The conditions on the rate with which the tuning parameter λ_n goes to 0 have been made explicit and compared under different penalties. As a result, for the L1-penalty, to guarantee both sparsistency and the optimal rate of convergence, the number of nonzero elements should be small: s_n' = O(p_n) at most, among O(p_n²) parameters, for estimating a sparse covariance or correlation matrix, sparse precision or inverse correlation matrix, or sparse Cholesky factor, where s_n' is the number of nonzero off-diagonal elements. With the SCAD or hard-thresholding penalty functions, on the other hand, there is no such restriction.
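A minimal NumPy sketch of the simplest relative of these estimators: entrywise soft-thresholding of the sample covariance, the L1-style shrinkage analogue for covariance sparsity. This is not the paper's penalized-likelihood procedure or its SCAD/hard-thresholding variants; the threshold level of order sqrt(log p / n), echoing the rate in the abstract, and all numbers are illustrative assumptions.

```python
import numpy as np

def soft_threshold_cov(X, lam):
    # Soft-threshold off-diagonal entries of the sample covariance by lam,
    # keeping the diagonal intact.
    S = np.cov(X, rowvar=False)
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

rng = np.random.default_rng(1)
p = 10
true_cov = np.eye(p)
true_cov[0, 1] = true_cov[1, 0] = 0.6        # a single nonzero off-diagonal

X = rng.multivariate_normal(np.zeros(p), true_cov, size=2000)

# Threshold of order sqrt(log p / n); the constant 2.0 is a tuning choice.
lam = 2.0 * np.sqrt(np.log(p) / X.shape[0])
est = soft_threshold_cov(X, lam)
```

On this toy problem the estimator zeroes out almost all truly-zero off-diagonals (sparsistency in miniature) while retaining the genuine nonzero entry, shrunk by roughly lam.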
Stoch, G; Ylinen, E E; Birczynski, A; Lalowicz, Z T; Góra-Marek, K; Punkkinen, M
2013-02-01
A new method is introduced for analyzing deuteron spin-lattice relaxation in molecular systems with a broad distribution of activation energies and correlation times. In such samples the magnetization recovery is strongly non-exponential but can be fitted quite accurately by three exponentials. The considered system may consist of molecular groups with different mobility. For each group a Gaussian distribution of the activation energy is introduced. By assuming three parameters for every subsystem (the mean activation energy E₀, the distribution width σ, and the pre-exponential factor τ₀ in the Arrhenius equation defining the correlation time), the relaxation rate is calculated for every part of the distribution. Experiment-based limiting values allow the rates to be grouped into three classes. For each class the relaxation rate and weight are calculated and compared with experiment. The parameters E₀, σ and τ₀ are determined iteratively by repeating the whole cycle many times. The temperature dependence of the deuteron relaxation was observed between 20 K and 170 K in three samples containing CD₃OH (200% and 100% loading) and CD₃OD (200%) in NaX zeolite, and analyzed by the described method. The obtained parameters, equal for all three samples, characterize the methyl and hydroxyl mobilities of the methanol molecules at two different locations.
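The core mechanism above, a Gaussian spread of activation energies feeding Arrhenius correlation times and hence a distribution of relaxation rates, can be sketched numerically. All parameter values, the BPP-type spectral density, and the Larmor frequency below are illustrative placeholders, not the fitted zeolite parameters.

```python
import numpy as np

kB = 8.617e-5                                   # Boltzmann constant, eV/K
E0, sigma, tau0, T = 0.12, 0.02, 1e-12, 100.0   # placeholder E0 (eV), width, tau0 (s), T (K)

# Gaussian distribution of activation energies and Arrhenius correlation times.
E = np.linspace(E0 - 4 * sigma, E0 + 4 * sigma, 401)
w = np.exp(-(E - E0) ** 2 / (2 * sigma ** 2))
w /= w.sum()
tau = tau0 * np.exp(E / (kB * T))

omega = 2 * np.pi * 46e6                        # placeholder deuteron Larmor frequency
R1 = tau / (1 + (omega * tau) ** 2)             # BPP-type rate, up to a coupling constant
R1 /= R1.max()                                  # normalize fastest rate to 1

# Net magnetization recovery: a weighted sum of exponentials, hence non-exponential.
t = np.logspace(-3, 4, 200)
recovery = 1 - (w[None, :] * np.exp(-np.outer(t, R1))).sum(axis=1)
```

Because the recovery is a continuous superposition of exponentials with rates spread over orders of magnitude, no single exponential fits it, which is why the paper resorts to a three-exponential approximation with class-wise rates and weights.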
Numerical Tests of the Cosmic Censorship Conjecture via Event-Horizon Finding
NASA Astrophysics Data System (ADS)
Okounkova, Maria; Ott, Christian; Scheel, Mark; Szilagyi, Bela
2015-04-01
We present the current state of our research on the possibility of naked singularity formation in gravitational collapse, numerically testing both the cosmic censorship conjecture and the hoop conjecture. The former posits that all singularities lie behind an event horizon, while the latter conjectures that this is true if collapse occurs from an initial configuration with all circumferences C ≤ 4πM. We reconsider the classical Shapiro & Teukolsky (1991) prolate spheroid naked singularity scenario. Using the exponentially error-convergent Spectral Einstein Code (SpEC), we simulate the collapse of collisionless matter and probe for apparent horizons. We propose a new method to probe for the existence of an event horizon by following characteristics from regions near the singularity, using methods commonly employed in Cauchy characteristic extraction. This research was partially supported by NSF under Award No. PHY-1404569.
NASA Technical Reports Server (NTRS)
Balachandar, S.; Yuen, D. A.; Reuteler, D. M.
1995-01-01
We have applied spectral-transform methods to study three-dimensional thermal convection with temperature-dependent viscosity. The viscosity varies exponentially with the form exp(-BT), where B controls the viscosity contrast and T is temperature. Solutions for high Rayleigh numbers, up to an effective Ra of 6.25 × 10⁶, have been obtained for an aspect ratio of 5×5×1 and a viscosity contrast of 25. Solutions show the localization of toroidal velocity fields with increasing vigor of convection to a coherent network of shear-zones. Viscous dissipation increases with Rayleigh number and is particularly strong in regions of convergent flows and shear deformation. A time-varying depth-dependent mean-flow is generated because of the correlation between laterally varying viscosity and velocity gradients.
Use of Picard and Newton iteration for solving nonlinear ground water flow equations
Mehl, S.
2006-01-01
This study examines the use of Picard and Newton iteration to solve the nonlinear, saturated ground water flow equation. Here, a simple three-node problem is used to demonstrate the convergence difficulties that can arise when solving the nonlinear, saturated ground water flow equation in both homogeneous and heterogeneous systems with and without nonlinear boundary conditions. For these cases, the characteristic types of convergence patterns are examined. Viewing these convergence patterns as orbits of an attractor in a dynamical system provides further insight. It is shown that the nonlinearity that arises from nonlinear head-dependent boundary conditions can cause more convergence difficulties than the nonlinearity that arises from flow in an unconfined aquifer. Furthermore, the effects of damping on both convergence and convergence rate are investigated. It is shown that no single strategy is effective for all problems, and that understanding the pitfalls and merits of several methods can be helpful in overcoming convergence difficulties. Results show that Picard iterations can be a simple and effective method for the solution of nonlinear, saturated ground water flow problems.
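The damped Picard strategy discussed above can be sketched on a one-node unconfined-aquifer toy problem (hypothetical numbers, not the paper's three-node test case). The conductances depend on the unknown head through the saturated thickness; Picard iteration lags (freezes) them at the previous iterate, and a damping factor ω < 1 blends old and new heads.

```python
# One interior node between two fixed heads; Dupuit-style unconfined flow,
# so conductance is proportional to the average saturated thickness.
K, dx = 1.0, 100.0          # hydraulic conductivity, node spacing (illustrative)
hL, hR = 10.0, 2.0          # fixed boundary heads

def picard(h0, omega=1.0, tol=1e-10, maxit=500):
    h = h0
    for k in range(maxit):
        cL = K * (hL + h) / (2 * dx)      # conductances frozen at current head
        cR = K * (h + hR) / (2 * dx)
        h_new = (cL * hL + cR * hR) / (cL + cR)   # linear solve for this iterate
        if abs(h_new - h) < tol:
            return h_new, k
        h = (1 - omega) * h + omega * h_new       # damping step
    return h, maxit

h_full, it_full = picard(5.0, omega=1.0)
h_damp, it_damp = picard(5.0, omega=0.5)
```

For this toy balance the fixed point satisfies h² = (hL² + hR²)/2, the Dupuit result, so both the undamped and damped iterations should converge to sqrt(52) ≈ 7.211; damping trades iterations for robustness on harder problems.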
Cosmic Reionization On Computers: Numerical and Physical Convergence
Gnedin, Nickolay Y.
2016-04-01
In this paper I show that simulations of reionization performed under the Cosmic Reionization On Computers (CROC) project do converge in space and mass, albeit rather slowly. A fully converged solution (for a given star formation and feedback model) can be determined at a level of precision of about 20%, but such a solution is useless in practice, since achieving it in production-grade simulations would require a large set of runs at various mass and spatial resolutions, and computational resources for such an undertaking are not yet readily available. In order to make progress in the interim, I introduce a weak convergence correction factor in the star formation recipe, which allows one to approximate the fully converged solution with finite resolution simulations. The accuracy of weakly converged simulations approaches a comparable, ~20% level of precision for star formation histories of individual galactic halos and other galactic properties that are directly related to star formation rates, like stellar masses and metallicities. Yet other properties of model galaxies, for example, their HI masses, are recovered in the weakly converged runs only within a factor of two.
Non-exponential decoherence of radio-frequency resonance rotation of spin in storage rings
NASA Astrophysics Data System (ADS)
Saleev, A.; Nikolaev, N. N.; Rathmann, F.; Hinder, F.; Pretz, J.; Rosenthal, M.
2017-08-01
Precision experiments, such as the search for electric dipole moments of charged particles using radio-frequency spin rotators in storage rings, demand that the exact spin resonance condition be maintained for several thousand seconds. Synchrotron oscillations in the stored beam modulate the spin tune of off-central particles, moving it off the perfect resonance condition set for central particles on the reference orbit. Here, we report an analytic description of how synchrotron oscillations lead to non-exponential decoherence of the radio-frequency resonance driven up-down spin rotations. This non-exponential decoherence is shown to be accompanied by a nontrivial walk of the spin phase. We also comment on the sensitivity of the decoherence rate to the harmonics of the radio-frequency spin rotator and on a possibility to check predictions of decoherence-free magic energies.
Present-day uplift of the western Alps.
Nocquet, J-M; Sue, C; Walpersdorf, A; Tran, T; Lenôtre, N; Vernant, P; Cushing, M; Jouanne, F; Masson, F; Baize, S; Chéry, J; van der Beek, P A
2016-06-27
Collisional mountain belts grow as a consequence of continental plate convergence and eventually disappear under the combined effects of gravitational collapse and erosion. Using a decade of GPS data, we show that the western Alps are currently characterized by zero horizontal velocity boundary conditions, offering the opportunity to investigate orogen evolution at the time of cessation of plate convergence. We find no significant horizontal motion within the belt, but GPS and levelling measurements independently show a regional pattern of uplift reaching ~2.5 mm/yr in the northwestern Alps. Unless a low viscosity crustal root under the northwestern Alps locally enhances the vertical response to surface unloading, the summed effects of isostatic responses to erosion and glaciation explain at most 60% of the observed uplift rates. Rock-uplift rates corrected for transient glacial isostatic adjustment contributions likely exceed erosion rates in the northwestern Alps. In the absence of active convergence, the observed surface uplift must result from deep-seated processes.
An automatic multigrid method for the solution of sparse linear systems
NASA Technical Reports Server (NTRS)
Shapira, Yair; Israeli, Moshe; Sidi, Avram
1993-01-01
An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDEs is presented. This version is based on the structure of the algebraic system solely, and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method is equal to that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is conserved for other classes of problems: non-symmetric, hyperbolic (even with closed characteristics) and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are achieved also for anisotropic and discontinuous problems, and for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found better than known strategies.
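As a baseline for the classical multigrid convergence rate the abstract compares against, here is a minimal geometric two-grid V-cycle for the 1D Poisson equation -u'' = f with Dirichlet boundaries. This is the textbook method, not the paper's automatic algebraic variant, which builds its hierarchy from the matrix alone.

```python
import numpy as np

def residual(u, f, h):
    # r = f - A u for the standard 3-point Laplacian.
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps=3, w=2/3):
    # Damped Jacobi smoothing.
    for _ in range(sweeps):
        u[1:-1] += w * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1] - 2 * u[1:-1])
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h)                        # pre-smooth
    r = residual(u, f, h)
    rc = r[::2].copy()                         # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    nc, hc = rc.size, 2 * h
    # Direct solve of the coarse error equation A_c e_c = r_c.
    A = (np.diag(2 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / hc**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # linear prolongation
    u += e                                     # coarse-grid correction
    return jacobi(u, f, h)                     # post-smooth

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)               # exact solution u = sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
```

A handful of cycles drives the algebraic error below the discretization error, which is the grid-independent convergence behavior the experiments in the abstract measure against.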
Topics in global convergence of density estimates
NASA Technical Reports Server (NTRS)
Devroye, L.
1982-01-01
The problem of estimating a density f on R^d from a sample X_1, ..., X_n of independent identically distributed random vectors is critically examined, and some recent results in the field are reviewed. The following statements are qualified: (1) for any sequence of density estimates f_n, an arbitrarily slow rate of convergence of E(∫|f_n − f|) to 0 is possible; (2) in theoretical comparisons of density estimates, ∫|f_n − f| should be used, and not ∫|f_n − f|^p with p > 1; and (3) for most reasonable nonparametric density estimates, either ∫|f_n − f| converges (and then the convergence is in the strongest possible sense for all f), or it does not converge (even in the weakest possible sense for a single f). There is no intermediate situation.
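The L1 criterion ∫|f_n − f| favored above can be evaluated numerically for a concrete estimator. A sketch with a Gaussian kernel density estimate of a standard normal density; the bandwidth rule, the sample sizes, and the integration grid are illustrative choices, not taken from the paper.

```python
import numpy as np

def kde(x, sample, h):
    # Gaussian kernel density estimate f_n evaluated at the grid points x.
    u = (x[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (sample.size * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
x = np.linspace(-6.0, 6.0, 1201)
f = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)   # true density: standard normal

l1s = []
for n in (50, 5000):
    sample = rng.standard_normal(n)
    h = 1.06 * n ** (-1 / 5)                   # Silverman-style bandwidth choice
    fn = kde(x, sample, h)
    # Riemann-sum approximation of the L1 error integral |f_n - f|.
    l1s.append(float(np.sum(np.abs(fn - f)) * (x[1] - x[0])))
```

The L1 error shrinks as n grows, but statement (1) above warns that no uniform rate over all f can be guaranteed; this well-behaved normal target is the easy case.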
Rate laws of the self-induced aggregation kinetics of Brownian particles
NASA Astrophysics Data System (ADS)
Mondal, Shrabani; Sen, Monoj Kumar; Baura, Alendu; Bag, Bidhan Chandra
2016-03-01
In this paper we have studied the self-induced aggregation kinetics of Brownian particles in the presence of both multiplicative and additive noises. In addition to the drift due to the self-aggregation process, the environment may induce a drift term in the presence of a multiplicative noise, so there is an interplay between the two drift terms. This interplay may account qualitatively for the appearance of the different laws of the aggregation process. At low strength of white multiplicative noise, the cluster number decreases as a Gaussian function of time. If the noise strength becomes appreciably large, the variation of cluster number with time is well fitted by a monoexponentially decaying function of time. For the additive-noise-driven case, the decrease of cluster number can be described by a power law, but for the colored multiplicative-noise-driven process the cluster number decays multi-exponentially. We have also explored how the rate constant (in the monoexponential decay case) depends on the strength of interference of the noises and their intensity, and how the long-time structure factor depends on the strength of the cross-correlation (CC) between the additive and multiplicative noises.
The tunneling effect for a class of difference operators
NASA Astrophysics Data System (ADS)
Klein, Markus; Rosenberger, Elke
We analyze a general class of self-adjoint difference operators H𝜀 = T𝜀 + V𝜀 on ℓ2((𝜀ℤ)d), where V𝜀 is a multi-well potential and 𝜀 is a small parameter. We give a coherent review of our results on tunneling up to new sharp results on the level of complete asymptotic expansions (see [30-35]).Our emphasis is on general ideas and strategy, possibly of interest for a broader range of readers, and less on detailed mathematical proofs. The wells are decoupled by introducing certain Dirichlet operators on regions containing only one potential well. Then the eigenvalue problem for the Hamiltonian H𝜀 is treated as a small perturbation of these comparison problems. After constructing a Finslerian distance d induced by H𝜀, we show that Dirichlet eigenfunctions decay exponentially with a rate controlled by this distance to the well. It follows with microlocal techniques that the first n eigenvalues of H𝜀 converge to the first n eigenvalues of the direct sum of harmonic oscillators on ℝd located at several wells. In a neighborhood of one well, we construct formal asymptotic expansions of WKB-type for eigenfunctions associated with the low-lying eigenvalues of H𝜀. These are obtained from eigenfunctions or quasimodes for the operator H𝜀, acting on L2(ℝd), via restriction to the lattice (𝜀ℤ)d. Tunneling is then described by a certain interaction matrix, similar to the analysis for the Schrödinger operator (see [22]), the remainder is exponentially small and roughly quadratic compared with the interaction matrix. We give weighted ℓ2-estimates for the difference of eigenfunctions of Dirichlet-operators in neighborhoods of the different wells and the associated WKB-expansions at the wells. In the last step, we derive full asymptotic expansions for interactions between two “wells” (minima) of the potential energy, in particular for the discrete tunneling effect. Here we essentially use analysis on phase space, complexified in the momentum variable. 
These results are as sharp as the classical results for the Schrödinger operator in [22].
Forgotten Fundamentals of the Energy Crisis
ERIC Educational Resources Information Center
Bartlett, Albert A.
1978-01-01
Explains, using exponential mathematics, the effect of growth on the rate of consumption of energy resources. Concludes that we are running out of energy resources at a greater rate than many people think. Lists the few options left, such as conservation by stopping growth of consumption, recycling, and research to develop alternative sources. (GA)
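The exponential arithmetic behind this argument can be sketched in two lines: the doubling time at steady percentage growth, and the lifetime of a fixed resource consumed at an exponentially growing rate (obtained by integrating r0·e^(kt) up to the reserve size). The numerical example is illustrative, not taken from the article.

```python
import math

def doubling_time(percent):
    # Steady growth at `percent` per year doubles in ln(2)/k years (~70/percent).
    return math.log(2) / (percent / 100)

def expiration_time(R, r0, percent):
    # Lifetime of reserve R consumed at rate r0 growing at `percent` per year:
    # integral of r0*exp(k*t) equals R  =>  Te = ln(k*R/r0 + 1) / k.
    k = percent / 100
    return math.log(k * R / r0 + 1) / k

# Illustrative: a 1000-unit reserve used at 1 unit/yr lasts 1000 yr with zero
# growth, but dramatically less under 5%/yr exponential growth in consumption.
t = expiration_time(1000.0, 1.0, 5.0)
```

The collapse from a 1000-year static lifetime to well under a century at modest growth is exactly the counterintuitive point the article stresses.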
NASA Astrophysics Data System (ADS)
Regalla, Christine
Here we investigate the relationships between outer forearc subsidence, the timing and kinematics of upper plate deformation and plate convergence rate in Northeast Japan to evaluate the role of plate boundary dynamics in driving forearc subsidence. The Northeastern Japan margin is one of the first non-accretionary subduction zones where regional forearc subsidence was argued to reflect tectonic erosion of large volumes of upper crustal rocks. However, we propose that a significant component of forearc subsidence could be the result of dynamic changes in plate boundary geometry. We provide new constraints on the timing and kinematics of deformation along inner forearc faults, new analyses of the evolution of outer forearc tectonic subsidence, and updated calculations of plate convergence rate. These data collectively reveal a temporal correlation between the onset of regional forearc subsidence, the initiation of upper plate extension, and an acceleration in local plate convergence rate. A similar analysis of the kinematic evolution of the Tonga, Izu-Bonin, and Mariana subduction zones indicates that the temporal correlations observed in Japan are also characteristic of these three non-accretionary margins. Comparison of these data with published geodynamic models suggests that forearc subsidence is the result of temporal variability in slab geometry due to changes in slab buoyancy and plate convergence rate. These observations suggest that a significant component of forearc subsidence at these four margins is not the product of tectonic erosion, but instead reflects changes in plate boundary dynamics driven by variable plate kinematics.
A robust, finite element model for hydrostatic surface water flows
Walters, R.A.; Casulli, V.
1998-01-01
A finite element scheme is introduced for the 2-dimensional shallow water equations using semi-implicit methods in time. A semi-Lagrangian method is used to approximate the effects of advection. A wave equation is formed at the discrete level such that the equations decouple into an equation for surface elevation and a momentum equation for the horizontal velocity. The convergence rates and relative computational efficiency are examined with the use of three test cases representing various degrees of difficulty. A test with a polar-quadrant grid investigates the response to local grid-scale forcing and the presence of spurious modes, a channel test case establishes convergence rates, and a field-scale test case examines problems with highly irregular grids.
Temperament Measures of African-American Infants: Change and Convergence with Age
ERIC Educational Resources Information Center
Worobey, John; Islas-Lopez, Maria
2009-01-01
Studies of infant temperament are inconsistent with regard to convergence across measurement sources. In addition, little published work is available that describes temperament in minority infants. In this study, measures of temperament at three and six months were made for 24 African-American infants. Although maternal ratings of activity and…
Distributed Sensing and Processing: A Graphical Model Approach
2005-11-30
Shows that Ramanujan graph topologies maximize the convergence rate of distributed detection consensus algorithms, improving by over three orders of magnitude on small-world-type network designs. Ramanujan graphs, for which there are explicit algebraic constructions, have large eigenratios and converge much faster than structured graphs. Subject terms: Ramanujan graphs, sensor network topology.
Convergent and Divergent Validity of the Grammaticality and Utterance Length Instrument
ERIC Educational Resources Information Center
Castilla-Earls, Anny; Fulcher-Rood, Katrina
2018-01-01
Purpose: This feasibility study examines the convergent and divergent validity of the Grammaticality and Utterance Length Instrument (GLi), a tool designed to assess the grammaticality and average utterance length of a child's prerecorded story retell. Method: Three raters used the GLi to rate audio-recorded story retells from 100 English-speaking…
Naming game with biased assimilation over adaptive networks
NASA Astrophysics Data System (ADS)
Fu, Guiyuan; Zhang, Weidong
2018-01-01
The dynamics of a two-word naming game incorporating the influence of biased assimilation over an adaptive network is investigated in this paper. First, an extended naming game with biased assimilation (NGBA) is proposed. The hearer in NGBA accepts the received information in a biased manner: with a predefined probability, he may refuse to accept the conveyed word from the speaker if it differs from his current memory. Second, the adaptive network is formulated by rewiring the links. Theoretical analysis shows that the population in NGBA will eventually reach global consensus on either A or B. Numerical simulations show that the stronger the biased assimilation on both words, the slower the convergence, while stronger biased assimilation on only one word can slightly accelerate convergence; that increasing the population size from a relatively small value slows convergence considerably, although this effect becomes minor once the population is large; and that adaptively reconnecting the existing links can greatly accelerate convergence, especially on sparsely connected networks.
Li, Huailiang; Yang, Yigang; Wang, Qibiao; Tuo, Xianguo; Julian Henderson, Mark; Courtois, Jérémie
2017-12-01
The fluence rate of cosmic-ray-induced neutrons (CRINs) varies with many environmental factors. While many simulation and experimental studies have focused mainly on the altitude variation, how CRINs vary with geomagnetic cutoff rigidity (which is related to latitude and longitude) has not been well characterized. In this article, a double-exponential fitting function F = (A₁e^(−A₂·CR) + A₃)·e^(B₁·Al) is proposed to evaluate the CRINs' fluence rate as it varies with geomagnetic cutoff rigidity CR and altitude Al. The fit achieves R² up to 0.9954, and the CRINs' fluence rate at an arbitrary location (latitude, longitude and altitude) can be easily evaluated from the proposed function. Field measurements of the CRINs' fluence rate and H*(10) rate on Mt. Emei and Mt. Bowa were carried out using FHT-762 and LB 6411 neutron probes, respectively, and the evaluation shows that the fitting function agrees well with the measurements.
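The fitted form has the shape sketched below: fluence decreasing exponentially with cutoff rigidity and increasing exponentially with altitude. The coefficients here are hypothetical placeholders for illustration only; the paper's fitted values are not given in the abstract.

```python
import math

# Placeholder coefficients, NOT the paper's fitted values.
A1, A2, A3, B1 = 0.6, 0.3, 0.4, 0.7

def crin_fluence(CR_GV, Al_km):
    # F = (A1*exp(-A2*CR) + A3) * exp(B1*Al): the abstract's double-exponential
    # form, with CR in GV and Al in km (unit choices assumed here).
    return (A1 * math.exp(-A2 * CR_GV) + A3) * math.exp(B1 * Al_km)
```

With any positive coefficients of this form, the fluence rate falls monotonically with cutoff rigidity at fixed altitude and rises monotonically with altitude at fixed rigidity, matching the qualitative behavior the fit encodes.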
The effect of zealots on the rate of consensus achievement in complex networks
NASA Astrophysics Data System (ADS)
Kashisaz, Hadi; Hosseini, S. Samira; Darooneh, Amir H.
2014-05-01
In this study, we investigate the role of zealots in the voting process on both scale-free (SF) and Watts-Strogatz (WS) networks. We observe that inflexible individuals are very effective in consensus achievement and in setting the rate of the ordering process in complex networks. Zealots make the magnetization of the system vary exponentially with time. On SF networks, increasing the zealot population Z exponentially increases the rate of consensus achievement: the time needed for the system to reach a desired magnetization shows a power-law dependence on Z, as does the decay time of the order parameter. We also investigate the role of the zealots' degree in the rate of the ordering process and, finally, analyze the effect of network randomness on the efficiency of zealots. Moving from a regular to a random network, the rewiring probability P increases; we show that with increasing P, the efficiency of zealots in reducing the consensus achievement time increases. The rate of consensus is compared with the rate of ordering for different rewiring probabilities of WS networks.
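The qualitative effect of zealots on ordering time can be sketched with a minimal voter model. The paper works on scale-free and Watts-Strogatz networks; a complete graph is assumed here purely to keep the sketch short, and all sizes and targets are illustrative.

```python
import random

def time_to_magnetization(N, Z, target=0.5, seed=0, max_steps=10**6):
    # Voter model with Z zealots pinned to +1 (indices 0..Z-1 never update).
    rng = random.Random(seed)
    state = [1] * Z + [-1] * (N - Z)
    s = 2 * Z - N                              # running sum of opinions
    for step in range(1, max_steps + 1):
        i = rng.randrange(Z, N)                # a non-zealot adopts...
        j = rng.randrange(N)                   # ...a random agent's opinion
        s += state[j] - state[i]
        state[i] = state[j]
        if s >= target * N:                    # magnetization threshold reached
            return step
    return max_steps

def mean_time(N, Z, runs=5):
    return sum(time_to_magnetization(N, Z, seed=r) for r in range(runs)) / runs

t_few = mean_time(200, 10)                     # 5% zealots
t_many = mean_time(200, 60)                    # 30% zealots
```

More zealots produce a stronger drift toward their opinion and a much shorter time to the target magnetization, the trend the abstract quantifies as a power-law dependence on Z.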
Fenton, Tanis R; Anderson, Diane; Groh-Wargo, Sharon; Hoyos, Angela; Ehrenkranz, Richard A; Senterre, Thibault
2018-05-01
To examine how well growth velocity recommendations for preterm infants fit current growth references: Fenton 2013, Olsen 2010, INTERGROWTH 2015, and the World Health Organization Growth Standard 2006. The Average (2-point), Exponential (2-point), and Early (1-point) method weight gains were calculated for 1-, 4-, 8-, 12-, and 16-week time periods. The growth references' weekly velocities (g/kg/d, grams/day and cm/week) were illustrated graphically with the frequently quoted 15 g/kg/d, 10-30 grams/day and 1 cm/week rates superimposed. The 15 g/kg/d and 1 cm/week growth velocity rates were calculated from 24-50 weeks and superimposed on the Fenton and Olsen preterm growth charts. The Average and Exponential g/kg/d estimates showed close agreement for all ages (range 5.0-18.9 g/kg/d), while the Early method yielded values as high as 41 g/kg/d. All three preterm growth references were similar to the 15 g/kg/d rate at 34 weeks, but rates were higher before and lower after. For grams/day, the growth references changed from 10 to 30 grams/day over 24-33 weeks. Head growth rates generally fit the 1 cm/week velocity for 23-30 weeks, and length growth rates fit for 37-40 weeks. The calculated g/kg/d curves deviated from the growth charts, first downward, then steeply crossing the median curves near term. Human growth is not constant through gestation and early infancy. The frequently quoted 15 g/kg/d, 10-30 grams/day and 1 cm/week rates fit current growth references only for limited time periods. Rates of 15-20 g/kg/d (calculated using the average or exponential methods) are a reasonable goal for infants of 23-36 weeks, but not beyond.
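The three velocity methods compared above can be sketched in the forms commonly used in the preterm-growth literature (the exact formulas used by the paper are assumed here to follow that convention): weight W1 in grams on day D1 to W2 on day D2, result in g/kg/d.

```python
import math

def avg_method(W1, W2, D1, D2):
    # "Average" 2-point method: daily gain over the mean of the two weights.
    return (W2 - W1) / (D2 - D1) / ((W1 + W2) / 2) * 1000

def exp_method(W1, W2, D1, D2):
    # "Exponential" 2-point method: assumes exponential growth over the interval.
    return 1000 * math.log(W2 / W1) / (D2 - D1)

def early_method(W1, W2, D1, D2):
    # "Early" 1-point method: daily gain over the starting weight only,
    # which is what inflates estimates for longer intervals.
    return (W2 - W1) / (D2 - D1) / W1 * 1000

# Illustrative interval: 1000 g -> 1300 g over 14 days.
a = avg_method(1000.0, 1300.0, 0, 14)
e = exp_method(1000.0, 1300.0, 0, 14)
y = early_method(1000.0, 1300.0, 0, 14)
```

On this example the Average and Exponential methods agree closely while the Early method reads higher, mirroring the paper's finding that the first two track each other and the 1-point method overestimates.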
NASA Astrophysics Data System (ADS)
Tang, Huanfeng; Huang, Zaiyin; Xiao, Ming; Liang, Min; Chen, Liying; Tan, XueCai
2017-09-01
The activities, selectivities, and stabilities of nanoparticles in heterogeneous reactions are size-dependent. To investigate how particle size and temperature influence kinetic parameters in heterogeneous reactions, cubic nano-Cu2O particles of four different sizes in the range of 40-120 nm were controllably synthesized. In situ microcalorimetry was used to obtain thermodynamic data on the reaction of Cu2O with aqueous HNO3 and, combined with thermodynamic principles and kinetic transition-state theory, the relevant reaction kinetic parameters were evaluated. The size dependences of the kinetic parameters are discussed in terms of the established kinetic model and the experimental results. It was found that the reaction rate constants increased with decreasing particle size. Accordingly, the apparent activation energy, pre-exponential factor, activation enthalpy, activation entropy, and activation Gibbs energy decreased with decreasing particle size. The reaction rate constants and activation Gibbs energies increased with increasing temperature. Moreover, the logarithms of the apparent activation energies, pre-exponential factors, and rate constants were found to be linearly related to the reciprocal of particle size, consistent with the kinetic models. The influence of particle size on these reaction kinetic parameters may be explained as follows: the apparent activation energy is affected by the partial molar enthalpy, the pre-exponential factor by the partial molar entropy, and the reaction rate constant by the partial molar Gibbs energy.
Dou, Haiyang; Li, Yueqiu; Choi, Jaeyeong; Huo, Shuying; Ding, Liang; Shen, Shigang; Lee, Seungho
2016-09-23
The capability of asymmetrical flow field-flow fractionation (AF4) coupled with UV/VIS, multiangle light scattering (MALS) and quasi-elastic light scattering (QELS) (AF4-UV-MALS-QELS) for separation and characterization of egg yolk plasma was evaluated. The accuracy of the hydrodynamic radius (Rh) obtained from QELS and from AF4 theory (using both the simplified and the full expression of the AF4 retention equation) was discussed. The conformation of low density lipoprotein (LDL) and its aggregates in egg yolk plasma was assessed from the ratio of the radius of gyration (Rg) to Rh, together with results from bio-transmission electron microscopy (Bio-TEM). The results indicate that the full retention equation is more relevant than the simplified version for Rh determination at high cross flow rates. The Rh from online QELS is reliable only over a specific range of sample concentration. The effect of a programmed cross flow rate (linear and exponential decay) on the analysis of egg yolk plasma was also investigated. It was found that the use of an exponentially decaying cross flow rate not only reduces the AF4 analysis time for egg yolk plasma, but also provides better resolution than either a constant or a linearly decaying cross flow rate. The combination of exponentially decaying cross flow AF4-UV-MALS-QELS and the full retention equation proved to be a useful method for the separation and characterization of egg yolk plasma.
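The three cross-flow programs compared above can be sketched as simple rate profiles. The flow values and run time below are illustrative assumptions, not the paper's method settings; the exponential time constant is chosen so all decaying programs end at the same final cross flow.

```python
import math

F0, Fend, t_end = 3.0, 0.15, 30.0            # mL/min start/end, minutes (illustrative)
tau = t_end / math.log(F0 / Fend)            # so the exponential ends exactly at Fend

def constant(t):
    return F0

def linear(t):
    # Linear decay from F0 to Fend over t_end.
    return F0 + (Fend - F0) * t / t_end

def exponential(t):
    # Exponential decay: drops fastest early in the run, which is what
    # shortens elution of the larger species and the total analysis time.
    return F0 * math.exp(-t / tau)
```

Mid-run the exponential program is already well below the linear one, which is the qualitative reason it shortens analysis time while preserving early-run resolution at high cross flow.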
Exponential decline of deep-sea ecosystem functioning linked to benthic biodiversity loss.
Danovaro, Roberto; Gambi, Cristina; Dell'Anno, Antonio; Corinaldesi, Cinzia; Fraschetti, Simonetta; Vanreusel, Ann; Vincx, Magda; Gooday, Andrew J
2008-01-08
Recent investigations suggest that biodiversity loss might impair the functioning and sustainability of ecosystems. Although deep-sea ecosystems are the most extensive on Earth, represent the largest reservoir of biomass, and host a large proportion of undiscovered biodiversity, the data needed to evaluate the consequences of biodiversity loss on the ocean floor are completely lacking. Here, we present a global-scale study based on 116 deep-sea sites that relates benthic biodiversity to several independent indicators of ecosystem functioning and efficiency. We show that deep-sea ecosystem functioning is exponentially related to deep-sea biodiversity and that ecosystem efficiency is also exponentially linked to functional biodiversity. These results suggest that a higher biodiversity supports higher rates of ecosystem processes and an increased efficiency with which these processes are performed. The exponential relationships presented here, being consistent across a wide range of deep-sea ecosystems, suggest that mutually positive functional interactions (ecological facilitation) can be common in the largest biome of our biosphere. Our results suggest that a biodiversity loss in deep-sea ecosystems might be associated with exponential reductions of their functions. Because the deep sea plays a key role in ecological and biogeochemical processes at a global scale, this study provides scientific evidence that the conservation of deep-sea biodiversity is a priority for a sustainable functioning of the world's oceans.
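An exponential biodiversity-functioning relationship of the kind reported here, F = c·exp(m·B), is typically fitted by log-linear regression. The sketch below uses synthetic numbers (not the study's data) to show the approach.

```python
import numpy as np

# Synthetic example: ecosystem functioning F rising exponentially with
# biodiversity B, i.e. F = c * exp(m * B), with multiplicative noise.
rng = np.random.default_rng(0)
B = np.linspace(5, 50, 30)                    # e.g. species richness (made up)
F = 2.0 * np.exp(0.08 * B) * rng.lognormal(0, 0.05, B.size)

# Log-transforming linearizes the model: ln F = ln c + m * B.
slope, intercept = np.polyfit(B, np.log(F), 1)
print(round(slope, 2))  # recovers m close to the true 0.08
```

The exponential form means each unit of biodiversity lost removes a constant *fraction* of function, so losses accelerate in absolute terms, which is the abstract's central warning.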
Preliminary study of the use of the STAR-100 computer for transonic flow calculations
NASA Technical Reports Server (NTRS)
Keller, J. D.; Jameson, A.
1977-01-01
An explicit method for solving the transonic small-disturbance potential equation is presented. This algorithm, which is suitable for the new vector-processor computers such as the CDC STAR-100, is compared to successive line over-relaxation (SLOR) on a simple test problem. The convergence rate of the explicit scheme is slower than that of SLOR; however, the efficiency of the explicit scheme on the STAR-100 computer is sufficient to overcome the slower convergence rate and allow an overall speedup compared to SLOR on the CYBER 175 computer.
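The trade-off described here, a fully explicit (vectorizable) sweep converging more slowly per iteration than line over-relaxation, can be sketched on a 1-D model problem. This is an illustrative Jacobi-vs-SOR comparison on the Laplace equation, not the paper's transonic solver.

```python
import numpy as np

# 1-D Laplace model problem u'' = 0, u(0)=0, u(1)=1; exact solution u(x)=x.
n = 50
h = 1.0 / (n + 1)
exact = np.linspace(h, 1 - h, n)

def jacobi_sweep(u):
    # Fully explicit update: every point uses old values -> vectorizes trivially.
    v = np.zeros_like(u)
    v[1:-1] = 0.5 * (u[:-2] + u[2:])
    v[0] = 0.5 * u[1]            # left boundary value is 0
    v[-1] = 0.5 * (u[-2] + 1.0)  # right boundary value is 1
    return v

def sor_sweep(u, w=1.8):
    # Successive over-relaxation: sequential, but far better per-sweep decay.
    v = u.copy()
    for i in range(n):
        left = v[i - 1] if i > 0 else 0.0
        right = v[i + 1] if i < n - 1 else 1.0
        v[i] += w * (0.5 * (left + right) - v[i])
    return v

uj = np.zeros(n)
us = np.zeros(n)
for _ in range(200):
    uj = jacobi_sweep(uj)
    us = sor_sweep(us)

err_j = np.max(np.abs(uj - exact))
err_s = np.max(np.abs(us - exact))
print(err_s < err_j)  # relaxation reaches a smaller error in the same sweeps
```

On a vector machine the explicit sweep's cost per iteration can drop enough to win overall, which is exactly the STAR-100 result the abstract reports.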
Comparison of cursive models for handwriting instruction.
Karlsdottir, R
1997-12-01
The efficiency of four different cursive handwriting styles as model alphabets for handwriting instruction of primary school children was compared in a cross-sectional field experiment from Grade 3 to 6. Efficiency was assessed in terms of the average handwriting speed developed by the children and the average rate of convergence of the children's handwriting to the style of their model. It was concluded that styles with regular entry stroke patterns give the steadiest rate of convergence to the model, and that styles with short ascenders and descenders and strokes of moderate curvature give the highest handwriting speed.
Convergent evolution of marine mammals is associated with distinct substitutions in common genes
Zhou, Xuming; Seim, Inge; Gladyshev, Vadim N.
2015-01-01
Phenotypic convergence is thought to be driven by parallel substitutions coupled with natural selection at the sequence level. Multiple independent evolutionary transitions of mammals to an aquatic environment offer an opportunity to test this thesis. Here, whole genome alignment of coding sequences identified widespread parallel amino acid substitutions in marine mammals; however, the majority of these changes were not unique to these animals. Conversely, we report that candidate aquatic adaptation genes, identified by signatures of likelihood convergence and/or elevated ratio of nonsynonymous to synonymous nucleotide substitution rate, are characterized by very few parallel substitutions and exhibit distinct sequence changes in each group. Moreover, no significant positive correlation was found between likelihood convergence and positive selection in all three marine lineages. These results suggest that convergence in protein coding genes associated with aquatic lifestyle is mainly characterized by independent substitutions and relaxed negative selection. PMID:26549748
Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2011-07-07
Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. 
Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
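The baseline MLEM update that OSEM, NMML, and PCG all accelerate can be written in a few lines. The sketch below is a toy 2D-free illustration on synthetic data, not the paper's reconstruction code; it demonstrates the key EM property used in the comparison, monotone increase of the Poisson log-likelihood.

```python
import numpy as np

# Toy PET problem: A holds detection probabilities (bin i, voxel j),
# y the measured counts, and the loop is the multiplicative EM update.
rng = np.random.default_rng(1)
A = rng.random((40, 10))
A /= A.sum(axis=0)                    # each voxel's probabilities sum to 1
x_true = rng.uniform(1.0, 5.0, 10)    # true activity (made up)
y = rng.poisson(A @ x_true).astype(float)

def loglik(x):
    ax = A @ x
    return float(np.sum(y * np.log(ax) - ax))

x = np.ones(10)                       # strictly positive start
ll_start = loglik(x)
sens = A.sum(axis=0)                  # sensitivity image (here all ones)
for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / sens # MLEM: forward project, ratio, backproject

# EM is monotone: the Poisson log-likelihood never decreases.
print(loglik(x) > ll_start)
```

OSEM replaces the full backprojection with a cycle over subsets of the rows of A, which speeds early convergence but, as the abstract notes, can cycle between subset solutions rather than converge to the single ML estimate.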
Bishai, David; Opuni, Marjorie
2009-01-01
Background Time trends in infant mortality for the 20th century show a curvilinear pattern that most demographers have assumed to be approximately exponential. Virtually all cross-country comparisons and time series analyses of infant mortality have studied the logarithm of infant mortality to account for the curvilinear time trend. However, there is no evidence that the log transform is the best fit for infant mortality time trends. Methods We use maximum likelihood methods to determine the best transformation to fit time trends in infant mortality reduction in the 20th century and to assess the importance of the proper transformation in identifying the relationship between infant mortality and gross domestic product (GDP) per capita. We apply the Box-Cox transform to infant mortality rate (IMR) time series from 18 countries to identify the best fitting value of λ for each country and for the pooled sample. For each country, we test the value of λ against the null that λ = 0 (logarithmic model) and against the null that λ = 1 (linear model). We then demonstrate the importance of selecting the proper transformation by comparing regressions of ln(IMR) on same year GDP per capita against Box-Cox transformed models. Results Based on chi-squared test statistics, infant mortality decline is best described as an exponential decline only for the United States. For the remaining 17 countries we study, IMR decline is neither best modelled as a logarithmic nor as a linear process. Imposing a logarithmic transform on IMR can lead to bias in fitting the relationship between IMR and GDP per capita. Conclusion The assumption that IMR declines are exponential is enshrined in the Preston curve and in nearly all cross-country as well as time series analyses of IMR data since Preston's 1975 paper, but this assumption is seldom correct. Statistical analyses of IMR trends should assess the robustness of findings to transformations other than the log transform. PMID:19698144
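The Box-Cox selection procedure can be sketched as a profile-likelihood search: transform the series for each candidate λ, regress on year, and keep the λ that maximizes the likelihood (including the Jacobian of the transform). The series below is synthetic, generated so the true transform is logarithmic (λ = 0); it is not the paper's data.

```python
import numpy as np

# Synthetic IMR series declining exponentially with log-normal noise.
rng = np.random.default_rng(2)
t = np.arange(100, dtype=float)                      # years since 1900
y = 150.0 * np.exp(-0.03 * t + rng.normal(0, 0.05, t.size))

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-12 else (y**lam - 1.0) / lam

def profile_ll(lam):
    # Regress the transformed series on time, then evaluate the
    # profile log-likelihood including the transform's Jacobian term.
    z = boxcox(y, lam)
    X = np.vstack([np.ones_like(t), t]).T
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    rss = np.sum((z - X @ beta) ** 2)
    n = y.size
    return -0.5 * n * np.log(rss / n) + (lam - 1.0) * np.sum(np.log(y))

lams = np.linspace(-1, 1, 201)
best = lams[np.argmax([profile_ll(lm) for lm in lams])]
print(abs(best) < 0.2)  # the search recovers lambda near 0 (log model)
```

Testing λ = 0 and λ = 1 against the profile maximum via a likelihood-ratio statistic is then a one-line chi-squared comparison, which is the form of test the abstract reports.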
NASA Astrophysics Data System (ADS)
Lin, C.; Accoroni, S.; Glibert, P. M.
2016-02-01
Mixotrophic grazing activity can be promoted in response to nutrient-enriched prey, and this nutritional strategy is thought to be a factor in promoting growth of some toxic microalgae under nutrient conditions that are limiting for the mixotroph. However, it is unclear how the nutritional condition of the predator or the prey affects mixotrophic metabolism and, consequently, potential effects on the mixotroph that may, in turn, affect early life stages of bivalves. In laboratory experiments, we measured the grazing rate of Karlodinium veneficum on Rhodomonas salina as prey, under varied nitrogen (N): phosphorus (P) stoichiometry of both predator and prey, and we compared the nutritionally-regulated effects of K. veneficum on larvae of the eastern oyster (Crassostrea virginica). Nutritionally sufficient, N-deficient, and P-deficient K. veneficum at two growth stages (exponential and stationary) were mixed with nutritionally sufficient, N-deficient, and P-deficient R. salina, in a factorial experimental design. Regardless of its nutritional condition, K. veneficum showed significantly higher grazing rates with N-rich prey in exponential stage and P-rich prey in stationary stage. Maximum grazing rates of N-deficient K. veneficum on N-rich prey in exponential stage were 20-fold larger than those of nutritionally sufficient K. veneficum on N-rich prey. Significantly increased larval mortality was observed in 2-day exposures to monocultures of P-deficient K. veneficum at both stages. When mixed with P-deficient (or N-rich) prey, the presence of K. veneficum resulted in significantly enhanced larval mortality, but this was not the case for N-deficient K. veneficum in exponential stage. Mixotrophic feeding may not only provide K. veneficum with the nutritional flexibility needed for bloom persistence but also appears to increase its negative effects on the survival of oyster larvae.
Reliability enhancement of Navier-Stokes codes through convergence acceleration
NASA Technical Reports Server (NTRS)
Merkle, Charles L.; Dulikravich, George S.
1995-01-01
Methods for enhancing the reliability of Navier-Stokes computer codes through improving convergence characteristics are presented. Improving these characteristics decreases the likelihood of code unreliability and user interventions in a design environment. The problem referred to as 'stiffness' in the governing equations for propulsion-related flowfields is investigated, particularly in regard to common sources of equation stiffness that lead to convergence degradation of CFD algorithms. Von Neumann stability theory is employed as a tool to study the convergence difficulties involved. Based on the stability results, improved algorithms are devised to ensure efficient convergence in different situations. A number of test cases are considered to confirm a correlation between stability theory and numerical convergence. Examples of turbulent and reacting flow are presented, and a generalized form of the preconditioning matrix is derived to handle these problems, i.e., problems involving additional differential equations for describing the transport of turbulent kinetic energy, dissipation rate and chemical species. Algorithms for unsteady computations are considered. The extension of the preconditioning techniques and algorithms derived for Navier-Stokes computations to three-dimensional flow problems is discussed. New methods to accelerate the convergence of iterative schemes for the numerical integration of systems of partial differential equations are developed, with a special emphasis on the acceleration of convergence on highly clustered grids.
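Von Neumann stability analysis in its simplest setting can be sketched for the forward-time, centered-space (FTCS) scheme on the heat equation; this generic example (not the paper's Navier-Stokes analysis) shows how an amplification factor yields a stability bound.

```python
import numpy as np

# FTCS on u_t = u_xx has per-step amplification factor
#   G(theta) = 1 - 2 r (1 - cos theta),  r = dt / dx**2.
# Von Neumann stability requires |G(theta)| <= 1 for all theta,
# which holds iff r <= 1/2.
def max_amplification(r, n=1000):
    theta = np.linspace(-np.pi, np.pi, n)
    return np.max(np.abs(1.0 - 2.0 * r * (1.0 - np.cos(theta))))

print(max_amplification(0.4) <= 1.0, max_amplification(0.6) > 1.0)
```

Stiff systems have modes whose stable step sizes differ by orders of magnitude; preconditioning rescales the equations so the amplification factors of all modes are comparable, which is the route to the convergence acceleration described above.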
An efficient quantum algorithm for spectral estimation
NASA Astrophysics Data System (ADS)
Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth
2017-03-01
We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
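The classical (non-quantum) matrix pencil method that the quantum algorithm accelerates can be sketched directly: build a Hankel matrix from the samples, form the shifted-column pencil, and read the signal poles off its generalized eigenvalues. The signal below is a single damped sinusoid with made-up parameters.

```python
import numpy as np

# One real damped sinusoid contributes two complex-conjugate poles
# z = exp(-d +/- i*w); the matrix pencil recovers them from samples.
d, w = 0.1, 0.5
k = np.arange(60)
y = np.exp(-d * k) * np.cos(w * k)

M = 2                                    # model order (number of poles)
# Hankel data matrix; Y1, Y2 are its unshifted and shifted column blocks.
Y = np.array([y[i:i + M + 1] for i in range(len(y) - M)])
Y1, Y2 = Y[:, :-1], Y[:, 1:]
poles = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)

damping = -np.log(np.abs(poles))         # recovered damping factors
freqs = np.abs(np.angle(poles))          # recovered angular frequencies
print(np.allclose(sorted(damping), [d, d], atol=1e-6),
      np.allclose(sorted(freqs), [w, w], atol=1e-6))
```

The classical cost is dominated by the SVD/pseudoinverse of the Hankel matrix, which scales polynomially in the number of samples; the quantum speedup claimed above comes from performing these linear-algebra steps in superposition.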
2D motility tracking of Pseudomonas putida KT2440 in growth phases using video microscopy
Davis, Michael L.; Mounteer, Leslie C.; Stevens, Lindsey K.; Miller, Charles D.; Zhou, Anhong
2011-01-01
Pseudomonas putida KT2440 is a gram negative motile soil bacterium important in bioremediation and biotechnology. Thus, it is important to understand its motility characteristics as individuals and in populations. Population characteristics were determined using a modified Gompertz model. Video microscopy and imaging software were utilized to analyze two dimensional (2D) bacteria movement tracks to quantify individual bacteria behavior. It was determined that lag time increased as seeding density decreased, and that the maximum specific growth rate decreased as seeding density increased. Average bacterial velocity remained relatively similar throughout exponential growth phase (~20.9 µm/sec), while maximum velocities peaked early in exponential growth phase at a velocity of 51.2 µm/sec. Pseudomonas putida KT2440 also favors smaller turn angles, indicating that cells often continue in the same direction after a change in flagellar rotation throughout the exponential growth phase. PMID:21334971
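The modified (Zwietering-type) Gompertz model used for the population characteristics parameterizes growth by the asymptote A, the maximum specific growth rate mu, and the lag time lam. The fit below uses a synthetic curve with invented parameter values, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Zwietering reparameterization of the Gompertz curve:
#   y(t) = A * exp(-exp(mu * e / A * (lam - t) + 1))
def gompertz(t, A, mu, lam):
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

t = np.linspace(0, 24, 50)                                   # hours (made up)
y = gompertz(t, 1.8, 0.4, 3.0) \
    + np.random.default_rng(3).normal(0, 0.01, t.size)       # synthetic OD data

popt, _ = curve_fit(gompertz, t, y, p0=[1.5, 0.3, 2.0])
print(np.allclose(popt, [1.8, 0.4, 3.0], atol=0.1))          # A, mu, lam recovered
```

With this parameterization, the abstract's observations translate directly into fitted parameters: lower seeding density shows up as a larger lam, higher seeding density as a smaller mu.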
Forecasting Financial Extremes: A Network Degree Measure of Super-Exponential Growth.
Yan, Wanfeng; van Tuyll van Serooskerken, Edgar
2015-01-01
Investors in the stock market are usually greedy during bull markets and fearful during bear markets. The greed or fear spreads across investors quickly. This is known as the herding effect, and it often leads to fast movements of stock prices. During such market regimes, stock prices change at a super-exponential rate, and such episodes are normally followed by a trend reversal that corrects the previous overreaction. In this paper, we construct an indicator to measure the magnitude of the super-exponential growth of stock prices, by measuring the degree of the price network generated from the price time series. Twelve major international stock indices have been investigated. Error diagram tests show that this new indicator has strong predictive power for financial extremes, both peaks and troughs. By varying the parameters used to construct the error diagram, we show the predictive power is very robust. The new indicator performs better than the LPPL pattern recognition indicator.
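The idea of turning a price series into a graph whose degree signals super-exponential growth can be illustrated with a natural visibility graph. This construction is a stand-in assumption for illustration; the paper's own network construction may differ. For exponential growth the log-price is a straight line and each node "sees" only its neighbours, whereas during super-exponential growth the log-price is convex and node degrees explode.

```python
import numpy as np

def mean_degree(y):
    # Natural visibility graph: nodes i, j are linked iff every point
    # between them lies strictly below the straight line (chord) joining them.
    n = len(y)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            ks = np.arange(i + 1, j)
            chord = y[i] + (y[j] - y[i]) * (ks - i) / (j - i)
            if np.all(y[ks] < chord):
                deg[i] += 1
                deg[j] += 1
    return deg.mean()

t = np.arange(30, dtype=float)
log_p_exp = t                        # exponential price growth: linear log-price
log_p_super = t + 0.002 * t**2       # super-exponential: convex log-price

# Linear series -> only adjacent links (mean degree ~2);
# convex series -> complete graph (mean degree n-1 = 29).
print(mean_degree(log_p_exp), mean_degree(log_p_super))
```

A degree-based indicator then flags regimes where the mean degree rises far above its baseline, which is the qualitative behaviour the abstract exploits for forecasting extremes.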
Stochastic Computations in Cortical Microcircuit Models
Maass, Wolfgang
2013-01-01
Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving. PMID:24244126
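The convergence result has a simple finite-state analogue: an irreducible, aperiodic Markov chain over network states converges to its unique stationary distribution exponentially fast, at a rate set by the second-largest eigenvalue modulus of the transition matrix. The three-state chain below is a made-up toy, not a circuit model.

```python
import numpy as np

# Row-stochastic transition matrix of a small irreducible, aperiodic chain.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi /= pi.sum()

mu = np.array([1.0, 0.0, 0.0])            # start from a deterministic state
dists = []
for _ in range(30):
    mu = mu @ P
    dists.append(np.abs(mu - pi).sum())   # L1 distance to stationarity

# The distance decays geometrically, so after 30 steps it is negligible.
print(dists[-1] < 1e-6, dists[5] < dists[0])
```

The abstract's stronger claim is that this picture survives in detailed, non-reversible microcircuit models, and that computations can be read out by sampling from the stationary distribution once the (fast) convergence has occurred.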
NASA Technical Reports Server (NTRS)
Linares, Irving; Mersereau, Russell M.; Smith, Mark J. T.
1994-01-01
Two representative sample images of Band 4 of the Landsat Thematic Mapper are compressed with the JPEG algorithm at 8:1, 16:1 and 24:1 compression ratios for experimental browsing purposes. We then apply the Optimal PSNR Estimated Spectra Adaptive Postfiltering (ESAP) algorithm to reduce the DCT blocking distortion. ESAP reduces the blocking distortion while preserving most of the image's edge information by adaptively postfiltering the decoded image using the block's spectral information already obtainable from each block's DCT coefficients. The algorithm iteratively applies a one-dimensional log-sigmoid weighting function to the separable interpolated local block estimated spectra of the decoded image until it converges to the optimal PSNR with respect to the original using a 2-D steepest ascent search. Convergence is obtained in a few iterations for integer parameters. The optimal logsig parameters are transmitted to the decoder as a negligible byte of overhead data. A unique maximum is guaranteed due to the 2-D asymptotic exponential overshoot shape of the surface generated by the algorithm. ESAP is based on a DFT analysis of the DCT basis functions. It is implemented with pixel-by-pixel spatially adaptive separable FIR postfilters. PSNR objective improvements between 0.4 to 0.8 dB are shown together with their corresponding optimal PSNR adaptive postfiltered images.
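The shape of a one-dimensional log-sigmoid spectral weighting of the kind ESAP applies can be sketched as follows. The gain and offset values here are hypothetical illustrations, not the paper's parameters; the point is that the weight falls from near 1 at low spatial frequencies to near 0 toward Nyquist, attenuating blocking artifacts while retaining low-frequency edge energy.

```python
import numpy as np

# Log-sigmoid weight over normalized spatial frequency f in [0, 1]:
# g controls the steepness of the roll-off, c its cutoff location.
def logsig_weight(f, g=10.0, c=0.5):
    return 1.0 / (1.0 + np.exp(g * (f - c)))

f = np.linspace(0, 1, 9)    # normalized frequencies of the DCT bins
w = logsig_weight(f)
print(w[0] > 0.99, w[-1] < 0.01, np.all(np.diff(w) < 0))
```

In the full algorithm the two parameters are tuned by the 2-D steepest-ascent PSNR search described above and then sent to the decoder as side information.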
Variable input observer for structural health monitoring of high-rate systems
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Laflamme, Simon; Cao, Liang; Dodson, Jacob
2017-02-01
The development of high-rate structural health monitoring methods is intended to provide damage detection on timescales of 10 µs to 10 ms, where speed of detection is critical to maintain structural integrity. Here, a novel Variable Input Observer (VIO) coupled with an adaptive observer is proposed as a potential solution for complex high-rate problems. The VIO is designed to adapt its input space based on real-time identification of the system's essential dynamics. By selecting appropriate time-delayed coordinates defined by both a time delay and an embedding dimension, the proper input space is chosen, which allows more accurate estimations of the current state and a reduction of the convergence time. The optimal time delay is estimated based on mutual information, and the embedding dimension is based on false nearest neighbors. A simulation of the VIO is conducted on a two degree-of-freedom system with simulated damage. Results are compared with an adaptive Luenberger observer, a fixed time-delay observer, and a Kalman Filter. Under its preliminary design, the VIO converges significantly faster than the Luenberger and fixed time-delay observers. It performed similarly to the Kalman Filter in terms of convergence, but with greater accuracy.
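The delay-coordinate input space the VIO adapts can be sketched as follows. For simplicity this sketch picks the delay from the first zero crossing of the autocorrelation, a cheap stand-in for the mutual-information criterion the paper actually uses, and fixes the embedding dimension by hand rather than via false nearest neighbors.

```python
import numpy as np

# Synthetic signal: a sinusoid with 40 samples per period.
T = 40
x = np.sin(2 * np.pi * np.arange(400) / T)

def first_zero_autocorr(x):
    # First lag at which the (unnormalized) autocorrelation crosses zero.
    x = x - x.mean()
    for lag in range(1, len(x) // 2):
        if np.dot(x[:-lag], x[lag:]) <= 0:
            return lag
    return None

tau = first_zero_autocorr(x)   # ~ a quarter period for a pure sinusoid
m = 2                          # embedding dimension (assumed, not estimated)
# Two-column delay embedding: [x(t), x(t - tau)] up to index alignment.
emb = np.column_stack([x[:-(m - 1) * tau], x[(m - 1) * tau:]])
print(tau, emb.shape)
```

Each row of `emb` is one delay-coordinate input vector; the VIO's contribution is updating `tau` and `m` online as the system's dynamics change, rather than fixing them in advance.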
Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy.
Zelyak, O; Fallone, B G; St-Aubin, J
2017-12-14
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low-density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. 
In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation.
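The stability question studied above reduces, in the stationary case, to whether the spectral radius of the iteration operator is below 1, and the observed error decay rate estimates that radius. The sketch below uses a small random contraction standing in for the (much larger) discretized transport operator.

```python
import numpy as np

# Fixed-point iteration x <- M x + b converges iff rho(M) < 1,
# and the error decays asymptotically like rho(M)**k.
rng = np.random.default_rng(4)
M = rng.random((20, 20))
M *= 0.9 / np.max(np.abs(np.linalg.eigvals(M)))   # rescale so rho(M) = 0.9
b = rng.random(20)
x_exact = np.linalg.solve(np.eye(20) - M, b)      # fixed point of the iteration

x = np.zeros(20)
errs = []
for _ in range(60):
    x = M @ x + b
    errs.append(np.linalg.norm(x - x_exact))

# Once the error aligns with the dominant eigenvector, successive
# error ratios estimate the spectral radius.
rho_est = errs[-1] / errs[-2]
print(abs(rho_est - 0.9) < 0.02)
```

This is exactly why a spectral radius approaching unity, as reported for low-density media and strong magnetic fields, means arbitrarily slow source iteration, and why a Krylov solver such as GMRES, which is not bound by this stationary rate, restores fast convergence.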