Sample records for regularize exponential function

  1. Thermodynamics and glassy phase transition of regular black holes

    NASA Astrophysics Data System (ADS)

    Javed, Wajiha; Yousaf, Z.; Akhtar, Zunaira

    2018-05-01

    This paper aims to study the thermodynamical properties of phase transitions for regular charged black holes (BHs). In this context, we have considered two different forms of BH metrics supplemented with exponential and logistic distribution functions and investigated the recent expansion of phase transition through the grand canonical ensemble. After exploring the corresponding Ehrenfest's equation, we found the second-order background of phase transition at critical points. In order to check the critical behavior of regular BHs, we have evaluated some corresponding explicit relations for the critical temperature, pressure and volume and drawn certain graphs with constant values of Smarr's mass. We found that for the BH metric with exponential configuration function, the phase transition curves are divergent near the critical points, while a glassy phase transition has been observed for the Ayón-Beato-García-Bronnikov (ABGB) BH in n = 5 dimensions.

  2. A Comparison of the Pencil-of-Function Method with Prony’s Method, Wiener Filters and Other Identification Techniques,

    DTIC Science & Technology

    1977-12-01

    exponentials encountered are complex and they are approximately at harmonic frequencies. Moreover, the real parts of the complex exponentials are much...functions as a basis for expanding the current distribution on an antenna by the method of moments results in a regularized ill-posed problem with respect...to the current distribution on the antenna structure. However, the problem is not regularized with respect to charge because the charge distribution

  3. Exponential Family Functional data analysis via a low-rank model.

    PubMed

    Li, Gen; Huang, Jianhua Z; Shen, Haipeng

    2018-05-08

    In many applications, non-Gaussian data such as binary or count data are observed over a continuous domain, and there exists a smooth underlying structure for describing such data. We develop a new functional data method to deal with this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution, and the matrix of the canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only the standard one-way functional data, but also two-way (or bivariate) functional data. In addition, we introduce a new cross validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to a real application of the UK mortality study, where data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern. © 2018, The International Biometric Society.

  4. Convex foundations for generalized MaxEnt models

    NASA Astrophysics Data System (ADS)

    Frongillo, Rafael; Reid, Mark D.

    2014-12-01

    We present an approach to maximum entropy models that highlights the convex geometry and duality of generalized exponential families (GEFs) and their connection to Bregman divergences. Using our framework, we are able to resolve a puzzling aspect of the bijection of Banerjee and coauthors between classical exponential families and what they call regular Bregman divergences. Their regularity condition rules out all but Bregman divergences generated from log-convex generators. We recover their bijection and show that a much broader class of divergences correspond to GEFs via two key observations: 1) Like classical exponential families, GEFs have a "cumulant" C whose subdifferential contains the mean: E_{o∼p_θ}[φ(o)] ∈ ∂C(θ); 2) Generalized relative entropy is a C-Bregman divergence between parameters: D_F(p_θ, p_θ') = D_C(θ, θ'), where D_F becomes the KL divergence for F = -H. We also show that every incomplete market with cost function C can be expressed as a complete market, where the prices are constrained to be a GEF with cumulant C. This provides an entirely new interpretation of prediction markets, relating their design back to the principle of maximum entropy.

  5. Analysis of the Tikhonov regularization to retrieve thermal conductivity depth-profiles from infrared thermography data

    NASA Astrophysics Data System (ADS)

    Apiñaniz, Estibaliz; Mendioroz, Arantza; Salazar, Agustín; Celorrio, Ricardo

    2010-09-01

    We analyze the ability of the Tikhonov regularization to retrieve different shapes of in-depth thermal conductivity profiles, usually encountered in hardened materials, from surface temperature data. Exponential, oscillating, and sigmoidal profiles are studied. By performing theoretical experiments with added white noise, the influence of the order of the Tikhonov functional and of the parameters that need to be tuned to carry out the inversion is investigated. The analysis shows that the Tikhonov regularization is very well suited to reconstruct smooth profiles but fails when the conductivity exhibits steep slopes. We check a natural alternative regularization, the total variation functional, which gives much better results for sigmoidal profiles. Accordingly, a strategy to deal with real data is proposed in which we introduce this total variation regularization. This regularization is applied to the inversion of real data corresponding to a case hardened AISI1018 steel plate, giving much better anticorrelation of the retrieved conductivity with microindentation test data than the Tikhonov regularization. The results suggest that this is a promising way to improve the reliability of local inversion methods.
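
    As a concrete reminder of what the Tikhonov step in this record amounts to, here is a minimal sketch (not the authors' code): a synthetic smoothing kernel stands in for the photothermal forward model, and the regularized inverse is the closed-form solution of min ||Ax - b||^2 + λ||Lx||^2. The kernel width, noise level, and λ values are illustrative assumptions.

    ```python
    import numpy as np

    # Synthetic ill-posed inversion: a smoothing kernel A maps a depth profile x
    # (standing in for conductivity vs. depth) to "surface data" b.
    rng = np.random.default_rng(0)
    n = 80
    depth = np.linspace(0.0, 1.0, n)
    A = np.exp(-np.abs(depth[:, None] - depth[None, :]) / 0.05)  # assumed smoothing kernel
    x_true = 1.0 + 0.5 * np.exp(-depth / 0.2)                    # smooth profile to recover
    b = A @ x_true + 1e-3 * rng.standard_normal(n)               # data with added white noise

    def tikhonov(A, b, lam, order=0):
        """Tikhonov-regularized least squares: minimize ||Ax - b||^2 + lam*||L x||^2."""
        m = A.shape[1]
        L = np.eye(m) if order == 0 else np.diff(np.eye(m), axis=0)  # identity or first-difference penalty
        return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

    for lam in (1e-6, 1e-3, 1e-1):
        x_rec = tikhonov(A, b, lam, order=1)
        rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
        print(f"lambda = {lam:.0e}   relative error = {rel_err:.3f}")
    ```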

  6. Modified Newton-Raphson GRAPE methods for optimal control of spin systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodwin, D. L.; Kuprov, Ilya, E-mail: i.kuprov@soton.ac.uk

    Quadratic convergence throughout the active space is achieved for the gradient ascent pulse engineering (GRAPE) family of quantum optimal control algorithms. We demonstrate in this communication that the Hessian of the GRAPE fidelity functional is unusually cheap, having the same asymptotic complexity scaling as the functional itself. This leads to the possibility of using very efficient numerical optimization techniques. In particular, the Newton-Raphson method with a rational function optimization (RFO) regularized Hessian is shown in this work to require fewer system trajectory evaluations than any other algorithm in the GRAPE family. This communication describes algebraic and numerical implementation aspects (matrix exponential recycling, Hessian regularization, etc.) for the RFO Newton-Raphson version of GRAPE and reports benchmarks for common spin state control problems in magnetic resonance spectroscopy.

  7. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  8. Modified Newton-Raphson GRAPE methods for optimal control of spin systems

    NASA Astrophysics Data System (ADS)

    Goodwin, D. L.; Kuprov, Ilya

    2016-05-01

    Quadratic convergence throughout the active space is achieved for the gradient ascent pulse engineering (GRAPE) family of quantum optimal control algorithms. We demonstrate in this communication that the Hessian of the GRAPE fidelity functional is unusually cheap, having the same asymptotic complexity scaling as the functional itself. This leads to the possibility of using very efficient numerical optimization techniques. In particular, the Newton-Raphson method with a rational function optimization (RFO) regularized Hessian is shown in this work to require fewer system trajectory evaluations than any other algorithm in the GRAPE family. This communication describes algebraic and numerical implementation aspects (matrix exponential recycling, Hessian regularization, etc.) for the RFO Newton-Raphson version of GRAPE and reports benchmarks for common spin state control problems in magnetic resonance spectroscopy.
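
    The RFO-regularized Hessian mentioned here can be illustrated independently of GRAPE. The sketch below is a generic implementation of one common RFO variant (not the authors' code): the Hessian is shifted by the lowest eigenvalue of the gradient-bordered augmented matrix before a Newton-type step is taken on a toy landscape whose Hessian is indefinite near the start; the test function, starting point, and iteration count are arbitrary.

    ```python
    import numpy as np

    def rfo_newton_step(grad, hess):
        """One RFO-regularized Newton step: shift the Hessian by the lowest
        eigenvalue of the gradient-bordered augmented matrix [[H, g], [g, 0]],
        which keeps H - lam*I positive (semi)definite even when H is indefinite."""
        n = grad.size
        aug = np.zeros((n + 1, n + 1))
        aug[:n, :n] = hess
        aug[:n, n] = grad
        aug[n, :n] = grad
        lam = np.linalg.eigvalsh(aug)[0]           # lowest eigenvalue (ascending order)
        return -np.linalg.solve(hess - lam * np.eye(n), grad)

    # Toy objective with an indefinite Hessian near the start (not a GRAPE functional)
    def f(x): return x[0]**4 - x[0]**2 + x[1]**2
    def g(x): return np.array([4*x[0]**3 - 2*x[0], 2*x[1]])
    def h(x): return np.array([[12*x[0]**2 - 2, 0.0], [0.0, 2.0]])

    x = np.array([0.25, 1.0])                      # a plain Newton step is unreliable here
    for _ in range(30):
        x = x + rfo_newton_step(g(x), h(x))
    print("converged to", np.round(x, 4), "with f =", round(f(x), 4))
    ```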

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorbachev, D V; Ivanov, V I

    Gauss and Markov quadrature formulae with nodes at zeros of eigenfunctions of a Sturm-Liouville problem, which are exact for entire functions of exponential type, are established. They generalize quadrature formulae involving zeros of Bessel functions, which were first designed by Frappier and Olivier. Bessel quadratures correspond to the Fourier-Hankel integral transform. Some other examples, connected with the Jacobi integral transform, Fourier series in Jacobi orthogonal polynomials and the general Sturm-Liouville problem with regular weight are also given. Bibliography: 39 titles.

  10. Exact solutions of unsteady Korteweg-de Vries and time regularized long wave equations.

    PubMed

    Islam, S M Rayhanul; Khan, Kamruzzaman; Akbar, M Ali

    2015-01-01

    In this paper, we implement the exp(-Φ(ξ))-expansion method to construct the exact traveling wave solutions for nonlinear evolution equations (NLEEs). Here we consider two model equations, namely the Korteweg-de Vries (KdV) equation and the time regularized long wave (TRLW) equation. These equations play a significant role in nonlinear sciences. We obtained four types of explicit function solutions, namely hyperbolic, trigonometric, exponential and rational function solutions of the variables in the considered equations. It is shown that the applied method is quite efficient and practically well suited for the aforementioned problems, and for other NLEEs that arise in mathematical physics and engineering fields. PACS numbers: 02.30.Jr, 02.70.Wz, 05.45.Yv, 94.05.Fq.
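
    To make the "hyperbolic function solutions" mentioned above concrete, the sketch below symbolically verifies that the classical sech² traveling wave satisfies the KdV equation u_t + 6uu_x + u_xxx = 0. This is only a consistency check on that well-known solution, not an implementation of the exp(-Φ(ξ))-expansion method itself; the wave speed c and phase x0 are free symbols.

    ```python
    import sympy as sp

    x, t, x0 = sp.symbols('x t x0', real=True)
    c = sp.symbols('c', positive=True)             # wave speed (illustrative free parameter)

    # Classical one-soliton (hyperbolic) traveling wave of u_t + 6*u*u_x + u_xxx = 0
    xi = x - c*t - x0
    u = c/2 * sp.sech(sp.sqrt(c)/2 * xi)**2

    residual = sp.diff(u, t) + 6*u*sp.diff(u, x) + sp.diff(u, x, 3)
    print(sp.simplify(residual.rewrite(sp.exp)))   # prints 0: u solves the KdV equation
    ```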

  11. Existence and energy decay of a nonuniform Timoshenko system with second sound

    NASA Astrophysics Data System (ADS)

    Hamadouche, Taklit; Messaoudi, Salim A.

    2018-02-01

    In this paper, we consider a linear thermoelastic Timoshenko system with variable physical parameters, where the heat conduction is given by Cattaneo's law and the coupling is via the displacement equation. We discuss the well-posedness and the regularity of the solution using the semigroup theory. Moreover, we establish the exponential decay result provided that the stability function χ_r(x) = 0. Otherwise, we show that the solution decays polynomially.

  12. On linear Landau Damping for relativistic plasmas via Gevrey regularity

    NASA Astrophysics Data System (ADS)

    Young, Brent

    2015-10-01

    We examine the phenomenon of Landau Damping in relativistic plasmas via a study of the relativistic Vlasov-Poisson system (both on the torus and on R^3) linearized around a sufficiently nice, spatially uniform kinetic equilibrium. We find that exponential decay of spatial Fourier modes is impossible under modest symmetry assumptions. However, by assuming the equilibrium and initial data are sufficiently regular functions of velocity for a given wavevector (in particular that they exhibit a kind of Gevrey regularity), we show that it is possible for the mode associated to this wavevector to decay like exp(-|t|^δ) (with 0 < δ < 1) if the magnitude of the wavevector exceeds a certain critical size which depends on the character of the interaction. We also give a heuristic argument why one should not expect such rapid decay for modes with wavevectors below this threshold.

  13. We'll Meet Again: Revealing Distributional and Temporal Patterns of Social Contact

    PubMed Central

    Pachur, Thorsten; Schooler, Lael J.; Stevens, Jeffrey R.

    2014-01-01

    What are the dynamics and regularities underlying social contact, and how can contact with the people in one's social network be predicted? In order to characterize distributional and temporal patterns underlying contact probability, we asked 40 participants to keep a diary of their social contacts for 100 consecutive days. Using a memory framework previously used to study environmental regularities, we predicted that the probability of future contact would follow in systematic ways from the frequency, recency, and spacing of previous contact. The distribution of contact probability across the members of a person's social network was highly skewed, following an exponential function. As predicted, it emerged that future contact scaled linearly with frequency of past contact, proportionally to a power function with recency of past contact, and differentially according to the spacing of past contact. These relations emerged across different contact media and irrespective of whether the participant initiated or received contact. We discuss how the identification of these regularities might inspire more realistic analyses of behavior in social networks (e.g., attitude formation, cooperation). PMID:24475073

  14. Plasmodial vein networks of the slime mold Physarum polycephalum form regular graphs

    NASA Astrophysics Data System (ADS)

    Baumgarten, Werner; Ueda, Tetsuo; Hauser, Marcus J. B.

    2010-10-01

    The morphology of a typical developing biological transportation network, the vein network of the plasmodium of the myxomycete Physarum polycephalum is analyzed during its free extension. The network forms a classical, regular graph, and has exclusively nodes of degree 3. This contrasts to most real-world transportation networks which show small-world or scale-free properties. The complexity of the vein network arises from the weighting of the lengths, widths, and areas of the vein segments. The lengths and areas follow exponential distributions, while the widths are distributed log-normally. These functional dependencies are robust during the entire evolution of the network, even though the exponents change with time due to the coarsening of the vein network.

  15. Plasmodial vein networks of the slime mold Physarum polycephalum form regular graphs.

    PubMed

    Baumgarten, Werner; Ueda, Tetsuo; Hauser, Marcus J B

    2010-10-01

    The morphology of a typical developing biological transportation network, the vein network of the plasmodium of the myxomycete Physarum polycephalum is analyzed during its free extension. The network forms a classical, regular graph, and has exclusively nodes of degree 3. This contrasts to most real-world transportation networks which show small-world or scale-free properties. The complexity of the vein network arises from the weighting of the lengths, widths, and areas of the vein segments. The lengths and areas follow exponential distributions, while the widths are distributed log-normally. These functional dependencies are robust during the entire evolution of the network, even though the exponents change with time due to the coarsening of the vein network.

  16. Black-hole solutions with scalar hair in Einstein-scalar-Gauss-Bonnet theories

    NASA Astrophysics Data System (ADS)

    Antoniou, G.; Bakopoulos, A.; Kanti, P.

    2018-04-01

    In the context of the Einstein-scalar-Gauss-Bonnet theory, with a general coupling function between the scalar field and the quadratic Gauss-Bonnet term, we investigate the existence of regular black-hole solutions with scalar hair. Based on a previous theoretical analysis, which studied the evasion of the old and novel no-hair theorems, we consider a variety of forms for the coupling function (exponential, even and odd polynomial, inverse polynomial, and logarithmic) that, in conjunction with the profile of the scalar field, satisfy a basic constraint. Our numerical analysis then always leads to families of regular, asymptotically flat black-hole solutions with nontrivial scalar hair. The solution for the scalar field and the profile of the corresponding energy-momentum tensor, depending on the value of the coupling constant, may exhibit a nonmonotonic behavior, an unusual feature that highlights the limitations of the existing no-hair theorems. We also determine and study in detail the scalar charge, horizon area, and entropy of our solutions.

  17. Quantum mechanics of conformally and minimally coupled Friedmann-Robertson-Walker cosmology

    NASA Astrophysics Data System (ADS)

    Kim, Sang Pyo

    1992-10-01

    The expansion method by a time-dependent basis of the eigenfunctions for the space-coordinate-dependent sub-Hamiltonian is one of the most natural frameworks for quantum systems, relativistic as well as nonrelativistic. The complete set of wave functions is found in the product integral formulation, whose constants of integration are fixed by Cauchy initial data. The wave functions for the Friedmann-Robertson-Walker (FRW) cosmology conformally and minimally coupled to a scalar field with a power-law potential or a polynomial potential are expanded in terms of the eigenfunctions of the scalar field sub-Hamiltonian part. The resultant gravitational field part which is an "intrinsic" timelike variable-dependent matrix-valued differential equation is solved again in the product integral formulation. There are classically allowed regions for the "intrinsic" timelike variable depending on the scalar field quantum numbers and these regions increase accordingly as the quantum numbers increase. For a fixed large three-geometry the wave functions corresponding to the low excited (small quantum number) states of the scalar field are exponentially damped or diverging and the wave functions corresponding to the high excited (large quantum number) states are still oscillatory but become eventually exponential as the three-geometry becomes larger. Furthermore, a proposal is advanced that the wave functions exponentially damped for a large three-geometry may be interpreted as "tunneling out" wave functions into, and the wave functions exponentially diverging as "tunneling in" from, different universes with the same or different topologies, the former being interpreted as the recently proposed Hawking-Page wormhole wave functions. It is observed that there are complex as well as Euclidean actions depending on the quantum numbers of the scalar field part outside the classically allowed region both of the gravitational and scalar fields, suggesting the usefulness of complex geometry and complex trajectories. From the most general wave functions for the FRW cosmology conformally coupled to scalar field, the boundary conditions for the wormhole wave functions are modified so that the modulus of wave functions, instead of the wave functions themselves, should be exponentially damped for a large three-geometry and be regular up to some negative power of the three-geometry as the three-geometry collapses. The wave functions for the FRW cosmology minimally coupled to an inhomogeneous scalar field are similarly found in the product integral formulation. The role of a large number of the inhomogeneous modes of the scalar field is not only to increase the classically allowed regions for the gravitational part but also to provide a mechanism of the decoherence of quantum interferences between the different sizes of the universe.

  18. Exponential Nutrient Loading as a Means to Optimize Bareroot Nursery Fertility of Oak Species

    Treesearch

    Zonda K. D. Birge; Douglass F. Jacobs; Francis K. Salifu

    2006-01-01

    Conventional fertilization in nursery culture of hardwoods may involve supply of equal fertilizer doses at regularly spaced intervals during the growing season, which may create a surplus of available nutrients in the beginning and a deficiency in nutrient availability by the end of the growing season. A method of fertilization termed “exponential nutrient loading” has...

  19. Exponential Decay of Dispersion-Managed Solitons for General Dispersion Profiles

    NASA Astrophysics Data System (ADS)

    Green, William R.; Hundertmark, Dirk

    2016-02-01

    We show that any weak solution of the dispersion management equation describing dispersion-managed solitons together with its Fourier transform decay exponentially. This strong regularity result extends a recent result of Erdoğan, Hundertmark, and Lee in two directions, to arbitrary non-negative average dispersion and, more importantly, to rather general dispersion profiles, which cover most, if not all, physically relevant cases.

  20. An Exponential Regulator for Rapidity Divergences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ye; Neill, Duff; Zhu, Hua Xing

    2016-04-01

    Finding an efficient and compelling regularization of soft and collinear degrees of freedom at the same invariant mass scale, but separated in rapidity is a persistent problem in high-energy factorization. In the course of a calculation, one encounters divergences unregulated by dimensional regularization, often called rapidity divergences. Once regulated, a general framework exists for their renormalization, the rapidity renormalization group (RRG), leading to fully resummed calculations of transverse momentum (to the jet axis) sensitive quantities. We examine how this regularization can be implemented via a multi-differential factorization of the soft-collinear phase-space, leading to an (in principle) alternative non-perturbative regularization of rapidity divergences. As an example, we examine the fully-differential factorization of a color singlet's momentum spectrum in a hadron-hadron collision at threshold. We show how this factorization acts as a mother theory to both traditional threshold and transverse momentum resummation, recovering the classical results for both resummations. Examining the refactorization of the transverse momentum beam functions in the threshold region, we show that one can directly calculate the rapidity renormalized function, while shedding light on the structure of joint resummation. Finally, we show how using modern bootstrap techniques, the transverse momentum spectrum is determined by an expansion about the threshold factorization, leading to a viable higher loop scheme for calculating the relevant anomalous dimensions for the transverse momentum spectrum.

  1. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of the covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
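
    A schematic of the two-stage L1-regularized idea described above, on synthetic data rather than the genetical-genomics application: stage one regresses each endogenous covariate on the instruments with a lasso penalty, and stage two regresses the outcome on the fitted covariates with another lasso penalty. scikit-learn's Lasso stands in for the penalized estimators; the dimensions, sparsity pattern, and penalty levels are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    n, p_z, p_x = 200, 50, 30                 # samples, candidate instruments, covariates

    Z = rng.standard_normal((n, p_z))                          # instruments (e.g., genetic variants)
    Gamma = np.zeros((p_z, p_x))
    Gamma[0:2, 0] = 1.0                                        # each relevant covariate gets
    Gamma[2:4, 1] = 1.0                                        # its own small set of
    Gamma[4:6, 2] = 1.0                                        # strong instruments
    U = rng.standard_normal((n, 1))                            # unobserved confounder
    X = Z @ Gamma + U + 0.5 * rng.standard_normal((n, p_x))    # endogenous covariates
    beta = np.zeros(p_x); beta[:3] = [2.0, -1.5, 1.0]          # sparse true effects
    y = X @ beta + U[:, 0] + 0.5 * rng.standard_normal(n)

    # Stage 1: L1-penalized regression of each covariate on the instruments
    X_hat = np.column_stack([Lasso(alpha=0.1).fit(Z, X[:, j]).predict(Z) for j in range(p_x)])

    # Stage 2: L1-penalized regression of the outcome on the fitted covariates
    stage2 = Lasso(alpha=0.05).fit(X_hat, y)
    print("estimated leading effects:", np.round(stage2.coef_[:5], 2))
    ```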

  2. Quantitative Tomography for Continuous Variable Quantum Systems

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    2018-03-01

    We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.

  3. Interprocedural Analysis and the Verification of Concurrent Programs

    DTIC Science & Technology

    2009-01-01

    SSPE) problem is to compute a regular expression that represents paths(s, v) for all vertices v in the graph. The syntax of regular expressions is as...follows: r ::= ∅ | ε | e | r1 ∪ r2 | r1.r2 | r∗, where e stands for an edge in G. We can use any algorithm for SSPE to compute regular expressions for...a closed representation of loops provides an exponential speedup. Tarjan's path-expression algorithm solves the SSPE problem efficiently. It uses

  4. Subordination to periodic processes and synchronization

    NASA Astrophysics Data System (ADS)

    Ascolani, Gianluca; Bologna, Mauro; Grigolini, Paolo

    2009-07-01

    We study the subordination to a process that is periodic in the natural time scale, and equivalent to a clock with N states. The rationale for this investigation is given by a set of many interacting clocks with N states. The natural time scale representation corresponds to the dynamics of an individual clock with no interaction with the other clocks of this set. We argue that the cooperation among the clocks of this set has the effect of generating a global clock, whose times of sojourn in each of its N states are described by a distribution density with an inverse power law form and power index μ<2. This is equivalent to extending the widely used subordination method from fluctuation-dissipation processes to periodic processes, thereby raising the question of whether special conditions exist of perfect synchronization, signaled by regular oscillations, and especially by oscillations with no damping. We study first the case of a Poisson subordination function. We show that in spite of the random nature of the subordination method the procedure has the effect of creating damped oscillations, whose damping vanishes in the limiting case of N≫1, thereby suggesting a condition of perfect synchronization in this limit. Bateman's mathematical arguments [H. Bateman, Higher Transcendental Functions, vol. III, Robert E. Krieger Publishing Company, Malabar, FL; copyright 1953 by McGraw-Hill Book Company, Inc.] indicate that the condition of perfect synchronization is possible also in the non-Poisson case, with μ<2, although it may lie beyond the range of computer simulation. To make the theoretical predictions accessible to numerical simulation, we use a subordination function whose survival probability is a Mittag-Leffler exponential function. This method prevents us from directly establishing the macroscopic coherence emerging from μ=2, which generates a perfect form of 1/f noise. However, it affords indirect evidence that perfect synchronization signaled by undamped regular oscillations may be produced in this case. Furthermore, we explore a condition characterized by an excellent agreement between theory and numerical simulation, where the long-time region relaxation, with a perfect inverse power law decay, emerging from the subordination to ordinary fluctuation-dissipation processes, is replaced by exponentially damped regular oscillations.

  5. Ill-posed problem and regularization in reconstruction of radiobiological parameters from serial tumor imaging data

    NASA Astrophysics Data System (ADS)

    Chvetsov, Alexei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh

    2015-11-01

    The main objective of this article is to improve the stability of reconstruction algorithms for estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth rates, and the rate of cell loss. Accurate assessment of treatment response would require separation of these processes because they define radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters using imaging data can be considered an inverse ill-posed problem because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for survival fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more-advanced parameter reconstruction algorithms.

  6. SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, X; Duan, J; Popple, R

    2014-06-01

    Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of radial dose functions of the new iodine brachytherapy source: Iodine-125 Seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in less than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method in searching for coefficients of bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
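
    The sketch below shows the flavor of the fitting problem: a bare-bones particle swarm searches for the four coefficients of a bi-exponential g(r) ≈ A1 exp(-μ1 r) + A2 exp(-μ2 r) against synthetic radial-dose-function samples, using the worst relative deviation as the fitness. The data, parameter bounds, swarm size, and inertia/acceleration constants are illustrative assumptions, not the AgX-100 data or the authors' settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic "radial dose function" samples (illustrative, not AgX-100 data)
    r = np.array([0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
    g_true = 1.2 * np.exp(-0.35 * r) - 0.2 * np.exp(-1.5 * r)
    g_obs = g_true * (1 + 0.005 * rng.standard_normal(r.size))

    def model(p, r):
        A1, m1, A2, m2 = p
        return A1 * np.exp(-m1 * r) + A2 * np.exp(-m2 * r)

    def fitness(p):
        return np.max(np.abs(model(p, r) - g_obs) / g_obs)   # worst relative deviation

    # Plain particle swarm over the 4 coefficients
    n_part, n_gen, w, c1, c2 = 40, 1500, 0.7, 1.5, 1.5
    lo, hi = np.array([-2.0, 0.0, -2.0, 0.0]), np.array([2.0, 3.0, 2.0, 3.0])
    pos = rng.uniform(lo, hi, (n_part, 4))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(n_gen):
        r1, r2 = rng.random((n_part, 4)), rng.random((n_part, 4))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()

    print("best coefficients:", np.round(gbest, 3), " max relative deviation:", round(min(pbest_f), 4))
    ```

    Note that the bi-exponential parametrization is symmetric under swapping the two terms, so the recovered (A, μ) pairs may come out in either order.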

  7. On the Prony series representation of stretched exponential relaxation

    NASA Astrophysics Data System (ADS)

    Mauro, John C.; Mauro, Yihong Z.

    2018-09-01

    Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of the stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first-derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
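
    A minimal version of the Prony-series construction discussed here: approximate the stretched exponential exp[-(t/τ)^β] by a non-negative combination of simple exponentials on a fixed log-spaced grid of relaxation times, and watch the error fall as terms are added. The value of β, the time grid, and the use of non-negative least squares are illustrative choices, not the optimized coefficients from the paper.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    beta, tau = 0.5, 1.0                       # stretching exponent and KWW relaxation time
    t = np.logspace(-3, 3, 400)                # time grid spanning the "fat tail"
    target = np.exp(-(t / tau) ** beta)        # stretched exponential (KWW) function

    def prony_fit(n_terms):
        """Non-negative least-squares Prony weights on log-spaced relaxation times."""
        taus = np.logspace(-4, 4, n_terms)
        A = np.exp(-t[:, None] / taus[None, :])
        w, _ = nnls(A, target)
        return taus, w, A @ w

    for n in (4, 8, 16):
        taus, w, approx = prony_fit(n)
        print(f"{n:2d} terms: max abs error = {np.max(np.abs(approx - target)):.2e}")
    ```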

  8. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373

  9. Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm

    NASA Astrophysics Data System (ADS)

    Karaca, Yeliz; Cattani, Carlo

    Magnetic resonance imaging (MRI) is the most sensitive method to detect chronic nervous system diseases such as multiple sclerosis (MS). In this paper, Brownian motion Hölder regularity functions (polynomial, periodic (sine), and exponential) for 2D images were applied, as multifractal methods, to MR brain images, aiming to easily identify distressed regions in MS patients. With these regions, we have proposed an MS classification based on the multifractal method by using the Self-Organizing Map (SOM) algorithm. Thus, we obtained a cluster analysis by identifying pixels from distressed regions in MR images through multifractal methods and by diagnosing subgroups of MS patients through artificial neural networks.

  10. Frustration in Condensed Matter and Protein Folding

    NASA Astrophysics Data System (ADS)

    Lorelli, S.; Cabot, A.; Sundarprasad, N.; Boekema, C.

    Using computer modeling we study frustration in condensed matter and protein folding. Frustration is due to random and/or competing interactions. One definition of frustration is the sum of squares of the differences between actual and expected distances between characters. If this sum is non-zero, then the system is said to have frustration. A simulation tracks the movement of characters to lower their frustration. Our research is conducted on frustration as a function of temperature using a logarithmic scale. At absolute zero, the relaxation for frustration is a power function for randomly assigned patterns or an exponential function for regular patterns like Thomson figures. These findings have implications for protein folding; we attempt to apply our frustration modeling to protein folding and dynamics. We use coding in Python to simulate different ways a protein can fold. An algorithm is being developed to find the lowest frustration (and thus energy) states possible. Research supported by SJSU & AFC.

  11. Equivalences between nonuniform exponential dichotomy and admissibility

    NASA Astrophysics Data System (ADS)

    Zhou, Linfeng; Lu, Kening; Zhang, Weinian

    2017-01-01

    Relationship between exponential dichotomies and admissibility of function classes is a significant problem for hyperbolic dynamical systems. It was proved that a nonuniform exponential dichotomy implies several admissible pairs of function classes and conversely some admissible pairs were found to imply a nonuniform exponential dichotomy. In this paper we find an appropriate admissible pair of classes of Lyapunov bounded functions which is equivalent to the existence of nonuniform exponential dichotomy on half-lines R± separately, on both half-lines R± simultaneously, and on the whole line R. Additionally, the maximal admissibility is proved in the case on both half-lines R± simultaneously.

  12. Graphical analysis for gel morphology II. New mathematical approach for stretched exponential function with β>1

    NASA Astrophysics Data System (ADS)

    Hashimoto, Chihiro; Panizza, Pascal; Rouch, Jacques; Ushiki, Hideharu

    2005-10-01

    A new analytical concept is applied to the kinetics of the shrinking process of poly(N-isopropylacrylamide) (PNIPA) gels. When PNIPA gels are put into hot water above the critical temperature, two-step shrinking is observed, and the secondary shrinking of the gels is fitted well by a stretched exponential function. The exponent β characterizing the stretched exponential is always higher than one, although there are few analytical concepts for the stretched exponential function with β>1. As a new interpretation of this function, we propose a superposition of step (Heaviside) functions, and a new distribution function of the characteristic time is deduced.

  13. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    NASA Astrophysics Data System (ADS)

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.

    2017-11-01

    There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
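
    The time-partitioning idea can be illustrated on the simplest 1-D case, the classical Crank series for fractional uptake by a slab: an error-function series that converges fast at small t_d and an exponential series that converges fast at large t_d, with a leading-term approximation selected on either side of a switchover time. The switchover value below is chosen by eye for illustration; it is not the optimized t_d0 of the paper, and the multi-dimensional product solutions and flux formulas are not reproduced.

    ```python
    import numpy as np
    from scipy.special import erfc

    def ierfc(x):
        """Integral of the complementary error function."""
        return np.exp(-x**2) / np.sqrt(np.pi) - x * erfc(x)

    def uptake_early(td, terms=10):
        """Error-function series (Crank), rapidly convergent at small dimensionless time t_d."""
        s = 1.0 / np.sqrt(np.pi)
        for n in range(1, terms + 1):
            s += 2.0 * (-1)**n * ierfc(n / np.sqrt(td))
        return 2.0 * np.sqrt(td) * s

    def uptake_late(td, terms=10):
        """Exponential series, rapidly convergent at large dimensionless time t_d."""
        s = 0.0
        for n in range(terms):
            k = 2 * n + 1
            s += 8.0 / (k**2 * np.pi**2) * np.exp(-k**2 * np.pi**2 * td / 4.0)
        return 1.0 - s

    td0 = 0.2                                # illustrative switchover time
    for td in (0.01, 0.05, 0.2, 0.5, 1.0):
        reference = uptake_late(td, 200) if td >= td0 else uptake_early(td, 200)
        leading = 2.0 * np.sqrt(td / np.pi) if td < td0 else 1.0 - 8.0 / np.pi**2 * np.exp(-np.pi**2 * td / 4.0)
        print(f"t_d = {td:4.2f}   combined leading-term approx = {leading:.5f}   error = {abs(leading - reference):.1e}")
    ```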

  14. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny

    There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.

  15. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    DOE PAGES

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; ...

    2017-10-24

    There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.

  16. Calculation of Rate Spectra from Noisy Time Series Data

    PubMed Central

    Voelz, Vincent A.; Pande, Vijay S.

    2011-01-01

    As the resolution of experiments to measure folding kinetics continues to improve, it has become imperative to avoid bias that may come with fitting data to a predetermined mechanistic model. Towards this end, we present a rate spectrum approach to analyze timescales present in kinetic data. Computing rate spectra of noisy time series data via numerical discrete inverse Laplace transform is an ill-conditioned inverse problem, so a regularization procedure must be used to perform the calculation. Here, we show the results of different regularization procedures applied to noisy multi-exponential and stretched exponential time series, as well as data from time-resolved folding kinetics experiments. In each case, the rate spectrum method recapitulates the relevant distribution of timescales present in the data, with different priors on the rate amplitudes naturally corresponding to common biases toward simple phenomenological models. These results suggest an attractive alternative to the “Occam’s razor” philosophy of simply choosing models with the fewest number of relaxation rates. PMID:22095854
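
    One common way to set up such a regularized rate-spectrum calculation (a sketch, not the authors' procedure): discretize the rates on a log grid, build the Laplace kernel K[i, j] = exp(-k_j t_i), and solve the Tikhonov-penalized non-negative least-squares problem by stacking the penalty rows into the design matrix. The double-exponential test signal, rate grid, and λ values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)

    # Noisy double-exponential decay with rates 1 and 20 (arbitrary units)
    t = np.linspace(0.01, 5.0, 300)
    y = 0.7 * np.exp(-1.0 * t) + 0.3 * np.exp(-20.0 * t) + 0.01 * rng.standard_normal(t.size)

    # Rate grid and discrete Laplace kernel K[i, j] = exp(-k_j * t_i)
    rates = np.logspace(-1, 2, 100)
    K = np.exp(-t[:, None] * rates[None, :])

    def rate_spectrum(lam):
        """Tikhonov-regularized, non-negative discrete inverse Laplace transform."""
        K_aug = np.vstack([K, np.sqrt(lam) * np.eye(rates.size)])   # append penalty rows
        y_aug = np.concatenate([y, np.zeros(rates.size)])
        amplitudes, _ = nnls(K_aug, y_aug)
        return amplitudes

    for lam in (1e-4, 1e-2, 1e-1):
        a = rate_spectrum(lam)
        peaks = rates[a > 0.05]                                     # rates carrying significant amplitude
        print(f"lambda = {lam:.0e}: significant rates near {np.round(peaks, 2)}")
    ```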

  17. Unruh effect for general trajectories

    NASA Astrophysics Data System (ADS)

    Obadia, N.; Milgrom, M.

    2007-03-01

    We consider two-level detectors coupled to a scalar field and moving on arbitrary trajectories in Minkowski space-time. We first derive a generic expression for the response function using a (novel) regularization procedure based on the Feynman prescription that is explicitly causal, and we compare it to other expressions used in the literature. We then use this expression to study, analytically and numerically, the time dependence of the response function in various nonstationarity situations. We show that, generically, the response function decreases like a power in the detector’s level spacing, E, for high E. It is only for stationary worldlines that the response function decays faster than any power law, in keeping with the known exponential behavior for some stationary cases. Under some conditions the (time-dependent) response function for a nonstationary worldline is well approximated by the value of the response function for a stationary worldline having the same instantaneous acceleration, torsion, and hypertorsion. While we cannot offer general conditions for this to apply, we discuss special cases; in particular, the low-energy limit for linear space trajectories.

  18. Exponential-fitted methods for integrating stiff systems of ordinary differential equations: Applications to homogeneous gas-phase chemical kinetics

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.

    1984-01-01

    Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Padé (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order, polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Padé approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code, CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.
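
    The payoff of an exponential interpolant can be seen on the scalar Prothero-Robinson test problem y' = λ(y - φ(t)) + φ'(t): with |λ|h far beyond the explicit-Euler stability limit, an exponentially fitted (exponential-Euler-type) step remains stable while explicit Euler blows up. This is a generic illustration under assumed parameter values, not the CREK1D algorithm itself.

    ```python
    import numpy as np

    lam, h, t_end = -1000.0, 0.05, 1.0            # stiff decay rate and a "large" step size
    phi, dphi = np.sin, np.cos                    # smooth forcing (Prothero-Robinson test problem)

    def f(t, y):                                  # y' = lam*(y - phi(t)) + phi'(t)
        return lam * (y - phi(t)) + dphi(t)

    def exact(t, y0):                             # exact solution of the test problem
        return phi(t) + (y0 - phi(0.0)) * np.exp(lam * t)

    def explicit_euler(y0):
        y, t = y0, 0.0
        while t < t_end - 1e-12:
            y += h * f(t, y)
            t += h
        return y

    def exponential_euler(y0):
        # Exponentially fitted step: exact when the non-stiff part is constant over the
        # step, and stable for lam < 0 regardless of the step size (first-order accurate).
        y, t = y0, 0.0
        E = np.exp(lam * h)
        while t < t_end - 1e-12:
            g = dphi(t) - lam * phi(t)            # non-stiff part, frozen over the step
            y = E * y + (E - 1.0) / lam * g
            t += h
        return y

    y0 = 1.0
    print("exact            :", exact(t_end, y0))
    print("explicit Euler   :", explicit_euler(y0))     # blows up: |lam|*h >> 2
    print("exponential fit  :", exponential_euler(y0))
    ```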

  19. Frustration in Condensed Matter and Protein Folding

    NASA Astrophysics Data System (ADS)

    Li, Z.; Tanner, S.; Conroy, B.; Owens, F.; Tran, M. M.; Boekema, C.

    2014-03-01

    By means of computer modeling, we are studying frustration in condensed matter and protein folding, including the influence of temperature and Thomson-figure formation. Frustration is due to competing interactions in a disordered state. The key issue is how the particles interact to reach the lowest frustration. The relaxation for frustration is mostly a power function (randomly assigned pattern) or an exponential function (regular patterns like Thomson figures). For the atomic Thomson model, frustration is predicted to decrease with the formation of Thomson figures at zero kelvin. We attempt to apply our frustration modeling to protein folding and dynamics. We investigate the homogeneous protein frustration that would cause the speed of the protein folding to increase. Increase of protein frustration (where frustration and hydrophobicity interplay with protein folding) may lead to a protein mutation. Research is supported by WiSE@SJSU and AFC San Jose.

  20. Critical behavior of the contact process in a multiscale network

    NASA Astrophysics Data System (ADS)

    Ferreira, Silvio C.; Martins, Marcelo L.

    2007-09-01

    Inspired by dengue and yellow fever epidemics, we investigated the contact process (CP) in a multiscale network constituted by one-dimensional chains connected through a Barabási-Albert scale-free network. In addition to the CP dynamics inside the chains, the exchange of individuals between connected chains (travels) occurs at a constant rate. A finite epidemic threshold and an epidemic mean lifetime diverging exponentially in the subcritical phase, concomitantly with a power law divergence of the outbreak’s duration, were found. A generalized scaling function involving both regular and SF components was proposed for the quasistationary analysis and the associated critical exponents determined, demonstrating that the CP on this hybrid network and nonvanishing travel rates establishes a new universality class.

  1. Anharmonic Potential Constants and Their Dependence Upon Bond Length

    DOE R&D Accomplishments Database

    Herschbach, D. R.; Laurie, V. W.

    1961-01-01

    Empirical study of cubic and quartic vibrational force constants for diatomic molecules shows them to be approximately exponential functions of internuclear distance. A family of curves is obtained, determined by the location of the bonded atoms in rows of the periodic table. Displacements between successive curves correspond closely to those in Badger's rule for quadratic force constants (for which the parameters are redetermined to accord with all data now available). Constants for excited electronic and ionic states appear on practically the same curves as those for the ground states. Predictions based on the diatomic correlations agree with the available cubic constants for bond stretching in polyatomic molecules, regardless of the type of bonding involved. Implications of these regularities are discussed. (auth)

  2. Exponential localization of Wannier functions in insulators.

    PubMed

    Brouder, Christian; Panati, Gianluca; Calandra, Matteo; Mourougane, Christophe; Marzari, Nicola

    2007-01-26

    The exponential localization of Wannier functions in two or three dimensions is proven for all insulators that display time-reversal symmetry, settling a long-standing conjecture. Our proof relies on the equivalence between the existence of analytic quasi-Bloch functions and the nullity of the Chern numbers (or of the Hall current) for the system under consideration. The same equivalence implies that Chern insulators cannot display exponentially localized Wannier functions. An explicit condition for the reality of the Wannier functions is identified.

  3. Integral definition of the logarithmic function and the derivative of the exponential function in calculus

    NASA Astrophysics Data System (ADS)

    Vaninsky, Alexander

    2015-04-01

    Defining the logarithmic function as a definite integral with a variable upper limit, an approach used by some popular calculus textbooks, is problematic. We discuss the disadvantages of such a definition and provide a way to fix the problem. We also consider a definition-based, rigorous derivation of the derivative of the exponential function that is easier, more intuitive, and complies with the standard definitions of the number e, the logarithmic, and the exponential functions.
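
    One standard version of the derivation the abstract refers to, starting from the integral definition of the logarithm and using the inverse-function theorem:

    ```latex
    \[
    \ln x \;:=\; \int_{1}^{x}\frac{dt}{t}\quad(x>0)
    \;\;\Longrightarrow\;\;
    (\ln x)' = \frac{1}{x}
    \quad\text{(fundamental theorem of calculus).}
    \]
    \[
    \text{With } \exp := \ln^{-1},\ \text{the inverse-function theorem gives}\quad
    \exp'(x) \;=\; \frac{1}{(\ln)'\bigl(\exp x\bigr)} \;=\; \exp x .
    \]
    ```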

  4. From webs to polylogarithms

    NASA Astrophysics Data System (ADS)

    Gardi, Einan

    2014-04-01

    We compute a class of diagrams contributing to the multi-leg soft anomalous dimension through three loops, by renormalizing a product of semi-infinite non-lightlike Wilson lines in dimensional regularization. Using non-Abelian exponentiation we directly compute contributions to the exponent in terms of webs. We develop a general strategy to compute webs with multiple gluon exchanges between Wilson lines in configuration space, and explore their analytic structure in terms of α ij , the exponential of the Minkowski cusp angle formed between the lines i and j. We show that beyond the obvious inversion symmetry α ij → 1 /α ij , at the level of the symbol the result also admits a crossing symmetry α ij → - α ij , relating spacelike and timelike kinematics, and hence argue that in this class of webs the symbol alphabet is restricted to α ij and . We carry out the calculation up to three gluons connecting four Wilson lines, finding that the contributions to the soft anomalous dimension are remarkably simple: they involve pure functions of uniform weight, which are written as a sum of products of polylogarithms, each depending on a single cusp angle. We conjecture that this type of factorization extends to all multiple-gluon-exchange contributions to the anomalous dimension.

  5. (q, μ)- and (p, q, ζ)-exponential functions: Rogers-Szegő polynomials and Fourier-Gauss transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hounkonnou, Mahouton Norbert; Nkouankam, Elvis Benzo Ngompe

    2010-10-15

    From the realization of q-oscillator algebra in terms of generalized derivative, we compute the matrix elements from deformed exponential functions and deduce generating functions associated with Rogers-Szego polynomials as well as their relevant properties. We also compute the matrix elements associated with the (p,q)-oscillator algebra (a generalization of the q-one) and perform the Fourier-Gauss transform of a generalization of the deformed exponential functions.

  6. The dynamics of photoinduced defect creation in amorphous chalcogenides: The origin of the stretched exponential function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freitas, R. J.; Shimakawa, K.; Department of Electrical and Electronic Engineering, Gifu University, Gifu 501-1193

    The article discusses the dynamics of photoinduced defect creations (PDC) in amorphous chalcogenides, which is described by the stretched exponential function (SEF), while the well known photodarkening (PD) and photoinduced volume expansion (PVE) are governed only by the exponential function. It is shown that the exponential distribution of the thermal activation barrier produces the SEF in PDC, suggesting that thermal energy, as well as photon energy, is incorporated in PDC mechanisms. The differences in dynamics among three major photoinduced effects (PD, PVE, and PDC) in amorphous chalcogenides are now well understood.

  7. Firing patterns in the adaptive exponential integrate-and-fire model.

    PubMed

    Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram

    2008-11-01

    For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to real experiments of cortical neurons under step current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
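
    For reference, the two model equations mentioned above take the standard adaptive exponential integrate-and-fire form (standard notation, not copied from the paper):

        C \frac{dV}{dt} = -g_L (V - E_L) + g_L \Delta_T \exp\!\left(\frac{V - V_T}{\Delta_T}\right) - w + I, \qquad
        \tau_w \frac{dw}{dt} = a (V - E_L) - w,

    with the reset V → V_r and w → w + b applied whenever V escapes past a numerical cutoff; the firing patterns discussed in the abstract correspond to different choices of the adaptation parameters (a, b, τ_w) and the reset value V_r.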

  8. On the Matrix Exponential Function

    ERIC Educational Resources Information Center

    Hou, Shui-Hung; Hou, Edwin; Pang, Wan-Kai

    2006-01-01

    A novel and simple formula for computing the matrix exponential function is presented. Specifically, it can be used to derive explicit formulas for the matrix exponential of a general matrix A satisfying p(A) = 0 for a polynomial p(s). It is ready for use in a classroom and suitable for both hand as well as symbolic computation.
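
    As a concrete illustration of why p(A) = 0 makes exp(A) computable in closed form, the sketch below expresses exp(A) as a polynomial in A of degree less than n for a matrix with distinct eigenvalues; it is a generic interpolation argument, not the specific formula of the article, and the example matrix is invented.

        # Sketch: if A is n x n with distinct eigenvalues, exp(A) equals a polynomial in A
        # of degree < n; the coefficients solve a Vandermonde system at the eigenvalues.
        import numpy as np
        from scipy.linalg import expm

        A = np.array([[0.0, 1.0],
                      [-2.0, -3.0]])          # example matrix, eigenvalues -1 and -2

        lam = np.linalg.eigvals(A)            # eigenvalues of A
        V = np.vander(lam, increasing=True)   # Vandermonde matrix [lam_i**k]
        c = np.linalg.solve(V, np.exp(lam))   # coefficients with sum_k c_k lam_i**k = exp(lam_i)

        poly = sum(ck * np.linalg.matrix_power(A, k) for k, ck in enumerate(c))
        print(np.allclose(np.real(poly), expm(A)))   # True: the polynomial reproduces exp(A)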

  9. On the Gibbs phenomenon 1: Recovering exponential accuracy from the Fourier partial sum of a non-periodic analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang; Solomonoff, Alex; Vandeven, Herve

    1992-01-01

    It is well known that the Fourier series of an analytic and periodic function, truncated after 2N+1 terms, converges exponentially with N, even in the maximum norm. If the function is analytic but not periodic, however, the truncated Fourier series converges only slowly and oscillates near the boundaries; this is the Gibbs phenomenon. Here, we show that the first 2N+1 Fourier coefficients still contain enough information about the function, so that an exponentially convergent approximation (in the maximum norm) can be constructed.

  10. Entanglement properties of boundary state and thermalization

    NASA Astrophysics Data System (ADS)

    Guo, Wu-zhong

    2018-06-01

    We discuss the regularized boundary state e^{-τ_0 H}|B⟩_a on two aspects in both 2D CFT and higher dimensional free field theory. One is its entanglement and correlation properties, which exhibit exponential decay in 2D CFT; the parameter 1/τ_0 works as a mass scale. The other concerns its time evolution, i.e., e^{-itH}e^{-τ_0 H}|B⟩_a. We investigate the Kubo-Martin-Schwinger (KMS) condition on correlation functions of local operators to detect the thermal properties. Interestingly, we find that the correlation functions in the initial state e^{-τ_0 H}|B⟩_a also partially satisfy the KMS condition. In the limit t → ∞, the correlators satisfy the KMS condition exactly. We analyse a general quantum quench by a pure state and obtain constraints on the possible form of the 2-point correlation function in the initial state if it is assumed to satisfy the KMS condition in the final state. As a byproduct we find that, in the large τ_0 limit, the thermal property of the 2-point function also appears in e^{-τ_0 H}|B⟩_a.

  11. Using Differentials to Differentiate Trigonometric and Exponential Functions

    ERIC Educational Resources Information Center

    Dray, Tevian

    2013-01-01

    Starting from geometric definitions, we show how differentials can be used to differentiate trigonometric and exponential functions without limits, numerical estimates, solutions of differential equations, or integration.

  12. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498

  13. An Unusual Exponential Graph

    ERIC Educational Resources Information Center

    Syed, M. Qasim; Lovatt, Ian

    2014-01-01

    This paper is an addition to the series of papers on the exponential function begun by Albert Bartlett. In particular, we ask how the graph of the exponential function y = e^{-t/τ} would appear if y were plotted versus ln t rather than the normal practice of plotting ln y versus t. In answering this question, we find a new way to…

  14. Compact exponential product formulas and operator functional derivative

    NASA Astrophysics Data System (ADS)

    Suzuki, Masuo

    1997-02-01

    A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians.
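
    The classical starting point for such compact expressions is the Baker-Campbell-Hausdorff expansion, whose first few terms are (a standard result, quoted here only for orientation):

        \log\left(e^{X} e^{Y}\right) = X + Y + \tfrac{1}{2}[X,Y] + \tfrac{1}{12}\big[X,[X,Y]\big] - \tfrac{1}{12}\big[Y,[X,Y]\big] + \cdots;

    the scheme described in the abstract is a way of organizing such free-Lie-algebra expressions more compactly.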

  15. R-Function Relationships for Application in the Fractional Calculus

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.; Hartley, Tom T.

    2000-01-01

    The F-function, and its generalization the R-function, are of fundamental importance in the fractional calculus. It has been shown that the solution of the fundamental linear fractional differential equation may be expressed in terms of these functions. These functions serve as generalizations of the exponential function in the solution of fractional differential equations. Because of this central role in the fractional calculus, this paper explores various intrarelationships of the R-function, which will be useful in further analysis. Relationships of the R-function to the common exponential function, e^t, and its fractional derivatives are shown. From the relationships developed, some important approximations are observed. Further, the inverse relationships of the exponential function, e^t, in terms of the R-function are developed. Also, some approximations for the R-function are developed.
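
    For orientation, a commonly quoted series form of the R-function (an assumption about the convention, with the time origin set to zero; the paper should be consulted for the exact definition) is

        R_{q,v}(a, t) = \sum_{n=0}^{\infty} \frac{a^{n}\, t^{(n+1)q - 1 - v}}{\Gamma\!\big((n+1)q - v\big)},

    which reduces to the ordinary exponential, R_{1,0}(a, t) = e^{a t}, and so makes the relationship to e^t mentioned above explicit.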

  16. R-function relationships for application in the fractional calculus.

    PubMed

    Lorenzo, Carl F; Hartley, Tom T

    2008-01-01

    The F-function, and its generalization the R-function, are of fundamental importance in the fractional calculus. It has been shown that the solution of the fundamental linear fractional differential equation may be expressed in terms of these functions. These functions serve as generalizations of the exponential function in the solution of fractional differential equations. Because of this central role in the fractional calculus, this paper explores various intrarelationships of the R-function, which will be useful in further analysis. Relationships of the R-function to the common exponential function, e^t, and its fractional derivatives are shown. From the relationships developed, some important approximations are observed. Further, the inverse relationships of the exponential function, e^t, in terms of the R-function are developed. Also, some approximations for the R-function are developed.

  17. Correction of engineering servicing regularity of transporttechnological machines in operational process

    NASA Astrophysics Data System (ADS)

    Makarova, A. N.; Makarov, E. I.; Zakharov, N. S.

    2018-03-01

    In the article, the issue of correcting engineering servicing regularity on the basis of actual dependability data of cars in operation is considered. The purpose of the research is to increase the dependability of transport-technological machines by correcting the engineering servicing regularity. The subject of the research is the mechanism by which engineering servicing regularity influences the reliability measure. On the basis of an analysis of earlier research, a method of nonparametric estimation of the car failure measure from actual time-to-failure data was chosen. The possibility of describing the dependence of the failure measure on engineering servicing regularity with various mathematical models is considered, and it is shown that the exponential model is the most appropriate for that purpose. The obtained results can be used as a stand-alone method of correcting engineering servicing regularity for given operational conditions, as well as for improving the technical-economic and economic-stochastic methods. Thus, on the basis of the conducted research, a method for correcting the engineering servicing regularity of transport-technological machines in operation was developed. Use of this method will allow the number of failures to be decreased.

  18. Adiabatic regularization of the power spectrum in nonminimally coupled general single-field inflation

    NASA Astrophysics Data System (ADS)

    Alinea, Allan L.; Kubota, Takahiro

    2018-03-01

    We perform adiabatic regularization of power spectrum in nonminimally coupled general single-field inflation with varying speed of sound. The subtraction is performed within the framework of earlier study by Urakawa and Starobinsky dealing with the canonical inflation. Inspired by Fakir and Unruh's model on nonminimally coupled chaotic inflation, we find upon imposing near scale-invariant condition, that the subtraction term exponentially decays with the number of e -folds. As in the result for the canonical inflation, the regularized power spectrum tends to the "bare" power spectrum as the Universe expands during (and even after) inflation. This work justifies the use of the "bare" power spectrum in standard calculation in the most general context of slow-roll single-field inflation involving nonminimal coupling and varying speed of sound.

  19. Points of View: A Survey of Survey Courses--Are They Effective? Argument Favoring a Survey as the First Course for Majors

    ERIC Educational Resources Information Center

    Ledbetter, Mary Lee; Campbell, A. Malcolm

    2005-01-01

    Reasonable people disagree about how to introduce undergraduate students to the marvels and complexities of the biological sciences. With intrinsically varied subdisciplines within biology, exponentially growing bases of information, and new unifying theories rising regularly, introduction to the curriculum is a challenge. Some decide to focus…

  20. Expert Consensus on Barriers to College and University Online Education for Students with Blindness and Low Vision

    ERIC Educational Resources Information Center

    Pavithran, Sachin D.

    2017-01-01

    Online education courses have increased exponentially over the last twenty years. These courses provide opportunities for education to students that may find attending in a regular classroom difficult, if not impossible. The number of students with disabilities enrolling in online education courses is also increasing. However, because of the mode…

  1. Non-extensive quantum statistics with particle-hole symmetry

    NASA Astrophysics Data System (ADS)

    Biró, T. S.; Shen, K. M.; Zhang, B. W.

    2015-06-01

    Based on the Tsallis entropy (1988) and the corresponding deformed exponential function, generalized distribution functions for bosons and fermions have been in use for some time (Teweldeberhan et al. 2003; Silva et al. 2010). However, aiming at a non-extensive quantum statistics, further requirements arise from the symmetric handling of particles and holes (excitations above and below the Fermi level). Naive replacements of the exponential function or "cut and paste" solutions fail to satisfy this symmetry and to be smooth at the Fermi level at the same time. We solve this problem by a general ansatz dividing the deformed exponential into odd and even terms, and demonstrate how earlier suggestions, such as the κ- and q-exponential, behave in this respect.
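
    For context, the deformed (Tsallis) exponential referred to above is conventionally written as

        e_q(x) = \big[1 + (1-q)\,x\big]_{+}^{1/(1-q)}, \qquad \lim_{q \to 1} e_q(x) = e^{x};

    the particle-hole-symmetric construction sketched in the abstract splits this deformed exponential into its even and odd parts before building the Bose and Fermi distribution functions.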

  2. Exponential decline of deep-sea ecosystem functioning linked to benthic biodiversity loss.

    PubMed

    Danovaro, Roberto; Gambi, Cristina; Dell'Anno, Antonio; Corinaldesi, Cinzia; Fraschetti, Simonetta; Vanreusel, Ann; Vincx, Magda; Gooday, Andrew J

    2008-01-08

    Recent investigations suggest that biodiversity loss might impair the functioning and sustainability of ecosystems. Although deep-sea ecosystems are the most extensive on Earth, represent the largest reservoir of biomass, and host a large proportion of undiscovered biodiversity, the data needed to evaluate the consequences of biodiversity loss on the ocean floor are completely lacking. Here, we present a global-scale study based on 116 deep-sea sites that relates benthic biodiversity to several independent indicators of ecosystem functioning and efficiency. We show that deep-sea ecosystem functioning is exponentially related to deep-sea biodiversity and that ecosystem efficiency is also exponentially linked to functional biodiversity. These results suggest that a higher biodiversity supports higher rates of ecosystem processes and an increased efficiency with which these processes are performed. The exponential relationships presented here, being consistent across a wide range of deep-sea ecosystems, suggest that mutually positive functional interactions (ecological facilitation) can be common in the largest biome of our biosphere. Our results suggest that a biodiversity loss in deep-sea ecosystems might be associated with exponential reductions of their functions. Because the deep sea plays a key role in ecological and biogeochemical processes at a global scale, this study provides scientific evidence that the conservation of deep-sea biodiversity is a priority for a sustainable functioning of the world's oceans.

  3. Compact exponential product formulas and operator functional derivative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, M.

    1997-02-01

    A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians. © 1997 American Institute of Physics.

  4. The Exponential Function--Part VIII

    ERIC Educational Resources Information Center

    Bartlett, Albert A.

    1978-01-01

    Presents part eight of a continuing series on the exponential function in which, given the current population of the Earth and assuming a constant growth rate of 1.9 percent, backward looks at world population are made. (SL)

  5. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-04-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  6. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-06-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  7. Collapse of a self-similar cylindrical scalar field with non-minimal coupling II: strong cosmic censorship

    NASA Astrophysics Data System (ADS)

    Condron, Eoin; Nolan, Brien C.

    2014-08-01

    We investigate self-similar scalar field solutions to the Einstein equations in whole cylinder symmetry. Imposing self-similarity on the spacetime gives rise to a set of single variable functions describing the metric. Furthermore, it is shown that the scalar field is dependent on a single unknown function of the same variable and that the scalar field potential has exponential form. The Einstein equations then take the form of a set of ODEs. Self-similarity also gives rise to a singularity at the scaling origin. We extend the work of Condron and Nolan (2014 Class. Quantum Grav. 31 015015), which determined the global structure of all solutions with a regular axis in the causal past of the singularity. We identified a class of solutions that evolves through the past null cone of the singularity. We give the global structure of these solutions and show that the singularity is censored in all cases.

  8. Forecasting Epidemics Through Nonparametric Estimation of Time-Dependent Transmission Rates Using the SEIR Model.

    PubMed

    Smirnova, Alexandra; deCamp, Linda; Chowell, Gerardo

    2017-05-02

    Deterministic and stochastic methods relying on early case incidence data for forecasting epidemic outbreaks have received increasing attention during the last few years. In mathematical terms, epidemic forecasting is an ill-posed problem due to instability of parameter identification and limited available data. While previous studies have largely estimated the time-dependent transmission rate by assuming specific functional forms (e.g., exponential decay) that depend on a few parameters, here we introduce a novel approach for the reconstruction of nonparametric time-dependent transmission rates by projecting onto a finite subspace spanned by Legendre polynomials. This approach enables us to effectively forecast future incidence cases, a clear advantage over recovering the transmission rate only at finitely many grid points within the interval where the data are currently available. In our approach, we compare three regularization algorithms: variational (Tikhonov's) regularization, truncated singular value decomposition (TSVD), and modified TSVD in order to determine the stabilizing strategy that is most effective in terms of reliability of forecasting from limited data. We illustrate our methodology using simulated data as well as case incidence data for various epidemics including the 1918 influenza pandemic in San Francisco and the 2014-2015 Ebola epidemic in West Africa.
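
    As an illustration of the two ingredients named above (a Legendre-polynomial representation of the transmission rate and variational Tikhonov regularization), here is a minimal sketch; the grid, the basis size, the regularization weight and the synthetic data are invented for the example and are not taken from the study.

        # Sketch: represent a smooth rate curve by a short Legendre expansion and fit it
        # to noisy observations with Tikhonov (ridge) regularization of the coefficients.
        import numpy as np
        from numpy.polynomial import legendre

        t = np.linspace(0.0, 1.0, 60)                    # rescaled time grid (assumed)
        beta_true = 0.9 * np.exp(-2.0 * t) + 0.1         # hypothetical "true" transmission rate
        y = beta_true + 0.03 * np.random.default_rng(0).normal(size=t.size)   # noisy data

        x = 2.0 * t - 1.0                                # map [0, 1] onto [-1, 1] for Legendre
        A = legendre.legvander(x, 5)                     # design matrix of Legendre polynomials
        lam = 1e-2                                       # Tikhonov weight (assumed)

        # Variational (Tikhonov) regularization: solve (A^T A + lam I) c = A^T y
        c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
        beta_fit = A @ c                                 # reconstructed rate on the grid
        print(np.max(np.abs(beta_fit - beta_true)))      # crude check of the reconstruction error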

  9. Computing many-body wave functions with guaranteed precision: the first-order Møller-Plesset wave function for the ground state of helium atom.

    PubMed

    Bischoff, Florian A; Harrison, Robert J; Valeev, Edward F

    2012-09-14

    We present an approach to compute accurate correlation energies for atoms and molecules using an adaptive discontinuous spectral-element multiresolution representation for the two-electron wave function. Because of the exponential storage complexity of the spectral-element representation with the number of dimensions, a brute-force computation of two-electron (six-dimensional) wave functions with high precision was not practical. To overcome the key storage bottlenecks we utilized (1) a low-rank tensor approximation (specifically, the singular value decomposition) to compress the wave function, and (2) explicitly correlated R12-type terms in the wave function to regularize the Coulomb electron-electron singularities of the Hamiltonian. All operations necessary to solve the Schrödinger equation were expressed so that the reconstruction of the full-rank form of the wave function is never necessary. Numerical performance of the method was highlighted by computing the first-order Møller-Plesset wave function of a helium atom. The computed second-order Møller-Plesset energy is precise to ~2 microhartrees, which is at the precision limit of the existing general atomic-orbital-based approaches. Our approach does not assume special geometric symmetries, hence application to molecules is straightforward.

  10. Improving ATLAS grid site reliability with functional tests using HammerCloud

    NASA Astrophysics Data System (ADS)

    Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan

    2012-12-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performances. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, thereby preventing user or production jobs from being sent to problematic sites.

  11. Decision Support System for hydrological extremes

    NASA Astrophysics Data System (ADS)

    Bobée, Bernard; El Adlouni, Salaheddine

    2014-05-01

    The study of the tail behaviour of extreme event distributions is important in several applied statistical fields such as hydrology, finance, and telecommunications. For example, in hydrology it is important to estimate extreme quantiles adequately in order to build and manage safe and effective hydraulic structures (dams, for example). Two main classes of distributions are used in hydrological frequency analysis: the class D of sub-exponential distributions (Gamma (G2), Gumbel, Halphen type A (HA), Halphen type B (HB), …) and the class C of regularly varying distributions (Fréchet, Log-Pearson, Halphen type IB, …) with a heavier tail. A Decision Support System (DSS) based on the characterization of the right tail, corresponding to a low probability of exceedance p (high return period T = 1/p in hydrology), has been developed. The DSS allows discriminating between classes C and D, and in its latest version a new prior step is added in order to test lognormality. Indeed, the right tail of the Lognormal distribution (LN) lies between the tails of the distributions of classes C and D; studies have indicated difficulty in discriminating between LN and distributions of classes C and D. Other tools are useful to discriminate between distributions of the same class D (HA, HB and G2; see other communication). Numerical illustrations show that the DSS allows discriminating between Lognormal, regularly varying and sub-exponential distributions, and leads to coherent conclusions. Key words: Regularly varying distributions, subexponential distributions, Decision Support System, Heavy tailed distribution, Extreme value theory

  12. Regularization of moving boundaries in a laplacian field by a mixed Dirichlet-Neumann boundary condition: exact results.

    PubMed

    Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar

    2005-11-04

    The dynamics of ionization fronts that generate a conducting body are in the simplest approximation equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For some parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.

  13. On-Line Identification of Simulation Examples for Forgetting Methods to Track Time Varying Parameters Using the Alternative Covariance Matrix in Matlab

    NASA Astrophysics Data System (ADS)

    Vachálek, Ján

    2011-12-01

    The paper compares the ability of forgetting methods to track the time-varying parameters of two different simulated models under different types of excitation. The quantities observed in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values and a prediction-error count within a selected band. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we used a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) along with Directional Forgetting (DF) and three standard regularized methods.
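
    For readers unfamiliar with the baseline that REFACM and DF modify, a plain recursive-least-squares update with an exponential forgetting factor looks like the sketch below; the regression model and the forgetting factor are invented for the illustration, and the alternative-covariance and regularization modifications of the paper are not reproduced.

        # Baseline exponential forgetting in recursive least squares: data older than the
        # present sample are discounted geometrically by the factor lam < 1.
        import numpy as np

        rng = np.random.default_rng(1)
        theta_true = np.array([2.0, -1.0])           # parameters to be tracked
        theta = np.zeros(2)                          # current estimate
        P = 1e3 * np.eye(2)                          # covariance matrix of the estimate
        lam = 0.98                                   # forgetting factor (assumed value)

        for _ in range(200):
            x = rng.normal(size=2)                   # regressor vector
            y = x @ theta_true + 0.05 * rng.normal() # measured output with noise
            Px = P @ x
            k = Px / (lam + x @ Px)                  # gain vector
            theta = theta + k * (y - x @ theta)      # correct the estimate
            P = (P - np.outer(k, Px)) / lam          # discount old information

        print(theta)                                 # close to theta_true after convergence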

  14. A mechanism producing power law etc. distributions

    NASA Astrophysics Data System (ADS)

    Li, Heling; Shen, Hongjun; Yang, Bin

    2017-07-01

    Power-law distributions play an increasingly important role in the study of complex systems. Motivated by the intractability of complex systems, the idea of incomplete statistics is utilized and expanded: three different exponential factors are introduced into the equations for the normalization condition, the statistical average and the Shannon entropy, and probability distribution functions of exponential form, power-law form, and the product of a power law and an exponential are derived from the Shannon entropy and the maximum entropy principle. It is thus shown that the maximum entropy principle can completely replace the equal-probability hypothesis. Since power-law distributions, and distributions of the product form between a power law and an exponential, cannot be derived from the equal-probability hypothesis but can be derived with the aid of the maximum entropy principle, it can also be concluded that the maximum entropy principle is the more basic principle, embodying concepts more broadly and revealing the fundamental laws of motion of objects more deeply. At the same time, this principle reveals the intrinsic links between Nature and different objects in human society and the principles they all obey.

  15. Power and Roots by Recursion.

    ERIC Educational Resources Information Center

    Aieta, Joseph F.

    1987-01-01

    This article illustrates how questions from elementary finance can serve as motivation for studying high order powers, roots, and exponential functions using Logo procedures. A second discussion addresses a relatively unknown algorithm for the trigonometric, exponential and hyperbolic functions. (PK)

  16. Well-posedness of the Prandtl equation with monotonicity in Sobolev spaces

    NASA Astrophysics Data System (ADS)

    Chen, Dongxiang; Wang, Yuxi; Zhang, Zhifei

    2018-05-01

    By using the paralinearization technique, we prove the well-posedness of the Prandtl equation for monotonic data in anisotropic Sobolev space with exponential weight and low regularity. The proof is very elementary, thus is expected to provide a new possible way for the zero-viscosity limit problem of the Navier-Stokes equations with the non-slip boundary condition.

  17. Solutions of differential equations with regular coefficients by the methods of Richmond and Runge-Kutta

    NASA Technical Reports Server (NTRS)

    Cockrell, C. R.

    1989-01-01

    Numerical solutions of the differential equation which describes the electric field within an inhomogeneous layer of permittivity, upon which a perpendicularly-polarized plane wave is incident, are considered. Richmond's method and the Runge-Kutta method are compared for linear and exponential profiles of permittivity. These two approximate solutions are also compared with the exact solutions.

  18. Regular black holes in Einstein-Gauss-Bonnet gravity

    NASA Astrophysics Data System (ADS)

    Ghosh, Sushant G.; Singh, Dharm Veer; Maharaj, Sunil D.

    2018-05-01

    Einstein-Gauss-Bonnet theory, a natural generalization of general relativity to a higher dimension, admits a static spherically symmetric black hole which was obtained by Boulware and Deser. This black hole is similar to its general relativity counterpart with a curvature singularity at r = 0. We present an exact 5D regular black hole metric, with parameter (k > 0), that interpolates between the Boulware-Deser black hole (k = 0) and the Wiltshire charged black hole (r ≫ k). Owing to the appearance of the exponential correction factor (e^{-k/r^2}), responsible for regularizing the metric, the thermodynamical quantities are modified, and it is demonstrated that the Hawking-Page phase transition is achievable. The heat capacity diverges at a critical radius r = r_C, where incidentally the temperature is maximum. Thus, we have a regular black hole with Cauchy and event horizons, and evaporation leads to a thermodynamically stable double-horizon black hole remnant with vanishing temperature. The entropy does not satisfy the usual exact horizon area result of general relativity.

  19. Regularization techniques for backward--in--time evolutionary PDE problems

    NASA Astrophysics Data System (ADS)

    Gustafsson, Jonathan; Protas, Bartosz

    2007-11-01

    Backward-in-time evolutionary PDE problems have applications in the recently proposed retrograde data assimilation. We consider the terminal value problem for the Kuramoto-Sivashinsky equation (KSE) in a 1D periodic domain as our model system. The KSE, proposed as a model for interfacial and combustion phenomena, is also often adopted as a toy model for hydrodynamic turbulence because of its multiscale and chaotic dynamics. Backward-in-time problems are typical examples of ill-posed problems, in which disturbances are amplified exponentially during the backward march. Regularization is required to solve such problems efficiently, and we consider approaches in which the original ill-posed problem is approximated with a less ill-posed problem obtained by adding a regularization term to the original equation. While such techniques are relatively well understood for linear problems, they are less understood in the present nonlinear setting. We consider regularization terms with fixed magnitudes and also explore a novel approach in which these magnitudes are adapted dynamically using simple concepts from control theory.

  20. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…

  1. On the origin of stretched exponential (Kohlrausch) relaxation kinetics in the room temperature luminescence decay of colloidal quantum dots.

    PubMed

    Bodunov, E N; Antonov, Yu A; Simões Gamboa, A L

    2017-03-21

    The non-exponential room temperature luminescence decay of colloidal quantum dots is often well described by a stretched exponential function. However, the physical meaning of the parameters of the function is not clear in the majority of cases reported in the literature. In this work, the room temperature stretched exponential luminescence decay of colloidal quantum dots is investigated theoretically in an attempt to identify the underlying physical mechanisms associated with the parameters of the function. Three classes of non-radiative transition processes between the excited and ground states of colloidal quantum dots are discussed: long-range resonance energy transfer, multiphonon relaxation, and contact quenching without diffusion. It is shown that multiphonon relaxation cannot explain a stretched exponential functional form of the luminescence decay while such dynamics of relaxation can be understood in terms of long-range resonance energy transfer to acceptors (molecules, quantum dots, or anharmonic molecular vibrations) in the environment of the quantum dots acting as energy-donors or by contact quenching by acceptors (surface traps or molecules) distributed statistically on the surface of the quantum dots. These non-radiative transition processes are assigned to different ranges of the stretching parameter β.
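
    For reference, the stretched exponential (Kohlrausch) decay discussed above has the form

        I(t) = I_{0}\, \exp\!\left[-\left(t/\tau\right)^{\beta}\right], \qquad 0 < \beta \le 1,

    where τ is an effective lifetime and β is the stretching parameter whose microscopic interpretation the paper addresses.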

  2. A Simulation of the ECSS Help Desk with the Erlang a Model

    DTIC Science & Technology

    2011-03-01

    …a popular distribution is the exponential distribution, shown in the source as Figure 3: Exponential Distribution (Bourke, 2001). Cited reference: Bourke, P. (2001, January). Miscellaneous Functions. Retrieved January 22, 2011, from http://local.wasp.uwa.edu.au

  3. [A modification of the Gompertz plot resulting from the age index by Ries and an approximation of the survivorship curve (author's transl)].

    PubMed

    Lohmann, W

    1978-01-01

    The shape of the survivorship curve can easily be interpreted on the assumption that the probability of death is proportional to an exponentially rising function of ageing. Since the age index of Ries is formed as a sum, it was investigated to what extent the survivorship curve may be approximated by a sum of exponentials. It follows that, for realistic parameter values, the difference between the pure exponential function and a sum of exponentials lies within the random variation. Because the probability of death varies between diseases, the new formulation is preferable.
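
    In the Gompertz picture assumed here, the force of mortality rises exponentially with age and the survivorship curve follows by integration (standard textbook form, not quoted from the paper):

        \mu(t) = A\, e^{\gamma t}, \qquad
        S(t) = \exp\!\left[-\frac{A}{\gamma}\left(e^{\gamma t} - 1\right)\right];

    the question examined in the abstract is how closely such an S(t) can be approximated by a sum of exponential terms.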

  4. Heuristic lipophilicity potential for computer-aided rational drug design: Optimizations of screening functions and parameters

    NASA Astrophysics Data System (ADS)

    Du, Qishi; Mezey, Paul G.

    1998-09-01

    In this research we test and compare three possible atom-based screening functions used in the heuristic molecular lipophilicity potential (HMLP). Screening function 1 is a power-law distance-dependent function, b_i/|R_i − r|^γ; screening function 2 is an exponential distance-dependent function, b_i exp(−|R_i − r|/d_0); and screening function 3 is a weighted distance-dependent function, sign(b_i) exp[ξ(|R_i − r|/|b_i|)]. For every screening function, the parameters (γ, d_0, and ξ) are optimized using 41 common organic molecules of four types of compound: aliphatic alcohols, aliphatic carboxylic acids, aliphatic amines, and aliphatic alkanes. The calculations show that screening function 3 cannot give chemically reasonable results, whereas both the power and the exponential screening functions give chemically satisfactory results. There are two notable differences between screening functions 1 and 2. First, the exponential screening function takes larger values at short distances than the power screening function, so more influence from the nearest neighbours is involved with screening function 2 than with screening function 1. Second, the power screening function takes larger values at long distances than the exponential screening function, so screening function 1 is affected by atoms at long distance more than screening function 2. For the exponential screening function, the suitable range of the parameter d_0 is 1.5 < d_0 < 3.0, and d_0 = 2.0 is recommended. The HMLP developed in this research provides a potential tool for computer-aided three-dimensional drug design.

  5. A Test of the Exponential Distribution for Stand Structure Definition in Uneven-aged Loblolly-Shortleaf Pine Stands

    Treesearch

    Paul A. Murphy; Robert M. Farrar

    1981-01-01

    In this study, 588 before-cut and 381 after-cut diameter distributions of uneven-aged loblolly-shortleaf pine stands were fitted to two different forms of the exponential probability density function. The left-truncated and doubly truncated forms of the exponential were used.

  6. Spatiotemporal dynamics of neocortical excitation and inhibition during human sleep.

    PubMed

    Peyrache, Adrien; Dehghani, Nima; Eskandar, Emad N; Madsen, Joseph R; Anderson, William S; Donoghue, Jacob A; Hochberg, Leigh R; Halgren, Eric; Cash, Sydney S; Destexhe, Alain

    2012-01-31

    Intracranial recording is an important diagnostic method routinely used in a number of neurological monitoring scenarios. In recent years, advancements in such recordings have been extended to include unit activity of an ensemble of neurons. However, a detailed functional characterization of excitatory and inhibitory cells has not been attempted in human neocortex, particularly during the sleep state. Here, we report that such feature discrimination is possible from high-density recordings in the neocortex by using 2D multielectrode arrays. Successful separation of regular-spiking neurons (or bursting cells) from fast-spiking cells resulted in well-defined clusters that each showed unique intrinsic firing properties. The high density of the array, which allowed recording from a large number of cells (up to 90), helped us to identify apparent monosynaptic connections, confirming the excitatory and inhibitory nature of regular-spiking and fast-spiking cells, thus categorized as putative pyramidal cells and interneurons, respectively. Finally, we investigated the dynamics of correlations within each class. A marked exponential decay with distance was observed in the case of excitatory but not for inhibitory cells. Although the amplitude of that decline depended on the timescale at which the correlations were computed, the spatial constant did not. Furthermore, this spatial constant is compatible with the typical size of human columnar organization. These findings provide a detailed characterization of neuronal activity, functional connectivity at the microcircuit level, and the interplay of excitation and inhibition in the human neocortex.

  7. On the Gibbs phenomenon 3: Recovering exponential accuracy in a sub-interval from a spectral partial sum of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1993-01-01

    The investigation of overcoming Gibbs phenomenon was continued, i.e., obtaining exponential accuracy at all points including at the discontinuities themselves, from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. It was shown that if we are given the first N expansion coefficients of an L_2 function f(x) in terms of either the trigonometrical polynomials or the Chebyshev or Legendre polynomials, an exponentially convergent approximation to the point values of f(x) in any sub-interval in which it is analytic can be constructed.

  8. Quantifying patterns of research interest evolution

    NASA Astrophysics Data System (ADS)

    Jia, Tao; Wang, Dashun; Szymanski, Boleslaw

    Changing and shifting research interest is an integral part of a scientific career. Despite extensive investigations of various factors that influence a scientist's choice of research topics, quantitative assessments of mechanisms that give rise to macroscopic patterns characterizing research interest evolution of individual scientists remain limited. Here we perform a large-scale analysis of extensive publication records, finding that research interest change follows a reproducible pattern characterized by an exponential distribution. We identify three fundamental features responsible for the observed exponential distribution, which arise from a subtle interplay between exploitation and exploration in research interest evolution. We develop a random walk based model, which adequately reproduces our empirical observations. Our study presents one of the first quantitative analyses of macroscopic patterns governing research interest change, documenting a high degree of regularity underlying scientific research and individual careers.

  9. Voter model with non-Poissonian interevent intervals

    NASA Astrophysics Data System (ADS)

    Takaguchi, Taro; Masuda, Naoki

    2011-09-01

    Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.

  10. Initial mass function of planetesimals formed by the streaming instability

    NASA Astrophysics Data System (ADS)

    Schäfer, Urs; Yang, Chao-Chin; Johansen, Anders

    2017-01-01

    The streaming instability is a mechanism to concentrate solid particles into overdense filaments that undergo gravitational collapse and form planetesimals. However, it remains unclear how the initial mass function of these planetesimals depends on the box dimensions of numerical simulations. To resolve this, we perform simulations of planetesimal formation with the largest box dimensions to date, allowing planetesimals to form simultaneously in multiple filaments that can only emerge within such large simulation boxes. In our simulations, planetesimals with sizes between 80 km and several hundred kilometers form. We find that a power law with a rather shallow exponential cutoff at the high-mass end represents the cumulative birth mass function better than an integrated power law. The steepness of the exponential cutoff is largely independent of box dimensions and resolution, while the exponent of the power law is not constrained at the resolutions we employ. Moreover, we find that the characteristic mass scale of the exponential cutoff correlates with the mass budget in each filament. Together with previous studies of high-resolution simulations with small box domains, our results therefore imply that the cumulative birth mass function of planetesimals is consistent with an exponentially tapered power law with a power-law exponent of approximately -1.6 and a steepness of the exponential cutoff in the range of 0.3-0.4.

  11. Geometry of the q-exponential distribution with dependent competing risks and accelerated life testing

    NASA Astrophysics Data System (ADS)

    Zhang, Fode; Shi, Yimin; Wang, Ruibing

    2017-02-01

    In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Noting that the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered as a generalized Gumbel copula, is discussed to illustrate the structure of the dependent random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimates and the levels of association under different hybrid progressive censoring schemes (HPCSs).

  12. Microcomputer Calculation of Theoretical Pre-Exponential Factors for Bimolecular Reactions.

    ERIC Educational Resources Information Center

    Venugopalan, Mundiyath

    1991-01-01

    Described is the application of microcomputers to predict reaction rates based on theoretical atomic and molecular properties taught in undergraduate physical chemistry. Listed is the BASIC program which computes the partition functions for any specific bimolecular reactants. These functions are then used to calculate the pre-exponential factor of…

  13. A Comparison of Two Algorithms for the Simulation of Non-Homogeneous Poisson Processes with Degree-Two Exponential Polynomial Intensity Function.

    DTIC Science & Technology

    1977-09-01

    …process with an event stream intensity (rate) function that is of degree-two exponential polynomial form. (The use of exponential polynomials is…) …would serve as a good initial approximation for the Newton-Raphson method. However, for the purpose of this implementation, the end point which…

  14. Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model

    NASA Astrophysics Data System (ADS)

    Al Sobhi, Mashail M.

    2015-02-01

    Bayesian estimation for the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Also, Bayesian prediction bounds for future DGOS from exponentiated Weibull model are obtained. The symmetric and asymmetric loss functions are considered for Bayesian computations. The Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.

  15. AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Sanjib; Bland-Hawthorn, Joss

    2013-08-20

    An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.

  16. The First Derivative of an Exponential Function with the "White Box/Black Box" Didactical Principle and Observations with GeoGebra

    ERIC Educational Resources Information Center

    Budinski, Natalija; Subramaniam, Stephanie

    2013-01-01

    This paper shows how GeoGebra--a dynamic mathematics software--can be used to experiment, visualize and connect various concepts such as function, first derivative, slope, and tangent line. Students were given an assignment to determine the first derivative of the exponential function that they solved while experimenting with GeoGebra. GeoGebra…

  17. Approximating exponential and logarithmic functions using polynomial interpolation

    NASA Astrophysics Data System (ADS)

    Gordon, Sheldon P.; Yang, Yajun

    2017-04-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
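
    A minimal numerical sketch of the comparison described above (the interval, node placement and degree are chosen arbitrarily for the illustration and are not those of the article):

        # Compare a degree-4 interpolating polynomial for exp(x) on [0, 1] with the
        # degree-4 Taylor polynomial expanded at x = 0, using the maximum error.
        import numpy as np
        from math import factorial

        deg = 4
        nodes = np.linspace(0.0, 1.0, deg + 1)               # equally spaced interpolation nodes
        interp = np.polynomial.Polynomial.fit(nodes, np.exp(nodes), deg)

        taylor = np.polynomial.Polynomial([1.0 / factorial(k) for k in range(deg + 1)])

        x = np.linspace(0.0, 1.0, 1001)
        err_interp = np.max(np.abs(interp(x) - np.exp(x)))   # max error of the interpolant
        err_taylor = np.max(np.abs(taylor(x) - np.exp(x)))   # max error of the Taylor polynomial
        print(err_interp, err_taylor)                        # interpolation is far more accurate over the interval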

  18. Brownian motion in time-dependent logarithmic potential: Exact results for dynamics and first-passage properties.

    PubMed

    Ryabov, Artem; Berestneva, Ekaterina; Holubec, Viktor

    2015-09-21

    The paper addresses Brownian motion in the logarithmic potential with time-dependent strength, U(x, t) = g(t)log(x), subject to the absorbing boundary at the origin of coordinates. Such a model can represent the kinetics of diffusion-controlled reactions of charged molecules or the escape of Brownian particles over a time-dependent entropic barrier at the end of a biological pore. We present a simple asymptotic theory which yields the long-time behavior of both the survival probability (first-passage properties) and the moments of the particle position (dynamics). The asymptotic survival probability, i.e., the probability that the particle will not hit the origin before a given time, is a functional of the potential strength. As such, it exhibits a rather varied behavior for different functions g(t). The latter can be grouped into three classes according to the regime of the asymptotic decay of the survival probability. We distinguish 1. the regular regime (power-law decay), 2. the marginal regime (power law times a slow function of time), and 3. the regime of enhanced absorption (decay faster than the power law, e.g., exponential). Results of the asymptotic theory show good agreement with numerical simulations.

  19. Regular consumption of fresh orange juice increases human skin carotenoid content.

    PubMed

    Massenti, Roberto; Perrone, Anna; Livrea, Maria Antonietta; Lo Bianco, Riccardo

    2015-01-01

    Dermal carotenoids are a good indicator of antioxidant status in the body. This study aimed to determine whether regular consumption of orange juice could increase dermal carotenoids. Two types of orange juice, obtained from regularly (CI) and partially (PRD) irrigated trees, were tested to reveal any possible association between juice and dermal carotenoids. Soluble solids, titratable acidity, and total carotenoids were quantified in the juice; skin carotenoid score (SCS) was assessed by Raman spectroscopy. Carotenoid content was 7.3% higher in PRD than in CI juice, inducing no difference in SCS. In a first trial with daily juice intakes for 25 days, SCS increased linearly (10%) in the individual with higher initial SCS, and exponentially (15%) in the individual with lower initial SCS. In a second trial, SCS showed a 6.5% increase after 18 days of drinking juice every other day, but returned to initial values three days after last intake. Skin carotenoids can be increased by regular consumption of fresh orange juice, while their persistence may depend on the accumulation level, environmental conditions or living habits.

  20. M. Riesz-Schur-type inequalities for entire functions of exponential type

    NASA Astrophysics Data System (ADS)

    Ganzburg, M. I.; Nevai, P.; Erdélyi, T.

    2015-01-01

    We prove a general M. Riesz-Schur-type inequality for entire functions of exponential type. If f and Q are two functions of exponential types σ > 0 and τ ≥ 0, respectively, and if Q is real-valued and the real zeros of Q, not counting multiplicities, are bounded away from each other, then |f(x)| \le (\sigma+\tau)\,\big(A_{\sigma+\tau}(Q)\big)^{-1/2}\,\|Qf\|_{C(\mathbb{R})} for x \in \mathbb{R}, where A_s(Q) := \inf_{x\in\mathbb{R}} \big( [Q'(x)]^2 + s^2 [Q(x)]^2 \big). We apply this inequality to the weights Q(x) := \sin(\tau x) and Q(x) := x and describe the extremal functions in the corresponding inequalities. Bibliography: 7 titles.

  1. Exponential Correlation of IQ and the Wealth of Nations

    ERIC Educational Resources Information Center

    Dickerson, Richard E.

    2006-01-01

    Plots of mean IQ and per capita real Gross Domestic Product for groups of 81 and 185 nations, as collected by Lynn and Vanhanen, are best fitted by an exponential function of the form GDP = a · 10^(b·IQ), where a and b are empirical constants. Exponential fitting yields markedly higher correlation coefficients than either linear or…
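
    The record's functional form GDP = a·10^(b·IQ) becomes an ordinary linear regression after taking log10. The sketch below fits it to synthetic data (the actual Lynn-Vanhanen country data are not reproduced here); the constants and noise model are invented.

    ```python
    # Fit GDP = a * 10**(b * IQ) on synthetic data by regressing log10(GDP) on IQ.
    import numpy as np

    rng = np.random.default_rng(1)
    iq = rng.uniform(70, 105, 80)
    true_a, true_b = 0.5, 0.035                      # made-up constants
    gdp = true_a * 10 ** (true_b * iq) * rng.lognormal(0.0, 0.2, iq.size)

    b, log10_a = np.polyfit(iq, np.log10(gdp), 1)    # log10(GDP) = log10(a) + b*IQ
    a = 10 ** log10_a
    r = np.corrcoef(iq, np.log10(gdp))[0, 1]
    print(f"a = {a:.3f}, b = {b:.4f}, correlation of log10(GDP) with IQ: r = {r:.3f}")
    ```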

  2. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.

    PubMed

    Brette, Romain; Gerstner, Wulfram

    2005-11-01

    We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model predicts correctly the timing of 96% of the spikes (+/-2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
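
    A forward-Euler sketch of the adaptive exponential integrate-and-fire (AdEx) model named in this record. The parameter values are a commonly quoted regular-spiking set and the step current is arbitrary; both should be treated as illustrative rather than as the paper's fitted parameters.

    ```python
    # Minimal AdEx integration: exponential spike mechanism plus adaptation variable.
    import numpy as np

    C, gL, EL = 281.0, 30.0, -70.6          # pF, nS, mV (illustrative values)
    VT, DeltaT = -50.4, 2.0                 # mV
    tau_w, a, b = 144.0, 4.0, 80.5          # ms, nS, pA
    V_reset, V_peak = -70.6, 0.0            # mV

    dt, T, I_ext = 0.1, 500.0, 800.0        # ms, ms, pA
    V, w = EL, 0.0
    spike_times = []
    for k in range(int(T / dt)):
        dV = (-gL * (V - EL) + gL * DeltaT * np.exp((V - VT) / DeltaT) - w + I_ext) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:                      # spike-and-reset rule
            spike_times.append(k * dt)
            V = V_reset
            w += b
    print(f"{len(spike_times)} spikes in {T:.0f} ms")
    ```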

  3. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

    The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing lifetimes in many settings, has a simple statistical form, and is characterized by a constant hazard rate; it is also a special case of the Weibull family. In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the corresponding analytic methods. The cases are limited to models with independent causes of failure, and a non-informative prior distribution is used in our analysis. The model describes the likelihood function, followed by the posterior distribution and the point and interval estimates, the hazard function, and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.

  4. On the Gibbs phenomenon 4: Recovering exponential accuracy in a sub-interval from a Gegenbauer partial sum of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1994-01-01

    We continue our investigation of overcoming Gibbs phenomenon, i.e., to obtain exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. We show that if we are given the first N Gegenbauer expansion coefficients, based on the Gegenbauer polynomials $C_k^{\mu}(x)$ with the weight function $(1 - x^2)^{\mu - 1/2}$ for any constant $\mu \ge 0$, of an $L_1$ function f(x), we can construct an exponentially convergent approximation to the point values of f(x) in any subinterval in which the function is analytic. The proof covers the cases of Chebyshev or Legendre partial sums, which are most common in applications.

  5. On the Gibbs phenomenon 5: Recovering exponential accuracy from collocation point values of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1994-01-01

    The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.

  6. Robust Object Tracking with a Hierarchical Ensemble Framework

    DTIC Science & Technology

    2016-10-09

    layer; 4 - update the top layer; 5 - re-extract the sub-patches and update their weights in the middle layer; 6 - update the parameters of weak classifiers... approaches [4], [5], which represent the target with a limited number of non-overlapping or regular local regions. So they may not cope well with the large... significantly reduce the feature dimensions so that our approach can handle colorful images without suffering from exponential memory explosion; 4

  7. Comparison of least squares and exponential sine sweep methods for Parallel Hammerstein Models estimation

    NASA Astrophysics Data System (ADS)

    Rebillat, Marc; Schoukens, Maarten

    2018-05-01

    Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are both easy to interpret and easy to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters, so that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means of estimating PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but is also less accurate. Furthermore, the LS method needs parameters to be set in advance, whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
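
    A sketch of the least-squares route described above: polynomial branches u, u², u³ each followed by an FIR filter, estimated by ridge-regularized LS. The true system, noise level, and the plain ridge penalty are illustrative stand-ins for the kernel-based regularization discussed in the record.

    ```python
    # Ridge-regularized LS estimation of a Parallel Hammerstein Model (illustrative).
    import numpy as np

    rng = np.random.default_rng(2)
    N, L, degrees = 2000, 20, (1, 2, 3)          # samples, FIR length, branch powers

    u = rng.normal(size=N)
    true_h = {p: rng.normal(size=L) * 0.5 ** p for p in degrees}
    y = sum(np.convolve(u ** p, true_h[p])[:N] for p in degrees)
    y += 0.05 * rng.normal(size=N)               # measurement noise

    def regressor(u, p, L):
        """Columns are delayed copies of u**p (FIR regressor for one branch)."""
        x = u ** p
        return np.column_stack([np.r_[np.zeros(k), x[:N - k]] for k in range(L)])

    Phi = np.hstack([regressor(u, p, L) for p in degrees])
    lam = 1e-2                                    # ridge weight (tuning parameter)
    theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

    y_hat = Phi @ theta
    fit = 100 * (1 - np.linalg.norm(y - y_hat) / np.linalg.norm(y - y.mean()))
    print(f"fit = {fit:.1f} %")
    ```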

  8. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and shown to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the number of exact analyses, which involve computationally expensive finite element analysis.

  9. Dynamic heterogeneity and conditional statistics of non-Gaussian temperature fluctuations in turbulent thermal convection

    NASA Astrophysics Data System (ADS)

    He, Xiaozhou; Wang, Yin; Tong, Penger

    2018-05-01

    Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs) and one does not understand why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ε. The conditional PDF G(δT|ε) of δT under a constant ε is found to be of Gaussian form and its variance σ_T² for different values of ε follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism of the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.
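
    A quick numerical check of the mechanism described above: drawing a variance from an exponential distribution and then a Gaussian sample given that variance produces a distribution with exponential (Laplace-like) tails. All parameters are arbitrary.

    ```python
    # Gaussian scale mixture with exponentially distributed variance vs. pure Gaussian.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 1_000_000
    sigma2 = rng.exponential(scale=1.0, size=n)       # variance ~ exponential
    dT = rng.normal(0.0, np.sqrt(sigma2))             # Gaussian given that variance

    # Compare histogram tails with a pure Gaussian of the same overall variance.
    bins = np.linspace(-8, 8, 121)
    hist, edges = np.histogram(dT, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gauss = np.exp(-centers**2 / (2 * dT.var())) / np.sqrt(2 * np.pi * dT.var())
    for c, h, g in zip(centers[::20], hist[::20], gauss[::20]):
        print(f"dT = {c:+.1f}   mixture pdf = {h:.2e}   Gaussian pdf = {g:.2e}")
    ```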

  10. Static versus Dynamic Disposition: The Role of GeoGebra in Representing Polynomial-Rational Inequalities and Exponential-Logarithmic Functions

    ERIC Educational Resources Information Center

    Caglayan, Günhan

    2014-01-01

    This study investigates prospective secondary mathematics teachers' visual representations of polynomial and rational inequalities, and graphs of exponential and logarithmic functions with GeoGebra Dynamic Software. Five prospective teachers in a university in the United States participated in this research study, which was situated within a…

  11. Global exponential stability of bidirectional associative memory neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Song, Qiankun; Cao, Jinde

    2007-05-01

    A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional, employing homeomorphism theory, M-matrix theory and the inequality (a ≥ 0, b_k ≥ 0, q_k > 0 with …, and r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential converging velocity index is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.

  12. New exponential stability criteria for stochastic BAM neural networks with impulses

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Samidurai, R.; Anthoni, S. M.

    2010-10-01

    In this paper, we study the global exponential stability of time-delayed stochastic bidirectional associative memory neural networks with impulses and Markovian jumping parameters. A generalized activation function is considered, and traditional assumptions on the boundedness, monotony and differentiability of activation functions are removed. We obtain a new set of sufficient conditions in terms of linear matrix inequalities, which ensures the global exponential stability of the unique equilibrium point for stochastic BAM neural networks with impulses. The Lyapunov function method with the Itô differential rule is employed for achieving the required result. Moreover, a numerical example is provided to show that the proposed result improves the allowable upper bound of delays over some existing results in the literature.

  13. A Simulation To Model Exponential Growth.

    ERIC Educational Resources Information Center

    Appelbaum, Elizabeth Berman

    2000-01-01

    Describes a simulation using dice-tossing students in a population cluster to model the growth of cancer cells. This growth is recorded in a scatterplot and compared to an exponential function graph. (KHR)

  14. Power function decay of hydraulic conductivity for a TOPMODEL-based infiltration routine

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Endreny, Theodore A.; Hassett, James M.

    2006-11-01

    TOPMODEL rainfall-runoff hydrologic concepts are based on soil saturation processes, where soil controls on hydrograph recession have been represented by linear, exponential, and power function decay with soil depth. Although these decay formulations have been incorporated into baseflow decay and topographic index computations, only the linear and exponential forms have been incorporated into infiltration subroutines. This study develops a power function formulation of the Green and Ampt infiltration equation for the cases where the power n = 1 and 2. This new function was created to represent field measurements in the New York City, USA, Ward Pound Ridge drinking water supply area, and to provide support for similar sites reported by other researchers. Derivation of the power-function-based Green and Ampt model begins with the Green and Ampt formulation used by Beven in deriving an exponential decay model. Differences between the linear, exponential, and power function infiltration scenarios are sensitive to the relative difference between rainfall rates and hydraulic conductivity. Using a low-frequency 30 min design storm with 4.8 cm h⁻¹ rain, the n = 2 power function formulation allows for a faster decay of infiltration and more rapid generation of runoff. Infiltration excess runoff is rare in most forested watersheds, and advantages of the power function infiltration routine may primarily include replication of field-observed processes in urbanized areas and numerical consistency with power function decay of baseflow and topographic index distributions. Equation development is presented within a TOPMODEL-based Ward Pound Ridge rainfall-runoff simulation.

  15. Fourier Transforms of Pulses Containing Exponential Leading and Trailing Profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warshaw, S I

    2001-07-15

    In this monograph we discuss a class of pulse shapes that have exponential rise and fall profiles, and evaluate their Fourier transforms. Such pulses can be used as models for time-varying processes that produce an initial exponential rise and end with the exponential decay of a specified physical quantity. Unipolar examples of such processes include the voltage record of an increasingly rapid charge followed by a damped discharge of a capacitor bank, and the amplitude of an electromagnetic pulse produced by a nuclear explosion. Bipolar examples include acoustic N waves propagating for long distances in the atmosphere that have resulted from explosions in the air, and sonic booms generated by supersonic aircraft. These bipolar pulses have leading and trailing edges that appear to be exponential in character. To the author's knowledge the Fourier transforms of such pulses are not generally well-known or tabulated in Fourier transform compendia, and it is the purpose of this monograph to derive and present these transforms. These Fourier transforms are related to a definite integral of a ratio of exponential functions, whose evaluation we carry out in considerable detail. From this result we derive the Fourier transforms of other related functions. In all Figures showing plots of calculated curves, the actual numbers used for the function parameter values and dependent variables are arbitrary and non-dimensional, and are not identified with any particular physical phenomenon or model.
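
    A cross-check, for a simple two-sided exponential pulse (not necessarily the exact pulse family of the monograph), of the analytic Fourier transform F(ω) = 1/(a − iω) + 1/(b + iω) against direct numerical integration, using the convention F(ω) = ∫ f(t)·e^(−iωt) dt. The rise and decay rates are chosen arbitrarily.

    ```python
    # Analytic vs. numerical Fourier transform of f(t) = exp(a*t) for t<0, exp(-b*t) for t>=0.
    import numpy as np

    a, b = 2.0, 0.5                                  # rise and decay rates (chosen)
    t = np.linspace(-40.0, 40.0, 400001)
    dt = t[1] - t[0]
    f = np.where(t < 0, np.exp(a * t), np.exp(-b * t))

    for w in (0.0, 0.7, 3.0):
        numeric = np.sum(f * np.exp(-1j * w * t)) * dt   # simple Riemann sum
        analytic = 1.0 / (a - 1j * w) + 1.0 / (b + 1j * w)
        print(f"w = {w:3.1f}   numeric = {numeric:.4f}   analytic = {analytic:.4f}")
    ```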

  16. Preparation of an exponentially rising optical pulse for efficient excitation of single atoms in free space.

    PubMed

    Dao, Hoang Lan; Aljunid, Syed Abdullah; Maslennikov, Gleb; Kurtsiefer, Christian

    2012-08-01

    We report on a simple method to prepare optical pulses with an exponentially rising envelope on the time scale of a few ns. The scheme is based on the exponential transfer function of a fast transistor, which generates an exponentially rising envelope that is transferred first onto a radio frequency carrier, and then onto a coherent cw laser beam with an electro-optical phase modulator. The temporally shaped sideband is then extracted with an optical resonator and can be used to efficiently excite a single (87)Rb atom.

  17. Exponentially accurate approximations to piece-wise smooth periodic functions

    NASA Technical Reports Server (NTRS)

    Greer, James; Banerjee, Saheb

    1995-01-01

    A family of simple, periodic basis functions with 'built-in' discontinuities are introduced, and their properties are analyzed and discussed. Some of their potential usefulness is illustrated in conjunction with the Fourier series representations of functions with discontinuities. In particular, it is demonstrated how they can be used to construct a sequence of approximations which converges exponentially in the maximum norm to a piece-wise smooth function. The theory is illustrated with several examples and the results are discussed in the context of other sequences of functions which can be used to approximate discontinuous functions.

  18. Possible stretched exponential parametrization for humidity absorption in polymers.

    PubMed

    Hacinliyan, A; Skarlatos, Y; Sahin, G; Atak, K; Aybar, O O

    2009-04-01

    Polymer thin films have irregular transient current characteristics under constant voltage. In hydrophilic and hydrophobic polymers, the irregularity is also known to depend on the humidity absorbed by the polymer sample. Different stretched exponential models are studied and it is shown that the absorption of humidity as a function of time can be adequately modelled by a class of these stretched exponential absorption models.

  19. Stickiness in Hamiltonian systems: From sharply divided to hierarchical phase space

    NASA Astrophysics Data System (ADS)

    Altmann, Eduardo G.; Motter, Adilson E.; Kantz, Holger

    2006-02-01

    We investigate the dynamics of chaotic trajectories in simple yet physically important Hamiltonian systems with nonhierarchical borders between regular and chaotic regions with positive measures. We show that the stickiness to the border of the regular regions in systems with such a sharply divided phase space occurs through one-parameter families of marginally unstable periodic orbits and is characterized by an exponent γ=2 for the asymptotic power-law decay of the distribution of recurrence times. Generic perturbations lead to systems with hierarchical phase space, where the stickiness is apparently enhanced due to the presence of infinitely many regular islands and Cantori. In this case, we show that the distribution of recurrence times can be composed of a sum of exponentials or a sum of power laws, depending on the relative contribution of the primary and secondary structures of the hierarchy. Numerical verification of our main results is provided for area-preserving maps, mushroom billiards, and the newly defined magnetic mushroom billiards.

  20. An improved parameter estimation and comparison for soft tissue constitutive models containing an exponential function.

    PubMed

    Aggarwal, Ankush

    2017-08-01

    Motivated by the well-known result that stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that as a consequence of the exponential function there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space, which significantly improve the convergence of parameter estimation and robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minimum. Based upon this new insight, we also propose a transformed parameter space which will allow for rational parameter comparison and avoid misleading conclusions regarding soft tissue mechanics.

  1. Dynamic topography and gravity anomalies for fluid layers whose viscosity varies exponentially with depth

    NASA Technical Reports Server (NTRS)

    Revenaugh, Justin; Parsons, Barry

    1987-01-01

    Adopting the formalism of Parsons and Daly (1983), analytical integral equations (Green's function integrals) are derived which relate gravity anomalies and dynamic boundary topography with temperature as a function of wavenumber for a fluid layer whose viscosity varies exponentially with depth. In the earth, such a viscosity profile may be found in the asthenosphere, where the large thermal gradient leads to exponential decrease of viscosity with depth, the effects of a pressure increase being small in comparison. It is shown that, when viscosity varies rapidly, topography kernels for both the surface and bottom boundaries (and hence the gravity kernel) are strongly affected at all wavelengths.

  2. Exponential approximation for daily average solar heating or photolysis. [of stratospheric ozone layer

    NASA Technical Reports Server (NTRS)

    Cogley, A. C.; Borucki, W. J.

    1976-01-01

    When incorporating formulations of instantaneous solar heating or photolytic rates as functions of altitude and sun angle into long range forecasting models, it may be desirable to replace the time integrals by daily average rates that are simple functions of latitude and season. This replacement is accomplished by approximating the integral over the solar day by a pure exponential. This gives a daily average rate as a multiplication factor times the instantaneous rate evaluated at an appropriate sun angle. The accuracy of the exponential approximation is investigated by a sample calculation using an instantaneous ozone heating formulation available in the literature.

  3. A generalized exponential link function to map a conflict indicator into severity index within safety continuum framework.

    PubMed

    Zheng, Lai; Ismail, Karim

    2017-05-01

    Traffic conflict indicators measure the temporal and spatial proximity of conflict-involved road users. These indicators can reflect the severity of traffic conflicts to a reliable extent. Instead of using the indicator value directly as a severity index, many link functions have been developed to map the conflict indicator to a severity index. However, little information is available about the choice of a particular link function. To guard against link misspecification or subjectivity, a generalized exponential link function was developed. The severity index generated by this link was introduced into a parametric safety continuum model which objectively models the centre and tail regions. An empirical method, together with a full Bayesian estimation method, was adopted to estimate the model parameters. The safety implication of the return level was calculated based on the model parameters. The proposed approach was applied to conflict and crash data collected from 21 segments of three freeways located in Guangdong province, China. The Pearson correlation test between return levels and observed crashes showed that a θ value of 1.2 was the best choice of the generalized parameter for the current data set. This provides statistical support for using the generalized exponential link function. With the determined generalized exponential link function, the visualization of the parametric safety continuum was found to be a gyroscope-shaped hierarchy.

  4. Initial-boundary value problem to 2D Boussinesq equations for MHD convection with stratification effects

    NASA Astrophysics Data System (ADS)

    Bian, Dongfen; Liu, Jitao

    2017-12-01

    This paper is concerned with the initial-boundary value problem for the 2D magnetohydrodynamics-Boussinesq system with temperature-dependent viscosity, thermal diffusivity and electrical conductivity. First, we establish global weak solutions under minimal assumptions on the initial data. Then, by imposing a higher regularity assumption on the initial data, we obtain a unique global strong solution. Moreover, exponential decay rates of the weak solutions and the strong solution are obtained, respectively.

  5. Robust Image Regression Based on the Extended Matrix Variate Power Exponential Distribution of Dependent Noise.

    PubMed

    Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu

    2017-09-01

    Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector and each element is assumed to be independently corrupted. This ignores the dependence between the elements of the error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This distribution has heavy-tailed regions and can be used to describe a matrix pattern of l × m dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it actually alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with L_q regularization. The alternating direction method of multipliers is applied to solve this model. To obtain a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.

  6. Modeling mesoscopic cortical dynamics using a mean-field model of conductance-based networks of adaptive exponential integrate-and-fire neurons.

    PubMed

    Zerlaut, Yann; Chemla, Sandrine; Chavane, Frederic; Destexhe, Alain

    2018-02-01

    Voltage-sensitive dye imaging (VSDi) has revealed fundamental properties of neocortical processing at macroscopic scales. Since for each pixel VSDi signals report the average membrane potential over hundreds of neurons, it seems natural to use a mean-field formalism to model such signals. Here, we present a mean-field model of networks of Adaptive Exponential (AdEx) integrate-and-fire neurons, with conductance-based synaptic interactions. We study a network of regular-spiking (RS) excitatory neurons and fast-spiking (FS) inhibitory neurons. We use a Master Equation formalism, together with a semi-analytic approach to the transfer function of AdEx neurons to describe the average dynamics of the coupled populations. We compare the predictions of this mean-field model to simulated networks of RS-FS cells, first at the level of the spontaneous activity of the network, which is well predicted by the analytical description. Second, we investigate the response of the network to time-varying external input, and show that the mean-field model predicts the response time course of the population. Finally, to model VSDi signals, we consider a one-dimensional ring model made of interconnected RS-FS mean-field units. We found that this model can reproduce the spatio-temporal patterns seen in VSDi of awake monkey visual cortex as a response to local and transient visual stimuli. Conversely, we show that the model allows one to infer physiological parameters from the experimentally-recorded spatio-temporal patterns.

  7. Heuristic lipophilicity potential for computer-aided rational drug design: optimizations of screening functions and parameters.

    PubMed

    Du, Q; Mezey, P G

    1998-09-01

    In this research we test and compare three possible atom-based screening functions used in the heuristic molecular lipophilicity potential (HMLP). Screening function 1 is a power distance-dependent function, b_i/|R_i − r|^γ; screening function 2 is an exponential distance-dependent function, b_i exp(−|R_i − r|/d_0); and screening function 3 is a weighted distance-dependent function, sign(b_i) exp(−ξ|R_i − r|/|b_i|). For every screening function, the parameters (γ, d_0, and ξ) are optimized using 41 common organic molecules of 4 types of compounds: aliphatic alcohols, aliphatic carboxylic acids, aliphatic amines, and aliphatic alkanes. The results of the calculations show that screening function 3 cannot give chemically reasonable results; however, both the power screening function and the exponential screening function give chemically satisfactory results. There are two notable differences between screening functions 1 and 2. First, the exponential screening function has larger values at short distance than the power screening function, therefore more influence from the nearest neighbors is involved using screening function 2 than screening function 1. Second, the power screening function has larger values at long distance than the exponential screening function, therefore screening function 1 is affected by atoms at long distance more than screening function 2. For screening function 1, the suitable range of the parameter γ is 1.0 < γ < 3.0; γ = 2.3 is recommended, and γ = 2.0 is the nearest integral value. For screening function 2, the suitable range of the parameter d_0 is 1.5 < d_0 < 3.0, and d_0 = 2.0 is recommended. The HMLP developed in this research provides a potential tool for computer-aided three-dimensional drug design.

  8. A fully Galerkin method for the recovery of stiffness and damping parameters in Euler-Bernoulli beam models

    NASA Technical Reports Server (NTRS)

    Smith, R. C.; Bowers, K. L.

    1991-01-01

    A fully Sinc-Galerkin method for recovering the spatially varying stiffness and damping parameters in Euler-Bernoulli beam models is presented. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which converges exponentially and is valid on the infinite time interval. Hence the method avoids the time-stepping which is characteristic of many of the forward schemes which are used in parameter recovery algorithms. Tikhonov regularization is used to stabilize the resulting inverse problem, and the L-curve method for determining an appropriate value of the regularization parameter is briefly discussed. Numerical examples are given which demonstrate the applicability of the method for both individual and simultaneous recovery of the material parameters.
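
    A generic sketch of the Tikhonov regularization and L-curve scan mentioned in this record; the beam inverse problem itself is not reproduced, and the forward operator, true solution, and noise level below are illustrative.

    ```python
    # Tikhonov-regularized least squares with an L-curve scan over the parameter.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 60
    # Mildly ill-conditioned forward operator: a smoothing kernel.
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    A = np.exp(-((i - j) / 4.0) ** 2)
    x_true = np.sin(np.linspace(0, 3 * np.pi, n))
    b = A @ x_true + 1e-3 * rng.normal(size=n)

    print(" lambda     ||Ax-b||      ||x||")
    for lam in np.logspace(-8, 0, 9):
        x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
        print(f"{lam:8.1e}  {np.linalg.norm(A @ x - b):10.3e}  {np.linalg.norm(x):10.3e}")
    # The "corner" of the (log residual, log solution-norm) curve traced above is
    # the L-curve choice of the regularization parameter.
    ```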

  9. Interaction phenomenon to dimensionally reduced p-gBKP equation

    NASA Astrophysics Data System (ADS)

    Zhang, Runfa; Bilige, Sudao; Bai, Yuexing; Lü, Jianqing; Gao, Xiaoqing

    2018-02-01

    Based on searching for combinations of a quadratic function and an exponential (or hyperbolic cosine) function in the Hirota bilinear form of the dimensionally reduced p-gBKP equation, eight classes of interaction solutions are derived via symbolic computation with Mathematica. The submergence phenomenon, presented to illustrate the dynamical features of these obtained solutions, is observed in three-dimensional plots and density plots for particular choices of the involved parameters between the exponential (or hyperbolic cosine) function and the quadratic function. It is proved that the interference between the two solitary waves is inelastic.

  10. Coupled-cluster Green's function: Analysis of properties originating in the exponential parametrization of the ground-state wave function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Bo; Kowalski, Karol

    In this paper we derive basic properties of the Green's function matrix elements stemming from the exponential coupled cluster (CC) parametrization of the ground-state wave function. We demonstrate that all intermediates used to express the retarded (or equivalently, ionized) part of the Green's function in the ω-representation can be expressed through connected diagrams only. Similar properties are also shared by the first-order ω-derivatives of the retarded part of the CC Green's function. This property can be extended to any order of ω-derivatives of the Green's function. Through the Dyson equation of the CC Green's function, the derivatives of the corresponding CC self-energy can be evaluated analytically. In analogy to the CC Green's function, the corresponding CC self-energy is expressed in terms of connected diagrams only. Moreover, the ionized part of the CC Green's function satisfies a non-homogeneous linear system of ordinary differential equations, whose solution may be represented in exponential form. Our analysis can be easily generalized to the advanced part of the CC Green's function.

  11. Deadline rush: a time management phenomenon and its mathematical description.

    PubMed

    König, Cornelius J; Kleinmann, Martin

    2005-01-01

    A typical time management phenomenon is the rush before a deadline. Behavioral decision making research can be used to predict how behavior changes before a deadline. People are likely not to work on a project with a deadline in the far future because they generally discount future outcomes. Only when the deadline is close are people likely to work. On the basis of recent intertemporal choice experiments, the authors argue that a hyperbolic function should provide a more accurate description of the deadline rush than the exponential function predicted by an economic model of discounted utility. To show this, the fits of the hyperbolic and the exponential functions were compared on data sets that describe when students study for exams. As predicted, the hyperbolic function fit the data significantly better than the exponential function. The implication for time management decisions is that they are most likely to be inconsistent over time (i.e., people make a plan for how to use their time but do not follow it).
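
    The model comparison described above in code form: fit a hyperbolic and an exponential discount curve to synthetic "effort versus days before the deadline" data and compare the sums of squared errors. The data-generating parameters are invented for the demo.

    ```python
    # Hyperbolic 1/(1 + k*t) vs. exponential exp(-k*t) fit to synthetic effort data.
    import numpy as np
    from scipy.optimize import curve_fit

    def hyperbolic(t, k):
        return 1.0 / (1.0 + k * t)

    def exponential(t, k):
        return np.exp(-k * t)

    rng = np.random.default_rng(5)
    days_before_deadline = np.arange(0, 30)
    effort = hyperbolic(days_before_deadline, 0.4) + 0.03 * rng.normal(size=30)

    for name, model in [("hyperbolic", hyperbolic), ("exponential", exponential)]:
        (k,), _ = curve_fit(model, days_before_deadline, effort, p0=[0.1])
        sse = np.sum((effort - model(days_before_deadline, k)) ** 2)
        print(f"{name:12s} k = {k:.3f}   SSE = {sse:.4f}")
    ```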

  12. Using Solution Strategies to Examine and Promote High-School Students' Understanding of Exponential Functions: One Teacher's Attempt

    ERIC Educational Resources Information Center

    Brendefur, Jonathan

    2014-01-01

    Much research has been conducted on how elementary students develop mathematical understanding and subsequently how teachers might use this information. This article builds on this type of work by investigating how one high-school algebra teacher designs and conducts a lesson on exponential functions. Through a lesson study format she studies with…

  13. A Spectral Lyapunov Function for Exponentially Stable LTV Systems

    NASA Technical Reports Server (NTRS)

    Zhu, J. Jim; Liu, Yong; Hang, Rui

    2010-01-01

    This paper presents the formulation of a Lyapunov function for an exponentially stable linear time-varying (LTV) system using a well-defined PD-spectrum and the associated PD-eigenvectors. It provides a bridge between the first and second methods of Lyapunov for stability assessment, and will find significant applications in the analysis and control law design for LTV systems and linearizable nonlinear time-varying systems.

  14. Statistical assessment of bi-exponential diffusion weighted imaging signal characteristics induced by intravoxel incoherent motion in malignant breast tumors

    PubMed Central

    Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.

    2016-01-01

    Background The purpose of this study is to statistically assess whether bi-exponential intravoxel incoherent motion (IVIM) model better characterizes diffusion weighted imaging (DWI) signal of malignant breast tumor than mono-exponential Gaussian diffusion model. Methods 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-square mono-exponential fitting and segmented least-square bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. F-test and Akaike Information Criterion (AIC) were used to statistically assess the preference of mono-exponential and bi-exponential model using region-of-interest (ROI)-averaged and voxel-wise analysis. Results For ROI-averaged analysis, 15 tumors were significantly better fitted by bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential and bi-exponential preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analysis. Conclusions Although the presence of IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance to statistically examine the breast cancer DWI signal characteristics in practice. PMID:27709078
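
    An illustrative mono- versus bi-exponential (IVIM-type) fit of a synthetic DWI decay curve, compared with the Akaike Information Criterion as in the record. The b-values, parameters and noise are invented, and the segmented fitting used in the study is replaced here by a plain nonlinear least-squares fit.

    ```python
    # Mono-exponential vs. bi-exponential (IVIM) fit of a synthetic DWI signal, with AIC.
    import numpy as np
    from scipy.optimize import curve_fit

    def mono(b, s0, adc):
        return s0 * np.exp(-b * adc)

    def ivim(b, s0, f, d_star, d):
        return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

    rng = np.random.default_rng(6)
    b = np.array([0, 10, 20, 50, 100, 200, 400, 600, 800, 1000], float)  # s/mm^2
    signal = ivim(b, 1.0, 0.12, 0.020, 0.0011) + 0.005 * rng.normal(size=b.size)

    def aic(n_params, residuals):
        n = residuals.size
        return n * np.log(np.sum(residuals**2) / n) + 2 * n_params

    p_mono, _ = curve_fit(mono, b, signal, p0=[1.0, 0.001])
    p_ivim, _ = curve_fit(ivim, b, signal, p0=[1.0, 0.1, 0.01, 0.001],
                          bounds=([0, 0, 0, 0], [2, 1, 1, 0.01]))
    print("AIC mono :", aic(2, signal - mono(b, *p_mono)))
    print("AIC IVIM :", aic(4, signal - ivim(b, *p_ivim)))
    ```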

  15. Exponential stability of impulsive stochastic genetic regulatory networks with time-varying delays and reaction-diffusion

    DOE PAGES

    Cao, Boqiang; Zhang, Qimin; Ye, Ming

    2016-11-29

    We present a mean-square exponential stability analysis for impulsive stochastic genetic regulatory networks (GRNs) with time-varying delays and reaction-diffusion driven by fractional Brownian motion (fBm). By constructing a Lyapunov functional and using linear matrix inequality for stochastic analysis we derive sufficient conditions to guarantee the exponential stability of the stochastic model of impulsive GRNs in the mean-square sense. Meanwhile, the corresponding results are obtained for the GRNs with constant time delays and standard Brownian motion. Finally, an example is presented to illustrate our results of the mean-square exponential stability analysis.

  16. OMFIT Tokamak Profile Data Fitting and Physics Analysis

    DOE PAGES

    Logan, N. C.; Grierson, B. A.; Haskey, S. R.; ...

    2018-01-22

    Here, One Modeling Framework for Integrated Tasks (OMFIT) has been used to develop a consistent tool for interfacing with, mapping, visualizing, and fitting tokamak profile measurements. OMFIT is used to integrate the many diverse diagnostics on multiple tokamak devices into a regular data structure, consistently applying spatial and temporal treatments to each channel of data. Tokamak data are fundamentally time dependent and are treated so from the start, with front-loaded and logic-based manipulations such as filtering based on the identification of edge-localized modes (ELMs) that commonly scatter data. Fitting is general in its approach, and tailorable in its application in order to address physics constraints and handle the multiple spatial and temporal scales involved. Although community-standard one-dimensional fitting is supported, including scale-length fitting and fitting polynomial-exponential blends to capture the H-mode pedestal, OMFITprofiles includes two-dimensional (2-D) fitting using bivariate splines or radial basis functions. These 2-D fits produce regular evolutions in time, removing jitter that has historically been smoothed ad hoc in transport applications. Profiles interface directly with a wide variety of models within the OMFIT framework, providing the inputs for TRANSP, kinetic-EFIT 2-D equilibrium, and GPEC three-dimensional equilibrium calculations. The OMFITprofiles tool's rapid and comprehensive analysis of dynamic plasma profiles thus provides the critical link between raw tokamak data and simulations necessary for physics understanding.

  17. OMFIT Tokamak Profile Data Fitting and Physics Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Logan, N. C.; Grierson, B. A.; Haskey, S. R.

    Here, One Modeling Framework for Integrated Tasks (OMFIT) has been used to develop a consistent tool for interfacing with, mapping, visualizing, and fitting tokamak profile measurements. OMFIT is used to integrate the many diverse diagnostics on multiple tokamak devices into a regular data structure, consistently applying spatial and temporal treatments to each channel of data. Tokamak data are fundamentally time dependent and are treated so from the start, with front-loaded and logic-based manipulations such as filtering based on the identification of edge-localized modes (ELMs) that commonly scatter data. Fitting is general in its approach, and tailorable in its application in order to address physics constraints and handle the multiple spatial and temporal scales involved. Although community-standard one-dimensional fitting is supported, including scale-length fitting and fitting polynomial-exponential blends to capture the H-mode pedestal, OMFITprofiles includes two-dimensional (2-D) fitting using bivariate splines or radial basis functions. These 2-D fits produce regular evolutions in time, removing jitter that has historically been smoothed ad hoc in transport applications. Profiles interface directly with a wide variety of models within the OMFIT framework, providing the inputs for TRANSP, kinetic-EFIT 2-D equilibrium, and GPEC three-dimensional equilibrium calculations. The OMFITprofiles tool's rapid and comprehensive analysis of dynamic plasma profiles thus provides the critical link between raw tokamak data and simulations necessary for physics understanding.

  18. Exponential model normalization for electrical capacitance tomography with external electrodes under gap permittivity conditions

    NASA Astrophysics Data System (ADS)

    Baidillah, Marlin R.; Takei, Masahiro

    2017-06-01

    A nonlinear normalization model, called the exponential model, for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions has been developed. The exponential normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential fitting curve based on simulation, and a scaling function is added to adjust for the experimental system conditions. The exponential model normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e. the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions.

  19. Convergence and stability of the exponential Euler method for semi-linear stochastic delay differential equations.

    PubMed

    Zhang, Ling

    2017-01-01

    The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method for semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation converges to the analytic solution of SLSDDEs with strong order [Formula: see text]. On the one hand, the classical stability theorem for SLSDDEs is given by Lyapunov functions; in this paper, however, we study the exponential stability in mean square of the exact solution to SLSDDEs by using the definition of the logarithmic norm. On the other hand, the implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size; here we propose an explicit method and show, by the property of the logarithmic norm, that the exponential Euler method for SLSDDEs shares the same stability for any step size.
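
    A sketch of one common exponential-Euler variant for a scalar semi-linear stochastic delay differential equation dX = [aX(t) + f(X(t − τ))]dt + g(X(t − τ))dW(t). The test equation, coefficients and the particular scheme variant are chosen for illustration and are not taken from the paper.

    ```python
    # Exponential Euler (one common variant) for a scalar semi-linear SDDE.
    import numpy as np

    rng = np.random.default_rng(7)
    a, tau, T, dt = -2.0, 1.0, 10.0, 0.01
    f = lambda x: 0.5 * np.sin(x)           # delayed drift term
    g = lambda x: 0.2 * x                   # delayed diffusion term

    m = int(round(tau / dt))                # delay expressed in steps
    n = int(round(T / dt))
    x = np.empty(n + 1)
    x[0] = 1.0
    history = lambda t: 1.0                 # constant initial segment on [-tau, 0]

    def delayed(k):
        return history((k - m) * dt) if k - m < 0 else x[k - m]

    expa = np.exp(a * dt)                   # exact propagator of the linear part
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = expa * (x[k] + f(delayed(k)) * dt + g(delayed(k)) * dW)

    print("X(T) ~", x[-1])
    ```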

  20. The Use of Modeling Approach for Teaching Exponential Functions

    NASA Astrophysics Data System (ADS)

    Nunes, L. F.; Prates, D. B.; da Silva, J. M.

    2017-12-01

    This work presents a discussion related to the teaching and learning of mathematical content connected with the study of exponential functions in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor's program (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly mentioned in the literature, the modelling approach was used as an educational teaching tool to produce contextualization in the teaching-learning process of exponential functions for these students. In this sense, some simple models elaborated with the GeoGebra software were used and, to obtain a qualitative evaluation of the investigation and the results, Didactic Engineering was used as the research methodology. As a consequence of this detailed research, some interesting details about the teaching and learning process were observed, discussed and described.

  1. Decay of random correlation functions for unimodal maps

    NASA Astrophysics Data System (ADS)

    Baladi, Viviane; Benedicks, Michael; Maume-Deschamps, Véronique

    2000-10-01

    Since the pioneering results of Jakobson and subsequent work by Benedicks-Carleson and others, it is known that quadratic maps f_a(χ) = a − χ² admit a unique absolutely continuous invariant measure for a positive measure set of parameters a. For topologically mixing f_a, Young and Keller-Nowicki independently proved exponential decay of correlation functions for this a.c.i.m. and smooth observables. We consider random compositions of small perturbations f + ω_t, with f = f_a or another unimodal map satisfying certain nonuniform hyperbolicity axioms, and ω_t chosen independently and identically in [-ɛ, ɛ]. Baladi-Viana showed exponential mixing of the associated Markov chain, i.e., averaging over all random itineraries. We obtain stretched exponential bounds for the random correlation functions of Lipschitz observables for the sample measure μ_ω of almost every itinerary.

  2. Effects of aging in catastrophe on the steady state and dynamics of a microtubule population

    NASA Astrophysics Data System (ADS)

    Jemseena, V.; Gopalakrishnan, Manoj

    2015-05-01

    Several independent observations have suggested that the catastrophe transition in microtubules is not a first-order process, as is usually assumed. Recent in vitro observations by Gardner et al. [M. K. Gardner et al., Cell 147, 1092 (2011), 10.1016/j.cell.2011.10.037] showed that microtubule catastrophe takes place via multiple steps and the frequency increases with the age of the filament. Here we investigate, via numerical simulations and mathematical calculations, some of the consequences of the age dependence of catastrophe on the dynamics of microtubules as a function of the aging rate, for two different models of aging: exponential growth, but saturating asymptotically, and purely linear growth. The boundary demarcating the steady-state and non-steady-state regimes in the dynamics is derived analytically in both cases. Numerical simulations, supported by analytical calculations in the linear model, show that aging leads to nonexponential length distributions in steady state. More importantly, oscillations ensue in microtubule length and velocity. The regularity of oscillations, as characterized by the negative dip in the autocorrelation function, is reduced by increasing the frequency of rescue events. Our study shows that the age dependence of catastrophe could function as an intrinsic mechanism to generate oscillatory dynamics in a microtubule population, distinct from hitherto identified ones.

  3. Extracting volatility signal using maximum a posteriori estimation

    NASA Astrophysics Data System (ADS)

    Neto, David

    2016-11-01

    This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and then log-returns marginal distributions with heavy tails. We consider two routes to choose the regularization and we compare our MAP estimate to realized volatility measure for three exchange rates.

  4. Quantum localization for a kicked rotor with accelerator mode islands.

    PubMed

    Iomin, A; Fishman, S; Zaslavsky, G M

    2002-03-01

    Dynamical localization of classical superdiffusion for the quantum kicked rotor is studied in the semiclassical limit. Both classical and quantum dynamics of the system become more complicated under the conditions of mixed phase space with accelerator mode islands. Recently, long time quantum flights due to the accelerator mode islands have been found. By exploring their dynamics, it is shown here that the classical-quantum duality of the flights leads to their localization. The classical mechanism of superdiffusion is due to accelerator mode dynamics, while quantum tunneling suppresses the superdiffusion and leads to localization of the wave function. Coupling of the regular dynamics inside the accelerator mode island structures to the dynamics in the chaotic sea is shown to increase the localization length. A numerical procedure and an analytical method are developed to obtain an estimate of the localization length which, as is shown, has exponentially large scaling with the dimensionless Planck constant h̃ < 1 in the semiclassical limit. Conditions for the validity of the developed method are specified.

  5. A quantitative description of normal AV nodal conduction curve in man.

    PubMed

    Teague, S; Collins, S; Wu, D; Denes, P; Rosen, K; Arzbaecher, R

    1976-01-01

    The AV nodal conduction curve generated by the atrial extrastimulus technique has been described only qualitatively in man, making clinical comparison of known normal curves with those of suspected AV nodal dysfunction difficult. Also, the effects of physiological and pharmacological interventions have not been quantifiable. In 50 patients with normal AV conduction as defined by normal AH (less than 130 ms), normal AV nodal effective and functional refractory periods (less than 380 and less than 500 ms), and absence of demonstrable dual AV nodal pathways, we found that conduction curves (at sinus rhythm or longest paced cycle length) can be described by an exponential equation of the form Δ = A·e^(−Bx). In this equation, Δ is the increase in AV nodal conduction time of an extrastimulus compared to that of a regular beat and x is the extrastimulus interval. The natural logarithm of this equation is linear in the semilogarithmic plane, thus permitting the constants A and B to be easily determined by a least-squares regression analysis with a hand calculator.
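
    The fitting recipe from this record in code form: take the natural logarithm of Δ = A·e^(−Bx) and estimate A and B by ordinary least squares in the semilogarithmic plane. The extrastimulus data below are synthetic.

    ```python
    # Semilogarithmic least-squares fit of Delta = A * exp(-B * x).
    import numpy as np

    rng = np.random.default_rng(8)
    x = np.linspace(350, 600, 15)                 # extrastimulus intervals (ms), synthetic
    A_true, B_true = 4000.0, 0.012
    delta = A_true * np.exp(-B_true * x) * rng.lognormal(0.0, 0.05, x.size)

    slope, intercept = np.polyfit(x, np.log(delta), 1)   # ln(Delta) = ln(A) - B*x
    A_hat, B_hat = np.exp(intercept), -slope
    print(f"A = {A_hat:.0f} ms, B = {B_hat:.4f} 1/ms")
    ```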

  6. Hardware accelerator of convolution with exponential function for image processing applications

    NASA Astrophysics Data System (ADS)

    Panchenko, Ivan; Bucha, Victor

    2015-12-01

    In this paper we describe a Hardware Accelerator (HWA) for fast recursive approximation of separable convolution with exponential function. This filter can be used in many Image Processing (IP) applications, e.g. depth-dependent image blur, image enhancement and disparity estimation. We have adopted this filter RTL implementation to provide maximum throughput in constrains of required memory bandwidth and hardware resources to provide a power-efficient VLSI implementation.

  7. Slice regular functions of several Clifford variables

    NASA Astrophysics Data System (ADS)

    Ghiloni, R.; Perotti, A.

    2012-11-01

    We introduce a class of slice regular functions of several Clifford variables. Our approach to the definition of slice functions is based on the concept of stem functions of several variables and on the introduction on real Clifford algebras of a family of commuting complex structures. The class of slice regular functions include, in particular, the family of (ordered) polynomials in several Clifford variables. We prove some basic properties of slice and slice regular functions and give examples to illustrate this function theory. In particular, we give integral representation formulas for slice regular functions and a Hartogs type extension result.

  8. A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints.

    PubMed

    Liang, X B; Wang, J

    2000-01-01

    This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special problem which can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the feasible bound region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region for any initial states even at outside of the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.

  9. A review of the matrix-exponential formalism in radiative transfer

    NASA Astrophysics Data System (ADS)

    Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian

    2017-07-01

    This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.

  10. Exponential integrators in time-dependent density-functional calculations

    NASA Astrophysics Data System (ADS)

    Kidd, Daniel; Covington, Cody; Varga, Kálmán

    2017-12-01

    The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy of the exponential integrator methods is less enhanced, but they still match or outperform the best of the conventional methods tested.
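
    A sketch of the exponential-integrator idea on a generic stiff test problem (not the Kohn-Sham setting): a first-order exponential time-differencing (ETD1) step versus forward Euler for u′ = λu + N(t), whose exact solution is u(t) = cos(0.2t). The stiffness, step size, and forcing are chosen for illustration.

    ```python
    # ETD1 vs. forward Euler on a stiff linear-plus-forcing test equation.
    import numpy as np

    lam, h, T = -500.0, 0.05, 5.0               # stiff linear part, step size, horizon
    exact = lambda t: np.cos(0.2 * t)
    # Forcing chosen so that u' = lam*u + N(t) has exact solution u(t) = cos(0.2*t).
    N = lambda t: -0.2 * np.sin(0.2 * t) - lam * np.cos(0.2 * t)

    u_etd, u_fe, t = 1.0, 1.0, 0.0
    phi = (np.exp(lam * h) - 1.0) / lam          # ETD1 weight for the forcing term
    for _ in range(int(T / h)):
        u_etd = np.exp(lam * h) * u_etd + phi * N(t)
        u_fe = u_fe + h * (lam * u_fe + N(t))
        t += h

    print("exact     :", exact(T))
    print("ETD1      :", u_etd)                  # stays stable and close to exact
    print("fwd Euler :", u_fe)                   # diverges since |1 + lam*h| > 1
    ```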

  11. New results on global exponential dissipativity analysis of memristive inertial neural networks with distributed time-varying delays.

    PubMed

    Zhang, Guodong; Zeng, Zhigang; Hu, Junhao

    2018-01-01

    This paper is concerned with the global exponential dissipativity of memristive inertial neural networks with discrete and distributed time-varying delays. By constructing appropriate Lyapunov-Krasovskii functionals, some new sufficient conditions ensuring global exponential dissipativity of memristive inertial neural networks are derived. Moreover, the globally exponentially attractive sets and positive invariant sets are also presented. In addition, the newly proposed results complement and extend earlier publications on conventional or memristive neural network dynamical systems. Finally, numerical simulations are given to illustrate the effectiveness of the obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Anomalous NMR Relaxation in Cartilage Matrix Components and Native Cartilage: Fractional-Order Models

    PubMed Central

    Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.

    2011-01-01

    We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions is in some cases superior to that obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095
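
    As a hedged illustration of the stretched-exponential form mentioned above (not the authors' analysis pipeline), the sketch below fits S(t) = S0 exp(-(t/T2)^alpha) to synthetic noisy decay data; all parameter values are invented for the example.

        # Sketch: fitting a stretched-exponential decay S(t) = S0*exp(-(t/T2)**alpha)
        # to synthetic noisy data; alpha = 1 recovers ordinary mono-exponential relaxation.
        import numpy as np
        from scipy.optimize import curve_fit

        def stretched_exp(t, s0, t2, alpha):
            return s0 * np.exp(-(t / t2) ** alpha)

        t = np.linspace(0.1, 200.0, 120)                        # assumed time axis (ms)
        truth = (1.0, 40.0, 0.75)                               # illustrative S0, T2, alpha
        rng = np.random.default_rng(1)
        signal = stretched_exp(t, *truth) + 0.01 * rng.standard_normal(t.size)

        popt, _ = curve_fit(stretched_exp, t, signal, p0=(1.0, 30.0, 1.0))
        print(popt)   # recovered (S0, T2, alpha) close to (1.0, 40.0, 0.75)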

  13. Anomalous NMR relaxation in cartilage matrix components and native cartilage: Fractional-order models

    NASA Astrophysics Data System (ADS)

    Magin, Richard L.; Li, Weiguo; Pilar Velasco, M.; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.

    2011-06-01

    We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena ( T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter ( α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions is in some cases superior to that obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for micro-structural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues.

  14. Sinc-Galerkin estimation of diffusivity in parabolic problems

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.; Bowers, Kenneth L.

    1991-01-01

    A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.
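
    A minimal, generic sketch of the Tikhonov step described above (not the Sinc-Galerkin discretization itself): the regularized least-squares problem min ||Ax - b||^2 + lambda^2 ||x||^2 solved through an augmented system, with lambda scanned as one would when locating an L-curve corner. The test matrix and noise level are assumptions.

        # Generic sketch of Tikhonov regularization for an ill-posed linear problem:
        # minimize ||A x - b||^2 + lam^2 ||x||^2 via the augmented least-squares system.
        import numpy as np

        def tikhonov(A, b, lam):
            n = A.shape[1]
            A_aug = np.vstack([A, lam * np.eye(n)])
            b_aug = np.concatenate([b, np.zeros(n)])
            return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

        # Ill-conditioned test problem with noisy data (illustrative, not the PDE setting).
        rng = np.random.default_rng(2)
        A = np.vander(np.linspace(0, 1, 40), 12, increasing=True)   # nearly rank-deficient
        x_true = rng.standard_normal(12)
        b = A @ x_true + 1e-3 * rng.standard_normal(40)

        for lam in (0.0, 1e-6, 1e-3, 1e-1):                         # scan for an L-curve corner
            x = tikhonov(A, b, lam)
            print(lam, np.linalg.norm(A @ x - b), np.linalg.norm(x))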

  15. Calculating Formulae of Proportion Factor and Mean Neutron Exposure in the Exponential Expression of Neutron Exposure Distribution

    NASA Astrophysics Data System (ADS)

    Feng-Hua, Zhang; Gui-De, Zhou; Kun, Ma; Wen-Juan, Ma; Wen-Yuan, Cui; Bo, Zhang

    2016-07-01

    Previous studies have shown that, for the three main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the neutron exposure distribution (DNE) in the nucleosynthesis region can always be considered as an exponential function, i.e., ρAGB(τ) = C/τ0 exp(-τ/τ0) in an effective range of the neutron exposure values. However, the specific expressions of the proportion factor C and the mean neutron exposure τ0 in the exponential distribution function for different models are not completely determined in the related literature. Through dissecting the basic method to obtain the exponential DNE, and systematically analyzing the solution procedures of neutron exposure distribution functions in different stellar models, the general formulae, as well as their auxiliary equations, for calculating C and τ0 are derived. Given the discrete neutron exposure distribution Pk, the relationships of C and τ0 with the model parameters can be determined. The result of this study effectively solves the problem of analytically calculating the DNE in the current low-mass AGB star s-process nucleosynthesis model with 13C-pocket radiative burning.
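
    As a small numerical cross-check of the quoted form ρAGB(τ) = (C/τ0) exp(-τ/τ0), the sketch below verifies that the distribution integrates to C and has mean exposure τ0; the values of C and τ0 are illustrative only, not model-derived.

        # Numerical check of the exponential neutron-exposure distribution
        # rho(tau) = (C / tau0) * exp(-tau / tau0): it integrates to C, and its
        # normalized mean exposure equals tau0.  C and tau0 below are illustrative.
        import numpy as np
        from scipy.integrate import quad

        C, tau0 = 0.05, 0.3
        rho = lambda tau: (C / tau0) * np.exp(-tau / tau0)

        total, _ = quad(rho, 0.0, np.inf)
        mean, _ = quad(lambda tau: tau * rho(tau), 0.0, np.inf)
        print(total, mean / total)               # -> C and tau0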

  16. Approximation of the exponential integral (well function) using sampling methods

    NASA Astrophysics Data System (ADS)

    Baalousha, Husam Musa

    2015-04-01

    The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral; most are based on numerical approximations and are valid only for a certain range of the argument. This paper presents a new approach to approximating the exponential integral based on sampling methods. Three sampling methods have been used to approximate the function: Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH). Argument values covering a wide range have been used. The results of the sampling methods were compared with results obtained with Mathematica, which was used as a benchmark. All three sampling methods converge to the Mathematica result, at different rates; the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be applied to other integrals in hydrogeology such as the leaky aquifer integral.
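
    A hedged sketch of the sampling idea: after the substitution t = u/x, the well function W(u) = E1(u) becomes an integral over (0, 1) that a sampling average approximates. Plain Monte Carlo below stands in for the paper's LHS/OA designs, and scipy's exp1 plays the role of the benchmark; the sample size is an assumption.

        # Sketch: the well function W(u) is the exponential integral E1(u).  With the
        # substitution t = u/x, E1(u) = integral_0^1 exp(-u/x)/x dx, which a plain
        # Monte Carlo average approximates (a simplified stand-in for the paper's
        # Latin-hypercube / orthogonal-array sampling).
        import numpy as np
        from scipy.special import exp1

        def well_function_mc(u, n=200_000, seed=3):
            x = np.clip(np.random.default_rng(seed).random(n), 1e-12, None)
            return np.mean(np.exp(-u / x) / x)

        for u in (0.01, 0.1, 1.0, 5.0):
            print(u, well_function_mc(u), exp1(u))   # rough MC estimate vs scipy benchmark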

  17. Learning curves in highly skilled chess players: a test of the generality of the power law of practice.

    PubMed

    Howard, Robert W

    2014-09-01

    The power law of practice holds that a power function best interrelates skill performance and amount of practice. However, the law's validity and generality are moot. Some researchers argue that it is an artifact of averaging individual exponential curves, while others question whether the law generalizes to complex skills and to performance measures other than response time. The present study tested the power law's generality for the development, over many years, of a very complex cognitive skill, chess playing, with 387 skilled participants, most of whom were grandmasters. A power or logarithmic function best fit grouped data, but individuals showed much variability. An exponential function usually was the worst fit to individual data. Groups differing in chess talent were compared, and a power function best fit the group curve for the more talented players while a quadratic function best fit that for the less talented. After extreme amounts of practice, a logarithmic function best fit grouped data but a quadratic function best fit most individual curves. Individual variability is great, and neither the power law nor an exponential law is the best description of individual chess skill development. Copyright © 2014 Elsevier B.V. All rights reserved.
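
    As an illustration of the model comparison described above (synthetic data, not the chess sample), the sketch below fits power, exponential, and logarithmic curves to a simulated practice curve and compares sums of squared errors; the functional forms, starting values, and data-generating parameters are assumptions.

        # Sketch: comparing power, exponential, and logarithmic fits to a synthetic
        # "performance vs. amount of practice" curve (illustrative data, not the chess sample).
        import numpy as np
        from scipy.optimize import curve_fit

        power_law   = lambda n, a, b, c: a * n ** (-b) + c
        exponential = lambda n, a, b, c: a * np.exp(-b * n) + c
        logarithmic = lambda n, a, b: a - b * np.log(n)

        n = np.arange(1, 301, dtype=float)                               # sessions of practice
        rng = np.random.default_rng(4)
        perf = power_law(n, 80.0, 0.4, 20.0) + rng.normal(0, 1.0, n.size)   # e.g. response time

        for name, f, p0 in [("power", power_law, (80.0, 0.5, 10.0)),
                            ("exponential", exponential, (80.0, 0.05, 10.0)),
                            ("logarithmic", logarithmic, (80.0, 10.0))]:
            popt, _ = curve_fit(f, n, perf, p0=p0, maxfev=10000)
            sse = np.sum((perf - f(n, *popt)) ** 2)
            print(name, sse)                                             # lower SSE = better fit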

  18. Discrete-time BAM neural networks with variable delays

    NASA Astrophysics Data System (ADS)

    Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi

    2007-07-01

    This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional and linear matrix inequality (LMI) techniques, we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.

  19. New results for global exponential synchronization in neural networks via functional differential inclusions.

    PubMed

    Wang, Dongshu; Huang, Lihong; Tang, Longkun

    2015-08-01

    This paper is concerned with the synchronization dynamical behaviors of a class of delayed neural networks with discontinuous neuron activations. Continuous and discontinuous state feedback controllers are designed such that the neural network model can realize exponential complete synchronization, using functional differential inclusion theory, the Lyapunov functional method, and inequality techniques. The newly proposed results are easy to verify and are also applicable to neural networks with continuous activations. Finally, some numerical examples show the applicability and effectiveness of our main results.

  20. Exponential operations and aggregation operators of interval neutrosophic sets and their decision making methods.

    PubMed

    Ye, Jun

    2016-01-01

    An interval neutrosophic set (INS) is a subclass of a neutrosophic set and a generalization of an interval-valued intuitionistic fuzzy set; the characteristics of an INS are independently described by the interval numbers of its truth-membership, indeterminacy-membership, and falsity-membership degrees. However, the exponential parameters (weights) of all the existing exponential operational laws of INSs and the corresponding exponential aggregation operators are crisp values in interval neutrosophic decision making problems. As a supplement, this paper first introduces new exponential operational laws of INSs, where the bases are crisp values or interval numbers and the exponents are interval neutrosophic numbers (INNs), which are basic elements in INSs. Then, we propose an interval neutrosophic weighted exponential aggregation (INWEA) operator and a dual interval neutrosophic weighted exponential aggregation (DINWEA) operator based on these exponential operational laws and introduce comparative methods based on cosine measure functions for INNs and dual INNs. Further, we develop decision-making methods based on the INWEA and DINWEA operators. Finally, a practical example on the selection of global suppliers is provided to illustrate the applicability and rationality of the proposed methods.

  1. Multistability of second-order competitive neural networks with nondecreasing saturated activation functions.

    PubMed

    Nie, Xiaobing; Cao, Jinde

    2011-11-01

    In this paper, second-order interactions are introduced into competitive neural networks (NNs) and the multistability is discussed for second-order competitive NNs (SOCNNs) with nondecreasing saturated activation functions. Firstly, based on decomposition of state space, the Cauchy convergence principle, and inequality techniques, some sufficient conditions ensuring the local exponential stability of 2^N equilibrium points are derived. Secondly, some conditions are obtained for ascertaining equilibrium points to be locally exponentially stable and to be located in any designated region. Thirdly, the theory is extended to more general saturated activation functions with 2r corner points, and a sufficient criterion is given under which the SOCNNs can have (r+1)^N locally exponentially stable equilibrium points. Even if there are no second-order interactions, the obtained results are less restrictive than those in some recent works. Finally, three examples with their simulations are presented to verify the theoretical analysis.

  2. Fast and accurate fitting and filtering of noisy exponentials in Legendre space.

    PubMed

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the sum of squared differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters.
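
    A minimal sketch of the Legendre-domain idea (not the published parameter-retrieval formulas): a noisy exponential is projected onto a low-order Legendre basis, which smooths it without the phase shift of a causal lowpass filter. The decay constant, noise level, and polynomial degree are assumptions.

        # Sketch: projecting a noisy exponential onto a low-dimensional Legendre basis.
        # The truncated expansion removes much of the noise without a filter phase shift
        # (the paper's amplitude/time-constant retrieval step is more involved).
        import numpy as np
        from numpy.polynomial import Legendre

        t = np.linspace(0.0, 5.0, 1000)
        rng = np.random.default_rng(5)
        noisy = 2.0 * np.exp(-1.3 * t) + 0.05 * rng.standard_normal(t.size)   # illustrative data

        series = Legendre.fit(t, noisy, deg=8)      # least-squares fit in Legendre space
        smoothed = series(t)                        # evaluate the truncated expansion
        print(np.max(np.abs(smoothed - 2.0 * np.exp(-1.3 * t))))   # residual small vs. 0.05 noise level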

  3. Matrix elements of N-particle explicitly correlated Gaussian basis functions with complex exponential parameters

    NASA Astrophysics Data System (ADS)

    Bubin, Sergiy; Adamowicz, Ludwik

    2006-06-01

    In this work we present analytical expressions for Hamiltonian matrix elements with spherically symmetric, explicitly correlated Gaussian basis functions with complex exponential parameters for an arbitrary number of particles. The expressions are derived using the formalism of matrix differential calculus. In addition, we present expressions for the energy gradient that includes derivatives of the Hamiltonian integrals with respect to the exponential parameters. The gradient is used in the variational optimization of the parameters. All the expressions are presented in the matrix form suitable for both numerical implementation and theoretical analysis. The energy and gradient formulas have been programmed and used to calculate ground and excited states of the He atom using an approach that does not involve the Born-Oppenheimer approximation.

  4. Matrix elements of N-particle explicitly correlated Gaussian basis functions with complex exponential parameters.

    PubMed

    Bubin, Sergiy; Adamowicz, Ludwik

    2006-06-14

    In this work we present analytical expressions for Hamiltonian matrix elements with spherically symmetric, explicitly correlated Gaussian basis functions with complex exponential parameters for an arbitrary number of particles. The expressions are derived using the formalism of matrix differential calculus. In addition, we present expressions for the energy gradient that includes derivatives of the Hamiltonian integrals with respect to the exponential parameters. The gradient is used in the variational optimization of the parameters. All the expressions are presented in the matrix form suitable for both numerical implementation and theoretical analysis. The energy and gradient formulas have been programmed and used to calculate ground and excited states of the He atom using an approach that does not involve the Born-Oppenheimer approximation.

  5. A General Exponential Framework for Dimensionality Reduction.

    PubMed

    Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan

    2014-02-01

    As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low-dimensional representations from high-dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the neighborhood size; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small-distance pairs. To address these issues, here we propose exponential embedding using the matrix exponential and provide a general framework for dimensionality reduction. In the framework, the matrix exponential can be roughly interpreted as a random walk over the feature similarity matrix, and thus is more robust. The positive definite property of the matrix exponential deals with the SSS problem. The decay behavior of exponential embedding places more emphasis on small-distance pairs. Under this framework, we apply the matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on synthetic data, UCI datasets, and the Georgia Tech face database show that the proposed new framework can well address the issues mentioned above.
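
    A rough sketch of the core operation only (not the LPP/UDP/MFA extensions): form a symmetric similarity matrix, take its matrix exponential, which is guaranteed positive definite, and embed with its leading eigenvectors. The data, Gaussian kernel, and embedding dimension are arbitrary choices for the example.

        # Minimal sketch of the core idea: embed data using eigenvectors of exp(S),
        # the matrix exponential of a symmetric similarity matrix (always positive
        # definite, which sidesteps the small-sample-size problem).  Data are arbitrary.
        import numpy as np
        from scipy.linalg import expm
        from scipy.spatial.distance import pdist, squareform

        rng = np.random.default_rng(6)
        X = rng.standard_normal((50, 5))                   # 50 samples, 5 features

        S = np.exp(-squareform(pdist(X)) ** 2)             # Gaussian pairwise similarity
        expS = expm(S)                                     # symmetric positive definite

        w, V = np.linalg.eigh(expS)
        embedding = V[:, -2:]                              # top-2 "exponential embedding" directions
        print(embedding.shape, w.min() > 0)                # (50, 2) True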

  6. BORAX V EXPONENTIAL EXPERIMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirn, F.S.; Hagen, J.I.

    1963-04-01

    The cadmium ratio was measured in an exponential mockup of Borax V as a function of the void fraction. The extent of voids, simulated by lengths of closed polyethylene tubes, ranged from 0 to 40%. The corresponding cadmium ratios ranged from 6.1 to 4.6. The exponential was also used to determine the radial flux pattern across a Borax-type fuel assembly and the fine flux detail in and around fuel rods. For a normal loading the maximum-to-average power generation across an assembly was 1.24. (auth)

  7. Computer Diagnostics.

    ERIC Educational Resources Information Center

    Tondow, Murray

    The report deals with the influence of computer technology on education, particularly guidance. The need for computers is a result of increasing complexity which is defined as: (1) an exponential increase of information; (2) an exponential increase in dissemination capabilities; and (3) an accelerating curve of change. Listed are five functions of…

  8. Bacterial genomes lacking long-range correlations may not be modeled by low-order Markov chains: the role of mixing statistics and frame shift of neighboring genes.

    PubMed

    Cocho, Germinal; Miramontes, Pedro; Mansilla, Ricardo; Li, Wentian

    2014-12-01

    We examine the relationship between exponential correlation functions and Markov models in a bacterial genome in detail. Despite the well known fact that Markov models generate sequences with correlation function that decays exponentially, simply constructed Markov models based on nearest-neighbor dimer (first-order), trimer (second-order), up to hexamer (fifth-order), and treating the DNA sequence as being homogeneous all fail to predict the value of exponential decay rate. Even reading-frame-specific Markov models (both first- and fifth-order) could not explain the fact that the exponential decay is very slow. Starting with the in-phase coding-DNA-sequence (CDS), we investigated correlation within a fixed-codon-position subsequence, and in artificially constructed sequences by packing CDSs with out-of-phase spacers, as well as altering CDS length distribution by imposing an upper limit. From these targeted analyses, we conclude that the correlation in the bacterial genomic sequence is mainly due to a mixing of heterogeneous statistics at different codon positions, and the decay of correlation is due to the possible out-of-phase between neighboring CDSs. There are also small contributions to the correlation from bases at the same codon position, as well as by non-coding sequences. These show that the seemingly simple exponential correlation functions in bacterial genome hide a complexity in correlation structure which is not suitable for a modeling by Markov chain in a homogeneous sequence. Other results include: use of the (absolute value) second largest eigenvalue to represent the 16 correlation functions and the prediction of a 10-11 base periodicity from the hexamer frequencies. Copyright © 2014 Elsevier Ltd. All rights reserved.
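
    As a sketch of the baseline fact being tested, the snippet below computes the geometric correlation decay implied by a homogeneous first-order Markov chain, governed by the second-largest-magnitude eigenvalue of the transition matrix; the 4-state transition matrix is hypothetical, not estimated from a genome.

        # Sketch of the baseline fact examined above: for a homogeneous first-order Markov
        # chain, correlations decay geometrically as |lambda_2|**k, with lambda_2 the
        # second-largest-magnitude eigenvalue of the transition matrix P.
        # P below is a hypothetical 4-state (A,C,G,T) matrix, not estimated from data.
        import numpy as np

        P = np.array([[0.40, 0.20, 0.25, 0.15],
                      [0.25, 0.30, 0.20, 0.25],
                      [0.20, 0.30, 0.30, 0.20],
                      [0.15, 0.25, 0.25, 0.35]])

        eigvals = np.linalg.eigvals(P)
        lam2 = sorted(np.abs(eigvals), reverse=True)[1]     # second-largest magnitude
        decay_rate = -np.log(lam2)                          # exponential decay rate per base
        print(lam2, decay_rate)                             # correlations ~ exp(-decay_rate * k)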

  9. Quantifying Uncertainties in N2O Emission Due to N Fertilizer Application in Cultivated Areas

    PubMed Central

    Philibert, Aurore; Loyce, Chantal; Makowski, David

    2012-01-01

    Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty in this estimated value by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable "applied N", (ii) the function relating N2O emission to applied N (exponential or linear function), (iii) fixed or random background (i.e. in the absence of N application) N2O emission and (iv) fixed or random applied N effect. We calculated ranges of uncertainty in N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha−1. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced. PMID:23226430

  10. Life prediction for high temperature low cycle fatigue of two kinds of titanium alloys based on exponential function

    NASA Astrophysics Data System (ADS)

    Mu, G. Y.; Mi, X. Z.; Wang, F.

    2018-01-01

    High temperature low cycle fatigue tests of TC4 and TC11 titanium alloys are carried out under strain control. The relationships between cyclic stress and life and between strain and life are analyzed. A high temperature low cycle fatigue life prediction model for the two titanium alloys is first established using the Manson-Coffin method. The relationship between the number of reversals to failure and the plastic strain range is nonlinear in double-logarithmic coordinates, whereas the Manson-Coffin method assumes a linear relation, so a certain prediction error is unavoidable with that method. To address this problem, a new method based on an exponential function is proposed. The results show that the fatigue life of the two titanium alloys can be predicted accurately and effectively by both methods, with prediction accuracy within a ±1.83 times scatter band. The new exponential-function method gives better fatigue life predictions, with a smaller standard deviation and scatter band, than the Manson-Coffin method for both alloys, and the predictions for TC4 are better than those for TC11.

  11. Crack identification for reinforced concrete using PZT based smart rebar active sensing diagnostic network

    NASA Astrophysics Data System (ADS)

    Song, N. N.; Wu, F.

    2016-04-01

    An active sensing diagnostic system using PZT-based smart rebar for structural health monitoring of reinforced concrete (RC) structures is currently under investigation. Previous test results showed that the system could detect de-bonding of concrete from reinforcement, with diagnostic signals increasing exponentially with the de-bond size. Previous work also showed that the smart rebar can function like regular reinforcement in carrying tensile stresses. In this study, a smart rebar network is used to detect crack damage in concrete based on guided waves. In the experiments, concrete beams with two reinforcement bars were built, and eight sets of PZT elements were mounted onto the bars in an optimized layout to form an active sensing diagnostic system. A 90 kHz, 5-cycle Hanning-windowed tone burst was used as input. Multiple cracks were generated in the concrete structures, and the guided bulk waves propagating between actuators and sensors mounted on different bars detected the crack damage clearly. Cases with both single and multiple cracks were tested, and different crack depths from the surface and different crack numbers were studied. The results show that the amplitude of the sensor output signals decreases linearly as a crack propagates, and decreases exponentially with the number of cracks. The active sensing diagnostic system using a PZT-based smart rebar network thus offers a promising way to obtain concrete crack damage information through the "talk" among sensors.

  12. Lump solutions and interaction phenomenon to the third-order nonlinear evolution equation

    NASA Astrophysics Data System (ADS)

    Kofane, T. C.; Fokou, M.; Mohamadou, A.; Yomba, E.

    2017-11-01

    In this work, the lump solution and the kink solitary wave solution of the (2+1)-dimensional third-order evolution equation are obtained with the Hirota bilinear method through symbolic computation with Maple. We assume that the lump solution is centered at the origin when t = 0. By combining a positive quadratic function with an exponential function, as well as with a hyperbolic cosine function, interaction solutions such as lump-exponential and lump-hyperbolic-cosine are presented. A completely non-elastic interaction between a lump and a kink soliton is observed, showing that a lump solution can be swallowed by a kink soliton.

  13. An accurate method for evaluating the kernel of the integral equation relating lift to downwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations to attain any desired trade-off between accuracy and computing cost.
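
    A simplified sketch of the construction (not the paper's kernel or its exponent-multiplier optimization): a smooth algebraic function is approximated by a sum of exponentials with geometrically spaced exponents, the coefficients coming from linear least squares. The target function, number of terms, and exponent parameters are assumptions for illustration.

        # Simplified sketch: approximate an algebraic function by a sum of exponentials
        # a_k * exp(-b_k * x) whose exponents b_k form a geometric sequence; the
        # coefficients a_k follow from linear least squares.  The target 1/sqrt(1+x^2)
        # merely stands in for the algebraic part of the lift/downwash kernel.
        import numpy as np

        x = np.linspace(0.0, 20.0, 2000)
        target = 1.0 / np.sqrt(1.0 + x ** 2)

        n_terms, ratio, b0 = 12, 1.7, 0.02
        b = b0 * ratio ** np.arange(n_terms)              # geometric exponent spacing
        E = np.exp(-np.outer(x, b))                       # design matrix, shape (2000, 12)
        a, *_ = np.linalg.lstsq(E, target, rcond=None)

        approx = E @ a
        print(np.max(np.abs(approx - target)))            # maximum approximation error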

  14. Regularity of p(·)-superharmonic functions, the Kellogg property and semiregular boundary points

    NASA Astrophysics Data System (ADS)

    Adamowicz, Tomasz; Björn, Anders; Björn, Jana

    2014-11-01

    We study various boundary and inner regularity questions for p(·)-(super)harmonic functions in Euclidean domains. In particular, we prove the Kellogg property and introduce a classification of boundary points for p(·)-harmonic functions into three disjoint classes: regular, semiregular and strongly irregular points. Regular and especially semiregular points are characterized in many ways. The discussion is illustrated by examples. Along the way, we present a removability result for bounded p(·)-harmonic functions and give some new characterizations of W^{1,p(·)}_0 spaces. We also show that p(·)-superharmonic functions are lower semicontinuously regularized, and characterize them in terms of lower semicontinuously regularized supersolutions.

  15. Simplifying the Mathematical Treatment of Radioactive Decay

    ERIC Educational Resources Information Center

    Auty, Geoff

    2011-01-01

    Derivation of the law of radioactive decay is considered without prior knowledge of calculus or the exponential series. Calculus notation and exponential functions are used because ultimately they cannot be avoided, but they are introduced in a simple way and explained as needed. (Contains 10 figures, 1 box, and 1 table.)

  16. Verification of the exponential model of body temperature decrease after death in pigs.

    PubMed

    Kaliszan, Michal; Hauser, Roman; Kaliszan, Roman; Wiczling, Paweł; Buczyński, Janusz; Penkowski, Michal

    2005-09-01

    The authors have conducted a systematic study in pigs to verify the models of post-mortem body temperature decrease currently employed in forensic medicine. Twenty-four hour automatic temperature recordings were performed in four body sites starting 1.25 h after pig killing in an industrial slaughterhouse under typical environmental conditions (19.5-22.5 degrees C). The animals had been randomly selected under a regular manufacturing process. The temperature decrease time plots drawn starting 75 min after death for the eyeball, the orbit soft tissues, the rectum and muscle tissue were found to fit the single-exponential thermodynamic model originally proposed by H. Rainy in 1868. In view of the actual intersubject variability, the addition of a second exponential term to the model was demonstrated to be statistically insignificant. Therefore, the two-exponential model for death time estimation frequently recommended in the forensic medicine literature, even if theoretically substantiated for individual test cases, provides no advantage as regards the reliability of estimation in an actual case. The improvement of the precision of time of death estimation by the reconstruction of an individual curve on the basis of two dead body temperature measurements taken 1 h apart or taken continuously for a longer time (about 4 h), has also been proved incorrect. It was demonstrated that the reported increase of precision of time of death estimation due to use of a multiexponential model, with individual exponential terms to account for the cooling rate of the specific body sites separately, is artifactual. The results of this study support the use of the eyeball and/or the orbit soft tissues as temperature measuring sites at times shortly after death. A single-exponential model applied to the eyeball cooling has been shown to provide a very precise estimation of the time of death up to approximately 13 h after death. For the period thereafter, a better estimation of the time of death is obtained from temperature data collected from the muscles or the rectum.
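
    A hedged sketch of the single-exponential cooling law the study supports, T(t) = T_amb + (T0 - T_amb)·exp(-kt), inverted to give a post-mortem interval from one temperature reading; the initial temperature, ambient temperature, and rate constant below are illustrative, not the study's fitted values.

        # Sketch of the single-exponential cooling model supported by the study:
        # T(t) = T_amb + (T0 - T_amb) * exp(-k*t).  Inverting it gives a post-mortem
        # interval from a single temperature reading.  All numbers are illustrative.
        import numpy as np

        def time_since_death(T_meas, T0=38.5, T_amb=21.0, k=0.10):
            """Invert the single-exponential cooling law; k is an assumed rate in 1/hours."""
            return -np.log((T_meas - T_amb) / (T0 - T_amb)) / k

        print(time_since_death(27.0))   # hours elapsed for an assumed eyeball reading of 27 C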

  17. Alpha models for rotating Navier-Stokes equations in geophysics with nonlinear dispersive regularization

    NASA Astrophysics Data System (ADS)

    Kim, Bong-Sik

    Three dimensional (3D) Navier-Stokes-alpha equations are considered for uniformly rotating geophysical fluid flows (large Coriolis parameter f = 2Ω). The Navier-Stokes-alpha equations are a nonlinear dispersive regularization of usual Navier-Stokes equations obtained by Lagrangian averaging. The focus is on the existence and global regularity of solutions of the 3D rotating Navier-Stokes-alpha equations and the uniform convergence of these solutions to those of the original 3D rotating Navier-Stokes equations for large Coriolis parameters f as alpha → 0. Methods are based on fast singular oscillating limits and results are obtained for periodic boundary conditions for all domain aspect ratios, including the case of three wave resonances which yields nonlinear "2½-dimensional" limit resonant equations for f → 0. The existence and global regularity of solutions of limit resonant equations is established, uniformly in alpha. Bootstrapping from global regularity of the limit equations, the existence of a regular solution of the full 3D rotating Navier-Stokes-alpha equations for large f for an infinite time is established. Then, the uniform convergence of a regular solution of the 3D rotating Navier-Stokes-alpha equations (alpha ≠ 0) to the one of the original 3D rotating Navier-Stokes equations (alpha = 0) for f large but fixed as alpha → 0 follows; this implies "shadowing" of trajectories of the limit dynamical systems by those of the perturbed alpha-dynamical systems. All the estimates are uniform in alpha, in contrast with previous estimates in the literature which blow up as alpha → 0. Finally, the existence of global attractors as well as exponential attractors is established for large f and the estimates are uniform in alpha.

  18. Efficient and stable exponential time differencing Runge-Kutta methods for phase field elastic bending energy models

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqiang; Ju, Lili; Du, Qiang

    2016-07-01

    The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.

  19. Twistor interpretation of slice regular functions

    NASA Astrophysics Data System (ADS)

    Altavilla, Amedeo

    2018-01-01

    Given a slice regular function f : Ω ⊂ H → H, with Ω ∩ R ≠ ∅, it is possible to lift it to surfaces in the twistor space CP3 of S4 ≃ H ∪ { ∞ } (see Gentili et al., 2014). In this paper we show that the same result is true if one removes the hypothesis Ω ∩ R ≠ ∅ on the domain of the function f. Moreover we find that if a surface S ⊂CP3 contains the image of the twistor lift of a slice regular function, then S has to be ruled by lines. Starting from these results we find all the projective classes of algebraic surfaces up to degree 3 in CP3 that contain the lift of a slice regular function. In addition we extend and further explore the so-called twistor transform, that is a curve in Gr2(C4) which, given a slice regular function, returns the arrangement of lines whose lift carries on. With the explicit expression of the twistor lift and of the twistor transform of a slice regular function we exhibit the set of slice regular functions whose twistor transform describes a rational line inside Gr2(C4) , showing the role of slice regular functions not defined on R. At the end we study the twistor lift of a particular slice regular function not defined over the reals. This example shows the effectiveness of our approach and opens some questions.

  20. How exponential are FREDs?

    NASA Astrophysics Data System (ADS)

    Schaefer, Bradley E.; Dyson, Samuel E.

    1996-08-01

    A common gamma-ray burst light-curve shape is the "FRED" or "fast-rise exponential-decay." But how exponential is the tail? Are the tails merely decaying with some smoothly decreasing decline rate, or is the functional form an exponential to within the uncertainties? If the shape really is an exponential, then it would be reasonable to assign some physically significant time scale to the burst. That is, there would have to be some specific mechanism that produces the characteristic decay profile. So if an exponential is found, then we will know that the decay light curve profile is governed by one mechanism (at least for simple FREDs) instead of by complex/multiple mechanisms. As such, a specific number amenable to theory can be derived for each FRED. We report on the fitting of exponentials (and two other shapes) to the tails of ten bright BATSE bursts. The BATSE trigger numbers are 105, 257, 451, 907, 1406, 1578, 1883, 1885, 1989, and 2193. Our technique was to perform a least-squares fit to the tail from some time after peak until the light curve approaches background. We find that most FREDs are not exponentials, although a few come close. But since the other candidate shapes come close just as often, we conclude that the FREDs are misnamed.

  1. Splines and control theory

    NASA Technical Reports Server (NTRS)

    Zhang, Zhimin; Tomlinson, John; Martin, Clyde

    1994-01-01

    In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We have constructed the traditional spline functions, including polynomial splines and the classical exponential spline. We have also discovered some new spline functions, such as trigonometric splines and combinations of polynomial, exponential and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate properties of different spline approximations.

  2. The Translated Dowling Polynomials and Numbers.

    PubMed

    Mangontarum, Mahid M; Macodi-Ringia, Amila P; Abdulcarim, Normalah S

    2014-01-01

    More properties for the translated Whitney numbers of the second kind such as horizontal generating function, explicit formula, and exponential generating function are proposed. Using the translated Whitney numbers of the second kind, we will define the translated Dowling polynomials and numbers. Basic properties such as exponential generating functions and explicit formula for the translated Dowling polynomials and numbers are obtained. Convexity, integral representation, and other interesting identities are also investigated and presented. We show that the properties obtained are generalizations of some of the known results involving the classical Bell polynomials and numbers. Lastly, we established the Hankel transform of the translated Dowling numbers.

  3. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    PubMed

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

    The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for unevenly distributed soil PTE data in mining areas. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Seagoing Box Scores and Seakeeping Criteria for Monohull, SWATH, Planing, Hydrofoil, Surface Effect Ships, and Air Cushion Vehicles

    DTIC Science & Technology

    1979-03-01

    DAVID W. TAYLOR NAVAL SHIP RESEARCH AND DEVELOPMENT CENTER, Bethesda, Md. 20064. Seagoing box scores and seakeeping criteria for monohull, SWATH, planing, hydrofoil, surface effect ships, and air cushion vehicles; governing criteria are given for SWATH and monohulls for the transit-alone and transit-plus-sonar-search functions. Notation includes: A, nondimensional coefficients; a, regular wave amplitude; B, ship beam; e, exponential base (e ≈ 2.7183); g, gravity acceleration; Hz, hertz, unit of frequency.

  5. On new non-modal hydrodynamic stability modes and resulting non-exponential growth rates - a Lie symmetry approach

    NASA Astrophysics Data System (ADS)

    Oberlack, Martin; Nold, Andreas; Sanjon, Cedric Wilfried; Wang, Yongqi; Hau, Jan

    2016-11-01

    Classical hydrodynamic stability theory for laminar shear flows, whether considering long-term stability or transient growth, is based on the normal-mode ansatz, or, in other words, on an exponential function in space (stream-wise direction) and time. Recently, it became clear that the normal-mode ansatz and the resulting Orr-Sommerfeld equation are based on essentially three fundamental symmetries of the linearized Euler and Navier-Stokes equations: translation in space and time and scaling of the dependent variable. Further, the Kelvin mode of linear shear flow seemed to be an exception in this context, as that flow admits a fourth symmetry resulting in the classical Kelvin mode, which is rather different from a normal mode. However, very recently it was discovered that most of the classical canonical shear flows, such as linear shear, Couette, plane and round Poiseuille, Taylor-Couette, the Lamb-Oseen vortex, or the asymptotic suction boundary layer, admit more symmetries. This, in turn, led to new problem-specific non-modal ansatz functions. In contrast to the exponential growth rate in time of the modal ansatz, the new non-modal ansatz functions usually lead to an algebraic growth or decay rate, while for the asymptotic suction boundary layer a double-exponential growth or decay is observed.

  6. Enhanced Response Time of Electrowetting Lenses with Shaped Input Voltage Functions.

    PubMed

    Supekar, Omkar D; Zohrabi, Mo; Gopinath, Juliet T; Bright, Victor M

    2017-05-16

    Adaptive optical lenses based on the electrowetting principle are being rapidly implemented in many applications, such as microscopy, remote sensing, displays, and optical communication. To characterize the response of these electrowetting lenses, the dependence upon direct current (DC) driving voltage functions was investigated in a low-viscosity liquid system. Cylindrical lenses with inner diameters of 2.45 and 3.95 mm were used to characterize the dynamic behavior of the liquids under DC voltage electrowetting actuation. Increasing the rise time of the input exponential driving voltage damps the originally underdamped system response, enabling a smooth response from the lens. We experimentally determined the optimal rise times for the fastest response from the lenses. We have also performed numerical simulations of the lens actuation with input exponential driving voltage to understand the variation in the dynamics of the liquid-liquid interface with various input rise times. We further enhanced the response time of the devices by shaping the input voltage function with multiple exponential rise times. For the 3.95 mm inner diameter lens, we achieved a response time improvement of 29% when compared to the fastest response obtained using a single-exponential driving voltage. The technique shows great promise for applications that require fast response times.

  7. Numerically stable formulas for a particle-based explicit exponential integrator

    NASA Astrophysics Data System (ADS)

    Nadukandi, Prashanth

    2015-05-01

    Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
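
    As a scalar illustration of the key numerical device (the paper handles matrix arguments and higher orders), the sketch below evaluates the first divided difference of exp by a series about its removable singularity whenever the nodes nearly coincide; the switching tolerance and series order are assumptions.

        # Sketch of the key numerical device: the divided difference of exp,
        # f[a,b] = (exp(a)-exp(b))/(a-b), is evaluated by a series about the removable
        # singularity when a is close to b, avoiding catastrophic cancellation.
        # (The paper applies the same idea to higher orders and matrix arguments.)
        import numpy as np

        def dd_exp(a, b, tol=1e-3):
            if abs(a - b) > tol:
                return (np.exp(a) - np.exp(b)) / (a - b)
            # exact identity: f[a,b] = exp((a+b)/2) * sinh(h)/h with h = (a-b)/2;
            # sinh(h)/h is expanded in a short series around h = 0.
            h = 0.5 * (a - b)
            sinhc = 1.0 + h**2 / 6.0 + h**4 / 120.0 + h**6 / 5040.0
            return np.exp(0.5 * (a + b)) * sinhc

        print(dd_exp(1.0, 2.0))            # safe direct formula
        print(dd_exp(1.0, 1.0 + 1e-9))     # series branch, no cancellation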

  8. Efficient field-theoretic simulation of polymer solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villet, Michael C.; Fredrickson, Glenn H., E-mail: ghf@mrl.ucsb.edu; Department of Materials, University of California, Santa Barbara, California 93106

    2014-12-14

    We present several developments that facilitate the efficient field-theoretic simulation of polymers by complex Langevin sampling. A regularization scheme using finite Gaussian excluded volume interactions is used to derive a polymer solution model that appears free of ultraviolet divergences and hence is well-suited for lattice-discretized field theoretic simulation. We show that such models can exhibit ultraviolet sensitivity, a numerical pathology that dramatically increases sampling error in the continuum lattice limit, and further show that this pathology can be eliminated by appropriate model reformulation by variable transformation. We present an exponential time differencing algorithm for integrating complex Langevin equations for field-theoretic simulation, and show that the algorithm exhibits excellent accuracy and stability properties for our regularized polymer model. These developments collectively enable substantially more efficient field-theoretic simulation of polymers, and illustrate the importance of simultaneously addressing analytical and numerical pathologies when implementing such computations.

  9. Regularization with numerical extrapolation for finite and UV-divergent multi-loop integrals

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Kapenga, J.; Olagbemi, O.

    2018-03-01

    We give numerical integration results for Feynman loop diagrams such as those covered by Laporta (2000) and by Baikov and Chetyrkin (2010), and which may give rise to loop integrals with UV singularities. We explore automatic adaptive integration using multivariate techniques from the PARINT package for multivariate integration, as well as iterated integration with programs from the QUADPACK package, and a trapezoidal method based on a double exponential transformation. PARINT is layered over MPI (Message Passing Interface), and incorporates advanced parallel/distributed techniques including load balancing among processes that may be distributed over a cluster or a network/grid of nodes. Results are included for 2-loop vertex and box diagrams and for sets of 2-, 3- and 4-loop self-energy diagrams with or without UV terms. Numerical regularization of integrals with singular terms is achieved by linear and non-linear extrapolation methods.
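
    A minimal sketch of one building block mentioned above, the trapezoidal rule under a double-exponential (tanh-sinh) transformation, applied to integrals on (0, 1) with endpoint singularities; it is not the PARINT/QUADPACK machinery, and the step size and truncation are assumptions.

        # Minimal sketch of the double-exponential (tanh-sinh) transformation plus the
        # trapezoidal rule for integrals over (0, 1); endpoint singularities are tamed
        # because the transformed integrand decays double-exponentially.
        import numpy as np

        def tanh_sinh(f, h=0.05, n=60):
            """Approximate the integral of f over (0, 1) with the tanh-sinh rule."""
            t = h * np.arange(-n, n + 1)
            u = 0.5 * np.pi * np.sinh(t)
            x = 0.5 * (np.tanh(u) + 1.0)                    # abscissas in (0, 1)
            w = 0.25 * np.pi * h * np.cosh(t) / np.cosh(u) ** 2
            return np.sum(w * f(x))

        print(tanh_sinh(lambda x: 1.0 / np.sqrt(x)))        # ~2.0 despite the x = 0 singularity
        print(tanh_sinh(lambda x: np.log(x)))               # ~-1.0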

  10. Asymptotic analysis of quasilinear parabolic-hyperbolic equations describing the large longitudinal motion of a light viscoelastic bar with a heavy attachment

    NASA Astrophysics Data System (ADS)

    Yip, Shui Cheung

    We study the longitudinal motion of a nonlinearly viscoelastic bar with one end fixed and the other end attached to a heavy tip mass. This problem is a precise continuum mechanical analog of the basic discrete mechanical problem of the motion of a mass point on a (massless) spring. This motion is governed by an initial-boundary-value problem for a class of third-order quasilinear parabolic-hyperbolic partial differential equations subject to a nonstandard boundary condition, which is the equation of motion of the tip mass. The ratio of the mass of the bar to that of the tip mass is taken to be a small parameter ε. We prove that this problem has a unique regular solution that admits a valid asymptotic expansion, including an initial-layer expansion, in powers of ε for ε near 0. The fundamental constitutive hypothesis that the tension be a uniformly monotone function of the strain rate plays a critical role in a delicate proof that each term of the initial layer expansion decays exponentially in time. These results depend on new decay estimates for the solution of quasilinear parabolic equations. The constitutive hypothesis that the viscosity become large where the bar nears total compression leads to important uniform bounds for the strain and the strain rate. Higher-order energy estimates support the proof by the Schauder Fixed-Point Theorem of the existence of solutions having a level of regularity appropriate for the asymptotics.

  11. State of charge modeling of lithium-ion batteries using dual exponential functions

    NASA Astrophysics Data System (ADS)

    Kuo, Ting-Jung; Lee, Kung-Yen; Huang, Chien-Kang; Chen, Jau-Horng; Chiu, Wei-Li; Huang, Chih-Fang; Wu, Shuen-De

    2016-05-01

    A mathematical model is developed by fitting the discharging curve of LiFePO4 batteries and used to investigate the relationship between the state of charge and the closed-circuit voltage. The proposed mathematical model consists of dual exponential terms and a constant term, which can closely fit the characteristics of the dual equivalent RC circuits representing a LiFePO4 battery. One exponential term represents the stable discharging behavior, the other represents the unstable discharging behavior, and the constant term represents the cut-off voltage.
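
    A hedged sketch of fitting the stated dual-exponential form V(SOC) = a·exp(b·SOC) + c·exp(d·SOC) + e to a synthetic discharge curve; the "true" constants are invented for illustration and are not the paper's LiFePO4 parameters.

        # Sketch: fitting a dual-exponential voltage model
        # V(soc) = a*exp(b*soc) + c*exp(d*soc) + e  to a synthetic discharge curve.
        # The "true" constants below are illustrative, not the paper's LiFePO4 fit.
        import numpy as np
        from scipy.optimize import curve_fit

        def dual_exp(soc, a, b, c, d, e):
            return a * np.exp(b * soc) + c * np.exp(d * soc) + e

        soc = np.linspace(0.05, 1.0, 200)
        truth = (0.08, 1.2, -0.5, -25.0, 3.2)          # constant term plays the cut-off/plateau role
        rng = np.random.default_rng(7)
        v = dual_exp(soc, *truth) + 0.003 * rng.standard_normal(soc.size)

        popt, _ = curve_fit(dual_exp, soc, v, p0=(0.1, 1.0, -0.4, -20.0, 3.0), maxfev=20000)
        print(popt)                                    # recovered parameters close to `truth`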

  12. A multi-pixel InSAR time series analysis method: Simultaneous estimation of atmospheric noise, orbital errors and deformation

    NASA Astrophysics Data System (ADS)

    Jolivet, R.; Simons, M.

    2016-12-01

    InSAR time series analysis allows reconstruction of ground deformation with meter-scale spatial resolution and high temporal sampling. For instance, the ESA Sentinel-1 Constellation is capable of providing 6-day temporal sampling, thereby opening a new window on the spatio-temporal behavior of tectonic processes. However, due to computational limitations, most time series methods rely on a pixel-by-pixel approach. This limitation is a concern because (1) accounting for orbital errors requires referencing all interferograms to a common set of pixels before reconstruction of the time series and (2) spatially correlated atmospheric noise due to tropospheric turbulence is ignored. Decomposing interferograms into statistically independent wavelets will mitigate issues of correlated noise, but prior estimation of orbital uncertainties will still be required. Here, we explore a method that considers all pixels simultaneously when solving for the spatio-temporal evolution of interferometric phase. Our method is based on a massively parallel implementation of a conjugate direction solver. We consider an interferogram as the sum of the phase difference between 2 SAR acquisitions and the corresponding orbital errors. In addition, we fit the temporal evolution with a physically parameterized function while accounting for spatially correlated noise in the data covariance. We assume noise is isotropic for any given InSAR pair with a covariance described by an exponential function that decays with increasing separation distance between pixels. We regularize our solution in space using a similar exponential function as model covariance. Given the problem size, we avoid matrix multiplications of the full covariances by computing convolutions in the Fourier domain. We first solve the unregularized least squares problem using the LSQR algorithm to approach the final solution, then run our conjugate direction solver to account for data and model covariances. We present synthetic tests showing the efficiency of our method. We then reconstruct a 20-year continuous time series covering Northern Chile. Without input from any additional GNSS data, we recover the secular deformation rate, seasonal oscillations and the deformation fields from the 2005 Mw 7.8 Tarapaca and 2007 Mw 7.7 Tocopilla earthquakes.

  13. Mean Excess Function as a method of identifying sub-exponential tails: Application to extreme daily rainfall

    NASA Astrophysics Data System (ADS)

    Nerantzaki, Sofia; Papalexiou, Simon Michael

    2017-04-01

    Identifying precisely the distribution tail of a geophysical variable is difficult, or even impossible. First, the tail is the part of the distribution for which the least empirical information is available; second, a universally accepted definition of the tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can be generally categorized into two major families, i.e., sub-exponential (heavy-tailed) and hyper-exponential (light-tailed). This study aims to update the Mean Excess Function (MEF), providing a useful tool to assess which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and results in a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. The validation of the method using Monte Carlo techniques on four theoretical distributions covering major tail cases (Pareto type II, Log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records from all over the world, with sample sizes over 100 years, were examined, revealing that heavy-tailed distributions describe rainfall extremes more accurately.
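    A minimal sketch of the empirical MEF idea (not the authors' confidence-interval construction) is shown below; the threshold grid, the Pareto test sample and the slope check are illustrative choices only:

      # Sketch: empirical Mean Excess Function e(u) = E[X - u | X > u].
      # A roughly zero slope at high thresholds is consistent with an exponential tail;
      # a systematically increasing MEF suggests a sub-exponential (heavy) tail.
      import numpy as np

      def mean_excess(x, thresholds):
          x = np.asarray(x, dtype=float)
          return np.array([(x[x > u] - u).mean() for u in thresholds])

      rng = np.random.default_rng(0)
      sample = rng.pareto(3.0, 5000) + 1.0          # heavy-tailed test sample (Pareto)
      u = np.quantile(sample, np.linspace(0.5, 0.98, 25))
      slope = np.polyfit(u, mean_excess(sample, u), 1)[0]
      print("MEF slope over upper thresholds:", slope)   # > 0 hints at a heavy tail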

  14. Necessary conditions for weighted mean convergence of Lagrange interpolation for exponential weights

    NASA Astrophysics Data System (ADS)

    Damelin, S. B.; Jung, H. S.; Kwon, K. H.

    2001-07-01

    Given a continuous real-valued function f which vanishes outside a fixed finite interval, we establish necessary conditions for weighted mean convergence of Lagrange interpolation for a general class of even weights w which are of exponential decay on the real line or at the endpoints of (-1,1).

  15. Using Exponential Smoothing to Specify Intervention Models for Interrupted Time Series.

    ERIC Educational Resources Information Center

    Mandell, Marvin B.; Bretschneider, Stuart I.

    1984-01-01

    The authors demonstrate how exponential smoothing can play a role in the identification of the intervention component of an interrupted time-series design model that is analogous to the role that the sample autocorrelation and partial autocorrelation functions serve in the identification of the noise portion of such a model. (Author/BW)

  16. Looking for Connections between Linear and Exponential Functions

    ERIC Educational Resources Information Center

    Lo, Jane-Jane; Kratky, James L.

    2012-01-01

    Students frequently have difficulty determining whether a given real-life situation is best modeled as a linear relationship or as an exponential relationship. One root of such difficulty is the lack of deep understanding of the very concept of "rate of change." The authors will provide a lesson that allows students to reveal their misconceptions…

  17. Studies in Dialogue and Discourse: An Exponential Law of Successive Questioning

    ERIC Educational Resources Information Center

    Mishler, Elliot G.

    1975-01-01

    The structure of natural conversations in first-grade classrooms is the focus of this inquiry. Analyses of a particular type of discourse, namely, connected conversations initiated and sustained by questioning, suggest that the probability that a conversation will be continued may be expressed as a simple exponential function. (Author/RM)

  18. Automatic selection of arterial input function using tri-exponential models

    NASA Astrophysics Data System (ADS)

    Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David

    2009-02-01

    Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and the dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted for every candidate AIF using the Levenberg-Marquardt method, and the best fitted AIF is selected. Our method has been applied in DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rate in artery segmentation for 19 cases is 89.6% +/- 15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2 = 0.946, P(T<=t) = 0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
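    A minimal sketch of a tri-exponential fit by Levenberg-Marquardt is given below; the function name aif_triexp, the parameterization and the synthetic signal are illustrative assumptions, not the paper's implementation:

      # Sketch: fit a tri-exponential arterial-input-function model using the
      # Levenberg-Marquardt algorithm (scipy's 'lm' method).
      import numpy as np
      from scipy.optimize import curve_fit

      def aif_triexp(t, a1, k1, a2, k2, a3, k3):
          return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) + a3 * np.exp(-k3 * t)

      t = np.linspace(0, 5, 120)                     # minutes, synthetic time axis
      truth = (6.0, 4.0, 1.5, 0.6, 0.3, 0.05)
      signal = aif_triexp(t, *truth) + np.random.normal(0, 0.05, t.size)

      p0 = (5.0, 3.0, 1.0, 0.5, 0.2, 0.1)
      popt, pcov = curve_fit(aif_triexp, t, signal, p0=p0, method="lm", maxfev=20000)
      print("fitted AIF parameters:", np.round(popt, 3))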

  19. Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space

    PubMed Central

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters. PMID:24603904
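    The following sketch shows only the basic idea of moving a noisy exponential into a low-dimensional Legendre space with numpy's Legendre routines; the degree and signal parameters are arbitrary, and this is not the authors' retrieval or filtering algorithm:

      # Sketch: represent a noisy exponential in a low-dimensional Legendre basis.
      # Truncating the expansion both compresses and denoises the record; parameter
      # retrieval can then operate on a handful of Legendre coefficients instead of
      # the full time-domain trace.
      import numpy as np
      from numpy.polynomial import legendre as L

      t = np.linspace(0, 1, 2000)
      x = 2.0 * np.exp(-t / 0.15) + np.random.normal(0, 0.05, t.size)

      deg = 12                                       # low-dimensional Legendre space
      coef = L.legfit(2 * t - 1, x, deg)             # map t in [0,1] onto [-1,1]
      x_smooth = L.legval(2 * t - 1, coef)           # reconstruction acts as a filter
      print("first Legendre coefficients:", np.round(coef[:4], 3))
      print("residual rms:", np.std(x - x_smooth))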

  20. Multiserver Queueing Model subject to Single Exponential Vacation

    NASA Astrophysics Data System (ADS)

    Vijayashree, K. V.; Janani, B.

    2018-04-01

    A multi-server queueing model subject to a single exponential vacation is considered. Arrivals join the queue according to a Poisson process, and service takes place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return after a fixed interval of time. The servers then start providing service if there are waiting customers; otherwise they remain idle until the next busy period begins. The vacation times are also assumed to be exponentially distributed. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the servers are obtained explicitly. Numerical illustrations are also added to visualize the effect of various parameters.

  1. Role of exponential type random invexities for asymptotically sufficient efficiency conditions in semi-infinite multi-objective fractional programming.

    PubMed

    Verma, Ram U; Seol, Youngsoo

    2016-01-01

    First, a new notion of the random exponential Hanson-Antczak type [Formula: see text]-V-invexity is introduced, which generalizes most of the existing notions in the literature; second, a random function [Formula: see text] of the second order is defined; and finally, a class of asymptotically sufficient efficiency conditions in semi-infinite multi-objective fractional programming is established. Furthermore, several sets of asymptotic sufficiency results, in which various generalized exponential type [Formula: see text]-V-invexity assumptions are imposed on certain vector functions whose components are the individual problem functions as well as some of their combinations, are examined and proved. To the best of our knowledge, all the established results on the semi-infinite aspects of multi-objective fractional programming are new; this is a newly emerging field of interdisciplinary research. We also observe that the investigated results can be modified and applied to several special classes of nonlinear programming problems.

  2. On E-discretization of tori of compact simple Lie groups. II

    NASA Astrophysics Data System (ADS)

    Hrivnák, Jiří; Juránek, Michal

    2017-10-01

    Ten types of discrete Fourier transforms of Weyl orbit functions are developed. Generalizing one-dimensional cosine, sine, and exponential, each type of the Weyl orbit function represents an exponential symmetrized with respect to a subgroup of the Weyl group. Fundamental domains of even affine and dual even affine Weyl groups, governing the argument and label symmetries of the even orbit functions, are determined. The discrete orthogonality relations are formulated on finite sets of points from the refinements of the dual weight lattices. Explicit counting formulas for the number of points of the discrete transforms are deduced. Real-valued Hartley orbit functions are introduced, and all ten types of the corresponding discrete Hartley transforms are detailed.

  3. Reduced Heme Levels Underlie the Exponential Growth Defect of the Shewanella oneidensis hfq Mutant

    PubMed Central

    Mezoian, Taylor; Hunt, Taylor M.; Keane, Meaghan L.; Leonard, Jessica N.; Scola, Shelby E.; Beer, Emma N.; Perdue, Sarah; Pellock, Brett J.

    2014-01-01

    The RNA chaperone Hfq fulfills important roles in small regulatory RNA (sRNA) function in many bacteria. Loss of Hfq in the dissimilatory metal reducing bacterium Shewanella oneidensis strain MR-1 results in slow exponential phase growth and a reduced terminal cell density at stationary phase. We have found that the exponential phase growth defect of the hfq mutant in LB is the result of reduced heme levels. Both heme levels and exponential phase growth of the hfq mutant can be completely restored by supplementing LB medium with 5-aminolevulinic acid (5-ALA), the first committed intermediate synthesized during heme synthesis. Increasing expression of gtrA, which encodes the enzyme that catalyzes the first step in heme biosynthesis, also restores heme levels and exponential phase growth of the hfq mutant. Taken together, our data indicate that reduced heme levels are responsible for the exponential growth defect of the S. oneidensis hfq mutant in LB medium and suggest that the S. oneidensis hfq mutant is deficient in heme production at the 5-ALA synthesis step. PMID:25356668

  4. VO2 Off Transient Kinetics in Extreme Intensity Swimming.

    PubMed

    Sousa, Ana; Figueiredo, Pedro; Keskinen, Kari L; Rodríguez, Ferran A; Machado, Leandro; Vilas-Boas, João P; Fernandes, Ricardo J

    2011-01-01

    Inconsistencies about dynamic asymmetry between the on- and off-transient responses in oxygen uptake are found in the literature. Therefore, the purpose of this study was to characterize the oxygen uptake off-transient kinetics during a maximal 200-m front crawl effort and to examine the degree to which the on/off regularity of the oxygen uptake kinetics response was preserved. Eight high-level male swimmers performed a 200-m front crawl at maximal speed during which oxygen uptake was directly measured through breath-by-breath oximetry (averaged every 5 s). This apparatus was connected to the swimmer by a low hydrodynamic resistance respiratory snorkel and valve system. The on- and off-transient phases were symmetrical in shape (mirror image), since both were adequately fitted by single-exponential regression models and no slow component of the oxygen uptake response developed. Mean (± SD) peak oxygen uptake was 69.0 (± 6.3) mL·kg^-1·min^-1, significantly correlated with the time constant of the off-transient period (r = 0.76, p < 0.05) but not with any of the other oxygen off-transient kinetic parameters studied. Direct relationships between the time constant of the off-transient period and the mean swimming speed of the 200-m (r = 0.77, p < 0.05), and between that time constant and the amplitude of the fast component of the effort period (r = 0.72, p < 0.05), were observed. The mean amplitude and time constant of the off-transient period were significantly greater than the respective on-transient values. In conclusion, although an asymmetry between the on- and off-kinetic parameters was verified, both the 200-m effort and the respective recovery period were better characterized by a single-exponential regression model. Key points: (1) The VO2 slow component was not observed in the recovery period of swimming extreme efforts. (2) The on- and off-transient periods were better fitted by a single exponential function, so the effort and recovery periods of swimming extreme efforts are symmetrical. (3) The rate of VO2 decline during the recovery period may be due not only to the magnitude of the oxygen debt but also to the VO2peak obtained during the effort period.

  5. Exponential Boundary Observers for Pressurized Water Pipe

    NASA Astrophysics Data System (ADS)

    Hermine Som, Idellette Judith; Cocquempot, Vincent; Aitouche, Abdel

    2015-11-01

    This paper deals with state estimation for a pressurized water pipe modeled by nonlinear coupled distributed hyperbolic equations for non-conservative laws with three known boundary measurements. Our objective is to estimate the fourth boundary variable, which will be useful for leakage detection. Two approaches are studied. Firstly, the distributed hyperbolic equations are discretized through a finite-difference scheme. By using the Lipschitz property of the nonlinear term and a Lyapunov function, the exponential stability of the estimation error is proven by solving Linear Matrix Inequalities (LMIs). Secondly, the distributed hyperbolic system is preserved for state estimation. After state transformations, a Luenberger-like PDE boundary observer based on backstepping mathematical tools is proposed. An exponential Lyapunov function is used to prove the stability of the resulting estimation error. The performance of the two observers is shown on a simulated water pipe prototype example.

  6. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.

  7. Evidence for a scale-limited low-frequency earthquake source process

    NASA Astrophysics Data System (ADS)

    Chestler, S. R.; Creager, K. C.

    2017-04-01

    We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10^10 to 1.9 × 10^12 N m (Mw = 0.7-2.1). While regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit increase in Mw), we find that the b value is 6 for large LFEs and <1 for small LFEs. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10^11 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.

  8. Milne, a routine for the numerical solution of Milne's problem

    NASA Astrophysics Data System (ADS)

    Rawat, Ajay; Mohankumar, N.

    2010-11-01

    The routine Milne provides accurate numerical values for the classical Milne's problem of neutron transport for the planar one-speed and isotropic scattering case. The solution is based on the Case eigenfunction formalism. The relevant X functions are evaluated accurately by the Double Exponential (DE) quadrature. The calculated quantities are the extrapolation distance and the scalar and angular fluxes. Also, the H function needed in astrophysical calculations is evaluated as a byproduct.
    Program summary
    Program title: Milne
    Catalogue identifier: AEGS_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGS_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 701
    No. of bytes in distributed program, including test data, etc.: 6845
    Distribution format: tar.gz
    Programming language: Fortran 77
    Computer: PC under Linux or Windows
    Operating system: Ubuntu 8.04 (Kernel version 2.6.24-16-generic), Windows-XP
    Classification: 4.11, 21.1, 21.2
    Nature of problem: The X functions are integral expressions. The convergence of these regular and Cauchy Principal Value integrals is impaired by the singularities of the integrand in the complex plane. The DE quadrature scheme tackles these singularities in a robust manner compared to the standard Gauss quadrature.
    Running time: The test included in the distribution takes a few seconds to run.

  9. Prediction of Unsteady Aerodynamic Coefficients at High Angles of Attack

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Murphy, Patrick C.; Klein, Vladislav; Brandon, Jay M.

    2001-01-01

    The nonlinear indicial response method is used to model the unsteady aerodynamic coefficients in the low speed longitudinal oscillatory wind tunnel test data of the 0.1 scale model of the F-16XL aircraft. Exponential functions are used to approximate the deficiency function in the indicial response. Using one set of oscillatory wind tunnel data and parameter identification method, the unknown parameters in the exponential functions are estimated. The genetic algorithm is used as a least square minimizing algorithm. The assumed model structures and parameter estimates are validated by comparing the predictions with other sets of available oscillatory wind tunnel test data.

  10. Basis convergence of range-separated density-functional theory.

    PubMed

    Franck, Odile; Mussard, Bastien; Luppi, Eleonora; Toulouse, Julien

    2015-02-21

    Range-separated density-functional theory (DFT) is an alternative approach to Kohn-Sham density-functional theory. The strategy of range-separated density-functional theory consists in separating the Coulomb electron-electron interaction into long-range and short-range components and treating the long-range part by an explicit many-body wave-function method and the short-range part by a density-functional approximation. Among the advantages of using many-body methods for the long-range part of the electron-electron interaction is that they are much less sensitive to the one-electron atomic basis compared to the case of the standard Coulomb interaction. Here, we provide a detailed study of the basis convergence of range-separated density-functional theory. We study the convergence of the partial-wave expansion of the long-range wave function near the electron-electron coalescence. We show that the rate of convergence is exponential with respect to the maximal angular momentum L for the long-range wave function, whereas it is polynomial for the case of the Coulomb interaction. We also study the convergence of the long-range second-order Møller-Plesset correlation energy of four systems (He, Ne, N2, and H2O) with cardinal number X of the Dunning basis sets cc-p(C)VXZ and find that the error in the correlation energy is best fitted by an exponential in X. This leads us to propose a three-point complete-basis-set extrapolation scheme for range-separated density-functional theory based on an exponential formula.
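    A minimal sketch of such a three-point extrapolation, assuming the form E(X) = E_CBS + A exp(-bX) with consecutive cardinal numbers; the energies below are placeholders, not values from the paper:

      # Sketch: three-point complete-basis-set extrapolation assuming
      # E(X) = E_CBS + A*exp(-b*X) for cardinal numbers X, X+1, X+2.
      import numpy as np

      def cbs_exponential(X, E):
          X1, X2, X3 = X
          E1, E2, E3 = E
          b = np.log((E1 - E2) / (E2 - E3))          # uses X2 - X1 = X3 - X2 = 1
          A = (E2 - E3) / (np.exp(-b * X2) - np.exp(-b * X3))
          return E3 - A * np.exp(-b * X3)

      # Hypothetical correlation energies (hartree) for X = 2, 3, 4.
      E_X = [-0.0300, -0.0318, -0.0322]
      print("E_CBS estimate:", cbs_exponential([2, 3, 4], E_X))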

  11. The Development of a High Speed Exponential Function Generator for Linearization of Microwave Voltage Controlled Oscillators.

    DTIC Science & Technology

    1985-10-01

    characteristic of a p-n junction to provide exponential linearization in a simple, thermally stable, wideband circuit. Résumé: Oscillators with an exponential (frequency/voltage) ... characteristic found in several oscillators. This circuit, with a large bandwidth, uses the characteristic

  12. Optimality of cycle time and inventory decisions in a two echelon inventory system with exponential price dependent demand under credit period

    NASA Astrophysics Data System (ADS)

    Krugon, Seelam; Nagaraju, Dega

    2017-05-01

    This work describes and proposes a two-echelon inventory system in a supply chain, where the manufacturer offers a credit period to the retailer under exponential price-dependent demand. The model is framed with demand expressed as an exponential function of the retailer's unit selling price. A mathematical model is formulated to demonstrate the optimality of the cycle time, the retailer's replenishment quantity, the number of shipments, and the total relevant cost of the supply chain. The major objective of the paper is to incorporate the trade credit offered by the manufacturer to the retailer under exponential price-dependent demand; the retailer prefers to delay payments to the manufacturer. In the first stage, the retailer's and manufacturer's cost expressions are written as functions of the ordering cost, carrying cost, and transportation cost. In the second stage, the manufacturer and retailer expressions are combined. A MATLAB program is written to derive the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain, and managerial insights are drawn from the resulting optimality criteria. The findings show that the total cost of the supply chain decreases as the credit period increases under exponential price-dependent demand. To analyse the influence of the model parameters, a parametric analysis is also carried out with the help of a numerical example.

  13. Estimates of projection overlap and zones of convergence within frontal-striatal circuits.

    PubMed

    Averbeck, Bruno B; Lehman, Julia; Jacobson, Moriah; Haber, Suzanne N

    2014-07-16

    Frontal-striatal circuits underlie important decision processes, and pathology in these circuits is implicated in many psychiatric disorders. Studies have shown a topographic organization of cortical projections into the striatum. However, work has also shown that there is considerable overlap in the striatal projection zones of nearby cortical regions. To characterize this in detail, we quantified the complete striatal projection zones from 34 cortical injection locations in rhesus monkeys. We first fit a statistical model that showed that the projection zone of a cortical injection site could be predicted with considerable accuracy using a cross-validated model estimated on only the other injection sites. We then examined the fraction of overlap in striatal projection zones as a function of distance between cortical injection sites, and found that there was a highly regular relationship. Specifically, nearby cortical locations had as much as 80% overlap, and the amount of overlap decayed exponentially as a function of distance between the cortical injection sites. Finally, we found that some portions of the striatum received inputs from all the prefrontal regions, making these striatal zones candidates as information-processing hubs. Thus, the striatum is a site of convergence that allows integration of information spread across diverse prefrontal cortical areas. Copyright © 2014 the authors.

  14. Spectral/hp element methods: Recent developments, applications, and perspectives

    NASA Astrophysics Data System (ADS)

    Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.

    2018-02-01

    The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.

  15. Model of flare lightcurve profile observed in soft X-rays

    NASA Astrophysics Data System (ADS)

    Gryciuk, Magdalena; Siarkowski, Marek; Gburek, Szymon; Podgorski, Piotr; Sylwester, Janusz; Kepa, Anna; Mrozek, Tomasz

    We propose a new model for describing solar flare lightcurve profiles observed in soft X-rays. The method assumes that single-peaked `regular' flares seen in lightcurves can be fitted with an elementary time profile that is a convolution of Gaussian and exponential functions. More complex, multi-peaked flares can be decomposed as a sum of elementary profiles. During the flare lightcurve fitting process a linear background is determined as well. In our study we allow the background shape over the event to change linearly with time. The presented approach was originally developed for the small soft X-ray flares recorded by the Polish spectrophotometer SphinX during the very deep solar activity minimum between the 23rd and 24th Solar Cycles. However, the method can and will be used to interpret lightcurves obtained by other soft X-ray broad-band spectrometers at times of both low and higher solar activity. In the paper we introduce the model and present examples of fits to SphinX and GOES 1-8 Å channel observations as well.
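    One way to write such an elementary profile is the closed form of the Gaussian-exponential convolution (an exponentially modified Gaussian); the sketch below uses that form with illustrative parameter values and is not the SphinX fitting code:

      # Sketch: elementary flare profile as a Gaussian convolved with an exponential,
      # plus a linear background, summed over peaks for a multi-peaked lightcurve.
      import numpy as np
      from scipy.special import erfc

      def elementary_flare(t, amp, mu, sigma, lam):
          # Closed form of the Gaussian (mu, sigma) convolved with exp decay (rate lam).
          arg = (lam / 2.0) * (2.0 * mu + lam * sigma**2 - 2.0 * t)
          return amp * (lam / 2.0) * np.exp(arg) * erfc((mu + lam * sigma**2 - t) / (np.sqrt(2) * sigma))

      def lightcurve(t, peaks, bg_a, bg_b):
          total = bg_a + bg_b * t                    # linear background
          for amp, mu, sigma, lam in peaks:
              total = total + elementary_flare(t, amp, mu, sigma, lam)
          return total

      t = np.linspace(0, 3600, 600)                  # seconds
      model = lightcurve(t, peaks=[(1.0, 600, 120, 1/400), (0.4, 1800, 150, 1/600)],
                         bg_a=0.05, bg_b=1e-6)
      print("peak model flux:", model.max())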

  16. Modeling the growth processes of polyelectrolyte multilayers using a quartz crystal resonator.

    PubMed

    Salomäki, Mikko; Kankare, Jouko

    2007-07-26

    The layer-by-layer buildup of chitosan/hyaluronan (CH/HA) and poly(l-lysine)/hyaluronan (PLL/HA) multilayers was followed on a quartz crystal resonator (QCR) in different ionic strengths and at different temperatures. These polyelectrolytes were chosen to demonstrate the method whereby useful information is retrieved from acoustically thick polymer layers during their buildup. Surface acoustic impedance recorded in these measurements gives a single or double spiral when plotted in the complex plane. The shape of this spiral depends on the viscoelasticity of the layer material and regularity of the growth process. The polymer layer is assumed to consist of one or two zones. A mathematical model was devised to represent the separation of the layer to two zones with different viscoelastic properties. Viscoelastic quantities of the layer material and the mode and parameters of the growth process were acquired by fitting a spiral to the experimental data. In all the cases the growth process was mainly exponential as a function of deposition cycles, the growth exponent being between 0.250 and 0.275.

  17. Rounded stretched exponential for time relaxation functions.

    PubMed

    Powles, J G; Heyes, D M; Rickayzen, G; Evans, W A B

    2009-12-07

    A rounded stretched exponential function is introduced, C(t) = exp{(tau_0/tau_E)^beta [1 - (1 + (t/tau_0)^2)^(beta/2)]}, where t is time, and tau_0 and tau_E are two relaxation times. This expression can be used to represent the relaxation function of many real dynamical processes, as at long times, t > tau_0, the function converges to a stretched exponential with normalizing relaxation time tau_E, yet its expansion is even or symmetric in time, which is a statistical mechanical requirement. This expression fits well the shear stress relaxation function for model soft-sphere fluids near coexistence, with tau_E
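    A short numerical check of the quoted expression, with arbitrary parameter values, can be written as:

      # Sketch: evaluate the rounded stretched exponential and compare it with the
      # ordinary stretched exponential exp[-(t/tau_E)^beta], which it follows
      # (up to the constant factor exp[(tau_0/tau_E)^beta]) once t >> tau_0.
      import numpy as np

      def rounded_stretched_exp(t, tau0, tauE, beta):
          return np.exp((tau0 / tauE) ** beta * (1.0 - (1.0 + (t / tau0) ** 2) ** (beta / 2.0)))

      t = np.logspace(-2, 2, 9)
      tau0, tauE, beta = 0.1, 1.0, 0.6
      c_rounded = rounded_stretched_exp(t, tau0, tauE, beta)
      c_stretch = np.exp(-(t / tauE) ** beta)
      print(np.round(np.column_stack([t, c_rounded, c_stretch]), 4))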

  18. A nonstationary Poisson point process describes the sequence of action potentials over long time scales in lateral-superior-olive auditory neurons.

    PubMed

    Turcott, R G; Lowen, S B; Li, E; Johnson, D H; Tsuchitani, C; Teich, M C

    1994-01-01

    The behavior of lateral-superior-olive (LSO) auditory neurons over large time scales was investigated. Of particular interest was the determination as to whether LSO neurons exhibit the same type of fractal behavior as that observed in primary VIII-nerve auditory neurons. It has been suggested that this fractal behavior, apparent on long time scales, may play a role in optimally coding natural sounds. We found that a nonfractal model, the nonstationary dead-time-modified Poisson point process (DTMP), describes the LSO firing patterns well for time scales greater than a few tens of milliseconds, a region where the specific details of refractoriness are unimportant. The rate is given by the sum of two decaying exponential functions. The process is completely specified by the initial values and time constants of the two exponentials and by the dead-time relation. Specific measures of the firing patterns investigated were the interspike-interval (ISI) histogram, the Fano-factor time curve (FFC), and the serial count correlation coefficient (SCC) with the number of action potentials in successive counting times serving as the random variable. For all the data sets we examined, the latter portion of the recording was well approximated by a single exponential rate function since the initial exponential portion rapidly decreases to a negligible value. Analytical expressions available for the statistics of a DTMP with a single exponential rate function can therefore be used for this portion of the data. Good agreement was obtained among the analytical results, the computer simulation, and the experimental data on time scales where the details of refractoriness are insignificant.(ABSTRACT TRUNCATED AT 250 WORDS)

  19. Very slow lava extrusion continued for more than five years after the 2011 Shinmoedake eruption observed from SAR interferometry

    NASA Astrophysics Data System (ADS)

    Ozawa, T.; Miyagi, Y.

    2017-12-01

    Shinmoe-dake, located in SW Japan, erupted in January 2011 and lava accumulated in the crater (e.g., Ozawa and Kozono, EPS, 2013). The last Vulcanian eruption occurred in September 2011, and no eruption has occurred since. Miyagi et al. (GRL, 2014) analyzed TerraSAR-X and Radarsat-2 SAR data acquired after the last eruption and found continuous inflation in the crater. The inflation decayed with time but had not terminated by May 2013. Since the time series of the inflation volume change rate was well fitted by an exponential function with a constant term, we suggested that lava extrusion had continued over the long term due to deflation of a shallow magma source and to magma supply from a deeper source. To investigate the deformation after that period, we applied InSAR to Sentinel-1 and ALOS-2 SAR data. Inflation decayed further and had almost terminated by the end of 2016, meaning that this deformation continued for more than five years after the last eruption. We found that the time series of the inflation volume change rate fits better to a double-exponential function than to a single-exponential function with a constant term. The exponential component with the short time constant settled almost completely within one year of the last eruption. Although the InSAR result from TerraSAR-X data of November 2011 and May 2013 indicated deflation of a shallow source under the crater, such deformation has not been observed in recent SAR data, suggesting that this component was due to deflation of a shallow magma source with excess pressure. In this study, we also found that the long-term component may have decayed exponentially; this factor may reflect deflation of a deep source or delayed vesiculation.

  20. The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Brissette, Fancois; Chen, Jie

    2013-04-01

    Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain due to its simplicity and good performance. However, various probability distributions have been reported for simulating precipitation amount, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amount. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska watersheds) in the province of Quebec (Canada) are used for this assessment. Various indices or statistics, such as the mean, variance, frequency distribution and extreme values, are used to quantify the performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model; while the benefit of additional parameters is then less obvious, the mixed exponential distribution nonetheless appears as the best candidate for hydrological modeling. The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
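    As an illustration of the favored three-parameter model, the sketch below samples and fits a mixed exponential by maximum likelihood; the weight, the two means and the sample are synthetic, not station data:

      # Sketch: mixed exponential for daily precipitation amounts,
      # p(x) = w/m1 * exp(-x/m1) + (1-w)/m2 * exp(-x/m2), fitted by maximum likelihood.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)

      def sample_mixed_exp(n, w, m1, m2):
          pick = rng.random(n) < w
          return np.where(pick, rng.exponential(m1, n), rng.exponential(m2, n))

      def neg_log_lik(params, x):
          w, m1, m2 = params
          pdf = w / m1 * np.exp(-x / m1) + (1 - w) / m2 * np.exp(-x / m2)
          return -np.sum(np.log(pdf))

      x = sample_mixed_exp(5000, w=0.7, m1=2.0, m2=12.0)   # synthetic mm/day amounts
      res = minimize(neg_log_lik, x0=[0.5, 1.0, 10.0], args=(x,),
                     bounds=[(0.01, 0.99), (0.1, 50), (0.1, 100)], method="L-BFGS-B")
      print("fitted (w, m1, m2):", np.round(res.x, 3))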

  1. Alphabet Soup

    ERIC Educational Resources Information Center

    Rebholz, Joachim A.

    2017-01-01

    Graphing functions is an important topic in algebra and precalculus high school courses. The functions that are usually discussed include polynomials, rational, exponential, and trigonometric functions along with their inverses. These functions can be used to teach different aspects of function theory: domain, range, monotonicity, inverse…

  2. Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.

    PubMed

    Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong

    2011-09-01

    Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data is often contaminated by noise signals of unknown intensity. To better preserve the edge features while suppressing the aliasing artifacts and noises, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm solves the image reconstruction as a standard optimization problem including an ℓ2 data fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to noise intensity is introduced in our proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan

    2017-12-01

    This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with the amplify-and-forward relaying scheme. The RF link undergoes Nakagami-m fading, and the Exponentiated Weibull distribution is adopted for the FSO component. The mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF) and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. Under various modulation techniques, we derive the average bit-error rate (BER) based on the Meijer G-function. Evaluations and simulations of the system performance are provided, and the aperture averaging effect is discussed as well.

  4. Concave utility, transaction costs, and risk in measuring discounting of delayed rewards.

    PubMed

    Kirby, Kris N; Santiesteban, Mariana

    2003-01-01

    Research has consistently found that the decline in the present values of delayed rewards as delay increases is better fit by hyperbolic than by exponential delay-discounting functions. However, concave utility, transaction costs, and risk each could produce hyperbolic-looking data, even when the underlying discounting function is exponential. In Experiments 1 (N = 45) and 2 (N = 103), participants placed bids indicating their present values of real future monetary rewards in computer-based 2nd-price auctions. Both experiments suggest that utility is not sufficiently concave to account for the superior fit of hyperbolic functions. Experiment 2 provided no evidence that the effects of transaction costs and risk are large enough to account for the superior fit of hyperbolic functions.
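    For reference, the two discounting forms being compared can be written as follows; the amounts, delays and rate parameter k are arbitrary illustrative values:

      # Sketch: present value of a delayed reward under exponential vs hyperbolic
      # discounting. The hyperbolic form declines more steeply at short delays and
      # more slowly at long delays than the exponential form with the same k.
      import numpy as np

      def exponential_discount(amount, delay, k):
          return amount * np.exp(-k * delay)

      def hyperbolic_discount(amount, delay, k):
          return amount / (1.0 + k * delay)

      delays = np.array([1, 7, 30, 90, 365], dtype=float)   # days
      print("exponential:", np.round(exponential_discount(100, delays, 0.01), 2))
      print("hyperbolic: ", np.round(hyperbolic_discount(100, delays, 0.01), 2))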

  5. The many faces of the quantum Liouville exponentials

    NASA Astrophysics Data System (ADS)

    Gervais, Jean-Loup; Schnittger, Jens

    1994-01-01

    First, it is proven that the three main operator approaches to the quantum Liouville exponentials—that is, the one of Gervais-Neveu (more recently developed further by Gervais), Braaten-Curtright-Ghandour-Thorn, and Otto-Weigt—are equivalent since they are related by simple basis transformations in the Fock space of the free field depending upon the zero-mode only. Second, the GN-G expressions for quantum Liouville exponentials, where the U_q(sl(2)) quantum-group structure is manifest, are shown to be given by q-binomial sums over powers of the chiral fields in the J = 1/2 representation. Third, the Liouville exponentials are expressed as operator tau functions, whose chiral expansion exhibits a q Gauss decomposition, which is the direct quantum analogue of the classical solution of Leznov and Saveliev. It involves q exponentials of quantum-group generators with group "parameters" equal to chiral components of the quantum metric. Fourth, we point out that the OPE of the J = 1/2 Liouville exponential provides the quantum version of the Hirota bilinear equation.

  6. Hydrodynamics-based functional forms of activity metabolism: a case for the power-law polynomial function in animal swimming energetics.

    PubMed

    Papadopoulos, Anthony

    2009-01-01

    The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.

  7. Robust Variable Selection with Exponential Squared Loss.

    PubMed

    Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping

    2013-04-01

    Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are [Formula: see text] and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods.

  8. Robust Variable Selection with Exponential Squared Loss

    PubMed Central

    Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping

    2013-01-01

    Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods. PMID:23913996

  9. Development of the human lateral geniculate nucleus: A morphometric and computerized 3D-reconstruction study.

    PubMed

    Yamaguchi, Katsuyuki

    2018-04-04

    The lateral geniculate nucleus (LGN) is the major relay center of the visual pathway in humans. There are few quantitative data on the morphology of the LGN in prenatal infants. In this study, using serial brain sections, the author investigated the morphology of this nucleus during the second half of the fetal period. Eleven human brains were obtained at routine autopsy from preterm infants aged 20-39 postmenstrual weeks. After fixation, the brain was embedded en bloc in celloidin and cut serially at 30 μm in the horizontal plane. The sections were stained at regular intervals using the Klüver-Barrera method. At 20-21 weeks, the long axis of the LGN declined obliquely from the vertical to horizontal plane, while a deep groove was noted on the ventro-lateral surface of the superior half. At this time, an arcuate cell-sparse zone appeared in the dorso-medial region, indicating the beginning of lamination. From 25 weeks onwards, the magnocellular and parvocellular layers were distinguishable, and the characteristic six-layered structure was recognized. The magnocellular layer covered most of the dorsal surface, and parts of the medial, lateral, and inferior surfaces but not the ventral and superior surfaces. Nuclear volume increased exponentially with age during 20-39 weeks, while the mean neuronal profile area increased linearly during 25-39 weeks. The human LGN develops a deep groove on the ventro-lateral surface at around mid-gestation, when the initial lamination is recognized in the prospective magnocellular layer. Thereafter, the nuclear volume increases exponentially with age. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Algebraic approach to electronic spectroscopy and dynamics.

    PubMed

    Toutounji, Mohamad

    2008-04-28

    Lie algebra, Zassenhaus, and parameter differentiation techniques are utilized to break up the exponential of a bilinear Hamiltonian operator into a product of noncommuting exponential operators by virtue of the theory of Wei and Norman [J. Math. Phys. 4, 575 (1963); Proc. Am. Math. Soc., 15, 327 (1964)]. There are about three different ways to find the Zassenhaus exponents, namely, binomial expansion, Suzuki formula, and q-exponential transformation. A fourth, and most reliable, method is provided. Since linearly displaced and distorted (curvature change upon excitation/emission) Hamiltonians and the spin-boson Hamiltonian may be classified as bilinear Hamiltonians, the presented algebraic algorithm (exponential operator disentanglement exploiting the six-dimensional Lie algebra case) should be useful in spin-boson problems. The linearly displaced and distorted Hamiltonian exponential is only treated here. While the spin-boson model is used here only as a demonstration of the idea, the herein approach is more general and powerful than the specific example treated. The optical linear dipole moment correlation function is algebraically derived using the above-mentioned methods and coherent states. Coherent states are eigenvectors of the bosonic lowering operator a and not of the raising operator a^+. While exp(a^+) translates coherent states, the operation of exp(a^+a^+) on coherent states has always been a challenge, as a^+ has no eigenvectors. Three approaches, and the results, of that operation are provided. Linear absorption spectra are derived, calculated, and discussed. The linear dipole moment correlation function for the pure quadratic coupling case is expressed in terms of Legendre polynomials to better show the even vibronic transitions in the absorption spectrum. Comparison of the present line shapes to those calculated by other methods is provided. Franck-Condon factors for both linear and quadratic couplings are exactly accounted for by the herein calculated linear absorption spectra. This new methodology should easily pave the way to calculating the four-point correlation function, F(tau_1, tau_2, tau_3, tau_4), from which the optical nonlinear response function may be procured, as evaluating F(tau_1, tau_2, tau_3, tau_4) is only evaluating the optical linear dipole moment correlation function iteratively over different time intervals, which should allow calculating various optical nonlinear temporal/spectral signals.

  11. A Parametric Study of Fine-scale Turbulence Mixing Noise

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James; Freund, Jonathan B.

    2002-01-01

    The present paper is a study of aerodynamic noise spectra from model functions that describe the source. The study is motivated by the need to improve the spectral shape of the MGBK jet noise prediction methodology at high frequency. The predicted spectral shape usually appears less broadband than measurements and faster decaying at high frequency. Theoretical representation of the source is based on Lilley's equation. Numerical simulations of high-speed subsonic jets as well as some recent turbulence measurements reveal a number of interesting statistical properties of turbulence correlation functions that may have a bearing on radiated noise. These studies indicate that an exponential spatial function may be a more appropriate representation of a two-point correlation compared to its Gaussian counterpart. The effect of source non-compactness on spectral shape is discussed. It is shown that source non-compactness could well be the differentiating factor between the Gaussian and exponential model functions. In particular, the fall-off of the noise spectra at high frequency is studied and it is shown that a non-compact source with an exponential model function results in a broader spectrum and better agreement with data. An alternate source model that represents the source as a covariance of the convective derivative of fine-scale turbulence kinetic energy is also examined.

  12. Diagrammatic exponentiation for products of Wilson lines

    NASA Astrophysics Data System (ADS)

    Mitov, Alexander; Sterman, George; Sung, Ilmo

    2010-11-01

    We provide a recursive diagrammatic prescription for the exponentiation of gauge theory amplitudes involving products of Wilson lines and loops. This construction generalizes the concept of webs, originally developed for eikonal form factors and cross sections with two eikonal lines, to general soft functions in QCD and related gauge theories. Our coordinate space arguments apply to arbitrary paths for the lines.

  13. Characterization of continuously distributed cortical water diffusion rates with a stretched-exponential model.

    PubMed

    Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S

    2003-10-01

    Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm^2 in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
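    A minimal sketch of fitting the stretched-exponential form S(b) = S0 exp[-(b·DDC)^alpha] to a signal decay, with synthetic data in place of voxel measurements and illustrative starting values:

      # Sketch: stretched-exponential diffusion model fit over the b-value range
      # quoted in the study (500-6500 s/mm^2).
      import numpy as np
      from scipy.optimize import curve_fit

      def stretched_exp(b, s0, ddc, alpha):
          return s0 * np.exp(-(b * ddc) ** alpha)

      b = np.linspace(500, 6500, 13)                 # s/mm^2
      signal = stretched_exp(b, 1.0, 0.8e-3, 0.75) + np.random.normal(0, 0.005, b.size)

      popt, _ = curve_fit(stretched_exp, b, signal, p0=[1.0, 1e-3, 0.9],
                          bounds=([0, 1e-5, 0.1], [2, 5e-3, 1.0]))
      print("S0, DDC (mm^2/s), alpha:", popt)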

  14. Freezing of simple systems using density functional theory

    NASA Astrophysics Data System (ADS)

    de Kuijper, A.; Vos, W. L.; Barrat, J.-L.; Hansen, J.-P.; Schouten, J. A.

    1990-10-01

    Density functional theory (DFT) has been applied to the study of the fluid-solid transition in systems with realistic potentials (soft cores and attractive forces): the purely repulsive WCA Lennard-Jones reference potential (LJT), the full Lennard-Jones potential (LJ) and the exponential-6 potential appropriate for helium and hydrogen. Three different DFT formalisms were used: the formulation of Haymet and Oxtoby (HO) and the new theories of Denton and Ashcroft (MWDA) and of Baus (MELA). The results for the melting pressure are compared with recent simulation and experimental data. The results of the HO version are always too high, the deviation increasing when going from the repulsive Lennard-Jones to the exponential-6 potential of H2. The MWDA gives too low results for the repulsive Lennard-Jones potential. At low temperatures, it fails for the full LJ potential while at high temperatures it is in good agreement. Including the attraction as a mean-field correction gives good results also for low temperatures. The MWDA results are too high for the exponential-6 potentials. The MELA fails completely for the LJT potential and the hydrogen exponential-6 potential, since it does not give a stable solid phase.

  15. Autoregressive processes with exponentially decaying probability distribution functions: applications to daily variations of a stock market index.

    PubMed

    Porto, Markus; Roman, H Eduardo

    2002-04-01

    We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + b y², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially as P(y) ≈ exp(-α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. We find stretched exponential decay for 1 < q < 2 and stretched Gaussian behavior for 0 < q < 1. As an application, we consider the case q = 1 as our starting scheme for modeling the PDF of daily (logarithmic) variations of the Dow Jones stock market index. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretched exponent β = 2/3, in much better agreement with the empirical data.
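
    The following sketch simulates the linear-variance ARCH process described above and checks the predicted exponential tail P(y) ~ exp(-α|y|) with α = 2/b; the parameter values are illustrative.

```python
import numpy as np

# Simulate y_t = sigma(y_{t-1}) * eps_t with sigma^2(y) = a + b*|y|.
rng = np.random.default_rng(1)
a, b, n = 0.1, 1.0, 200_000
y = np.zeros(n)
for t in range(1, n):
    y[t] = np.sqrt(a + b * abs(y[t - 1])) * rng.standard_normal()

# The log-histogram tail slope should be close to -alpha = -2/b.
hist, edges = np.histogram(np.abs(y), bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
keep = hist > 0
slope = np.polyfit(centers[keep][20:], np.log(hist[keep][20:]), 1)[0]
print(slope, -2.0 / b)
```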

  16. Statistical modeling of storm-level Kp occurrences

    USGS Publications Warehouse

    Remick, K.J.; Love, J.J.

    2006-01-01

    We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events occur statistically with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding such Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days respectively.
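
    A hedged sketch of the wait-time analysis (on synthetic data, not the actual Kp record): for a Poisson arrival process the inter-storm wait times are exponentially distributed, so the maximum-likelihood scale is simply the sample mean, which can then be checked with a goodness-of-fit test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
waits = rng.exponential(scale=42.22, size=300)    # synthetic days between large-Kp storms

mean_wait = waits.mean()                          # MLE of the exponential scale
ks = stats.kstest(waits, "expon", args=(0, mean_wait))
print(f"mean wait = {mean_wait:.2f} days, KS p-value = {ks.pvalue:.3f}")
```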

  17. SU-E-T-398: Evaluation of Radiobiological Parameters Using Serial Tumor Imaging During Radiotherapy as An Inverse Ill-Posed Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chvetsov, A; Sandison, G; Schwartz, J

    Purpose: Combining serial tumor imaging with radiobiological modeling can provide more accurate information on the nature of treatment response and what underlies resistance. The purpose of this article is to improve the algorithms related to imaging-based radiobiological modeling of tumor response. Methods: Serial imaging of tumor response to radiation therapy represents a sum of tumor cell sensitivity, tumor growth rates, and the rate of cell loss, which are not separated explicitly. Accurate treatment response assessment would require separation of these radiobiological determinants of treatment response because they define tumor control probability. We show that the problem of reconstruction of radiobiological parameters from serial imaging data can be considered an inverse ill-posed problem described by a Fredholm integral equation of the first kind, because it is governed by a sum of several exponential processes. Therefore, the parameter reconstruction can be solved using regularization methods. Results: To study the reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and a two-level cell population model of tumor response which separates the entire tumor cell population into two subpopulations of viable and lethally damaged cells. The reconstruction was done using a least-squares objective function and a simulated annealing algorithm. Using in vitro data for radiobiological parameters as reference, we showed that the reconstructed values of cell surviving fractions and potential doubling time exhibit non-physical fluctuations if no stabilization algorithms are applied. Variational regularization allowed us to obtain statistical distributions for cell surviving fractions and cell number doubling times comparable to in vitro data. Conclusion: Our results indicate that using variational regularization can increase the number of free parameters in the model and open the way to the development of more advanced algorithms which take into account tumor heterogeneity, for example related to hypoxia.

  18. Electrostatic screening in classical Coulomb fluids: exponential or power-law decay or both? An investigation into the effect of dispersion interactions

    NASA Astrophysics Data System (ADS)

    Kjellander, Roland

    2006-04-01

    It is shown that the nature of the non-electrostatic part of the pair interaction potential in classical Coulomb fluids can have a profound influence on the screening behaviour. Two cases are compared: (i) when the non-electrostatic part equals an arbitrary finite-ranged interaction and (ii) when a dispersion r⁻⁶ interaction potential is included. A formal analysis is done in exact statistical mechanics, including an investigation of the bridge function. It is found that the Coulombic r⁻¹ and the dispersion r⁻⁶ potentials are coupled in a very intricate manner as regards the screening behaviour. The classical one-component plasma (OCP) is a particularly clear example due to its simplicity and is investigated in detail. When the dispersion r⁻⁶ potential is turned on, the screened electrostatic potential from a particle goes from a monotonic exponential decay, exp(-κr)/r, to a power-law decay, r⁻⁸, for large r. The pair distribution function acquires, at the same time, an r⁻¹⁰ decay for large r instead of the exponential one. There still remain exponentially decaying contributions to both functions, but these contributions turn oscillatory when the r⁻⁶ interaction is switched on. When the Coulomb interaction is turned off but the dispersion r⁻⁶ pair potential is kept, the decay of the pair distribution function for large r goes over from the r⁻¹⁰ to an r⁻⁶ behaviour, which is the normal one for fluids of electroneutral particles with dispersion interactions. Differences and similarities compared to binary electrolytes are pointed out.

  19. Tachyonic quench in a free bosonic field theory

    NASA Astrophysics Data System (ADS)

    Montes, Sebastián; Sierra, Germán; Rodríguez-Laguna, Javier

    2018-02-01

    We present a characterization of a bosonic field theory driven by a free (Gaussian) tachyonic Hamiltonian. This regime is obtained from a theory describing two coupled bosonic fields after a regular quench. Relevant physical quantities such as simple correlators, entanglement entropies, and the mutual information of disconnected subregions are computed. We show that the causal structure resembles a critical (massless) quench. For short times, physical quantities also resemble critical quenches. However, exponential divergences end up dominating the dynamics in a very characteristic way. This is related to the fact that the low-frequency modes do not equilibrate. Some applications and extensions are outlined.

  20. Investigation of hyperelastic models for nonlinear elastic behavior of demineralized and deproteinized bovine cortical femur bone.

    PubMed

    Hosseinzadeh, M; Ghoreishi, M; Narooei, K

    2016-06-01

    In this study, the hyperelastic models of demineralized and deproteinized bovine cortical femur bone were investigated and appropriate models were developed. Using uniaxial compression test data, the strain energy versus stretch was calculated and appropriate hyperelastic strain energy functions were fitted to the data in order to calculate the material parameters. To obtain the mechanical behavior under other loading conditions, the hyperelastic strain energy equations were investigated for pure shear and equi-biaxial tension loadings. The results showed that the Mooney-Rivlin and Ogden models cannot predict the mechanical response of demineralized and deproteinized bovine cortical femur bone accurately, while the general exponential-exponential and general exponential-power law models show good agreement with the experimental results. To investigate the sensitivity of the hyperelastic models, a variation of 10% in the material parameters was applied, and the results indicated acceptable stability for the general exponential-exponential and general exponential-power law models. Finally, the uniaxial tension and compression of cortical femur bone were studied using the finite element method in a VUMAT user subroutine of ABAQUS, and the computed stress-stretch curves showed good agreement with the experimental data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Alternative analytical forms to model diatomic systems based on the deformed exponential function.

    PubMed

    da Fonsêca, José Erinaldo; de Oliveira, Heibbe Cristhian B; da Cunha, Wiliam Ferreira; Gargano, Ricardo

    2014-07-01

    Using a deformed exponential function and molecular-orbital theory for the simplest molecular ion, two new analytical functions are proposed to represent the potential energy of ground-state diatomic systems. The quality of these new forms was tested by fitting the ab initio electronic energies of the systems LiH, LiNa, NaH, RbH, KH, H2, Li2, K2, H2(+), BeH(+) and Li2(+). From these fits, it was verified that these new proposals are able to adequately describe homonuclear, heteronuclear and cationic diatomic systems with good accuracy. Vibrational spectroscopic constants obtained from these two proposals are in good agreement with experimental data.

  2. A study on some urban bus transport networks

    NASA Astrophysics Data System (ADS)

    Chen, Yong-Zhou; Li, Nan; He, Da-Ren

    2007-03-01

    In this paper, we present empirical results on the urban bus transport networks (BTNs) of four major cities in China. In a BTN, nodes are bus stops, and two nodes are connected by an edge when the stops are serviced by a common bus route. The empirical results show that the degree distributions of BTNs take exponential function forms. Two other statistical properties of BTNs are also considered, namely the distributions of “the number of stops in a bus route” (represented by S) and “the number of bus routes a stop joins” (represented by R). The distributions of R also show exponential forms, while the distributions of S follow asymmetric, unimodal functions. To explain these empirical results and simulate a possible evolution process of BTNs, we introduce a model whose analytic and numerical results agree well with the empirical facts. Finally, we also discuss some other possible evolution cases, where the degree distribution shows a power law or an interpolation between the power law and the exponential decay.

  3. Effect of exponential density transition on self-focusing of q-Gaussian laser beam in collisionless plasma

    NASA Astrophysics Data System (ADS)

    Valkunde, Amol T.; Vhanmore, Bandopant D.; Urunkar, Trupti U.; Gavade, Kusum M.; Patil, Sandip D.; Takale, Mansing V.

    2018-05-01

    In this work, nonlinear aspects of a high-intensity q-Gaussian laser beam propagating in collisionless plasma with an upward density ramp of exponential profile are studied. We have included the nonlinearity in the dielectric function of the plasma by considering ponderomotive nonlinearity. The differential equation governing the dimensionless beam width parameter is obtained by using the Wentzel-Kramers-Brillouin (WKB) and paraxial approximations and solved numerically by the fourth-order Runge-Kutta method. The effect of the exponential density ramp profile on self-focusing of the q-Gaussian laser beam is systematically studied for various values of q and compared with results for a Gaussian laser beam propagating in collisionless plasma of uniform density. It is found that the exponential plasma density ramp causes the laser beam to become more focused and gives reasonably interesting results.

  4. Compressed exponential relaxation in liquid silicon: Universal feature of the crossover from ballistic to diffusive behavior in single-particle dynamics

    NASA Astrophysics Data System (ADS)

    Morishita, Tetsuya

    2012-07-01

    We report a first-principles molecular-dynamics study of the relaxation dynamics in liquid silicon (l-Si) over a wide temperature range (1000-2200 K). We find that the intermediate scattering function for l-Si exhibits a compressed exponential decay above 1200 K including the supercooled regime, which is in stark contrast to that for normal "dense" liquids which typically show stretched exponential decay in the supercooled regime. The coexistence of particles having ballistic-like motion and those having diffusive-like motion is demonstrated, which accounts for the compressed exponential decay in l-Si. An attempt to elucidate the crossover from the ballistic to the diffusive regime in the "time-dependent" diffusion coefficient is made and the temperature-independent universal feature of the crossover is disclosed.

  5. Building an Understanding of Functions: A Series of Activities for Pre-Calculus

    ERIC Educational Resources Information Center

    Carducci, Olivia M.

    2008-01-01

    Building block toys can be used to illustrate various concepts connected with functions including graphs and rates of change of linear and exponential functions, piecewise functions, and composition of functions. Five brief activities suitable for a pre-calculus course are described.

  6. Basis convergence of range-separated density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franck, Odile, E-mail: odile.franck@etu.upmc.fr; Mussard, Bastien, E-mail: bastien.mussard@upmc.fr; CNRS, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris

    2015-02-21

    Range-separated density-functional theory (DFT) is an alternative approach to Kohn-Sham density-functional theory. The strategy of range-separated density-functional theory consists in separating the Coulomb electron-electron interaction into long-range and short-range components and treating the long-range part by an explicit many-body wave-function method and the short-range part by a density-functional approximation. Among the advantages of using many-body methods for the long-range part of the electron-electron interaction is that they are much less sensitive to the one-electron atomic basis compared to the case of the standard Coulomb interaction. Here, we provide a detailed study of the basis convergence of range-separated density-functional theory. We study the convergence of the partial-wave expansion of the long-range wave function near the electron-electron coalescence. We show that the rate of convergence is exponential with respect to the maximal angular momentum L for the long-range wave function, whereas it is polynomial for the case of the Coulomb interaction. We also study the convergence of the long-range second-order Møller-Plesset correlation energy of four systems (He, Ne, N2, and H2O) with cardinal number X of the Dunning basis sets cc-p(C)VXZ and find that the error in the correlation energy is best fitted by an exponential in X. This leads us to propose a three-point complete-basis-set extrapolation scheme for range-separated density-functional theory based on an exponential formula.
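
    A small sketch of a three-point exponential complete-basis-set extrapolation of the kind proposed above, assuming E(X) = E_CBS + A·exp(-αX) for consecutive cardinal numbers X, X+1, X+2; the input energies are fictitious placeholders.

```python
def cbs_extrapolate(e1, e2, e3):
    # For E(X) = E_CBS + A*exp(-alpha*X) at X, X+1, X+2:
    # (e2 - e3)/(e1 - e2) = exp(-alpha), and the limit follows in closed form.
    r = (e2 - e3) / (e1 - e2)
    return e3 - (e2 - e3) * r / (1.0 - r)

# Fictitious long-range MP2 correlation energies for X = 2, 3, 4 (hartree).
print(cbs_extrapolate(-0.2510, -0.2555, -0.2572))
```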

  7. Effects of topologies on signal propagation in feedforward networks

    NASA Astrophysics Data System (ADS)

    Zhao, Jia; Qin, Ying-Mei; Che, Yan-Qiu

    2018-01-01

    We systematically investigate the effects of topologies on signal propagation in feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. FFNs with different topological structures are constructed with the same number of both in-degrees and out-degrees in each layer and given the same input signal. The propagation of firing patterns and firing rates are found to be affected by the distribution of neuron connections in the FFNs. Synchronous firing patterns emerge in the later layers of FFNs with identical, uniform, and exponential degree distributions, but the number of synchronous spike trains in the output layers of the three topologies obviously differs from one another. The firing rates in the output layers of the three FFNs can be ordered from high to low according to their topological structures as exponential, uniform, and identical distributions, respectively. Interestingly, the sequence of spiking regularity in the output layers of the three FFNs is consistent with the firing rates, but their firing synchronization is in the opposite order. In summary, the node degree is an important factor that can dramatically influence the neuronal network activity.

  8. Effects of topologies on signal propagation in feedforward networks.

    PubMed

    Zhao, Jia; Qin, Ying-Mei; Che, Yan-Qiu

    2018-01-01

    We systematically investigate the effects of topologies on signal propagation in feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. FFNs with different topological structures are constructed with the same number of both in-degrees and out-degrees in each layer and given the same input signal. The propagation of firing patterns and firing rates are found to be affected by the distribution of neuron connections in the FFNs. Synchronous firing patterns emerge in the later layers of FFNs with identical, uniform, and exponential degree distributions, but the number of synchronous spike trains in the output layers of the three topologies obviously differs from one another. The firing rates in the output layers of the three FFNs can be ordered from high to low according to their topological structures as exponential, uniform, and identical distributions, respectively. Interestingly, the sequence of spiking regularity in the output layers of the three FFNs is consistent with the firing rates, but their firing synchronization is in the opposite order. In summary, the node degree is an important factor that can dramatically influence the neuronal network activity.

  9. Mechanical analysis of non-uniform bi-directional functionally graded intelligent micro-beams using modified couple stress theory

    NASA Astrophysics Data System (ADS)

    Bakhshi Khaniki, Hossein; Rajasekaran, Sundaramoorthy

    2018-05-01

    This study presents a comprehensive investigation of the mechanical behavior of non-uniform bi-directional functionally graded beam sensors in the framework of modified couple stress theory. Material variation is modelled through both the length and thickness directions using power-law, sigmoid and exponential functions. Moreover, the beam is assumed to have linear, exponential and parabolic cross-section variation along its length, described by power-law and sigmoid varying functions. Using these assumptions, a general model for microbeams is presented and formulated by employing Hamilton’s principle. The governing equations are solved using a mixed finite element method with the Lagrangian interpolation technique, the Gaussian quadrature method and Wilson’s Lagrangian multiplier method. It is shown that by using bi-directional functionally graded materials in non-uniform microbeams, the mechanical behavior of such structures can be affected noticeably, and the scale parameter has a significant effect on the rigidity of non-uniform bi-directional functionally graded beams.

  10. The mechanism of double-exponential growth in hyper-inflation

    NASA Astrophysics Data System (ADS)

    Mizuno, T.; Takayasu, M.; Takayasu, H.

    2002-05-01

    Analyzing historical data of price indices, we find an extraordinary growth phenomenon in several examples of hyper-inflation in which price changes are approximated nicely by double-exponential functions of time. In order to explain such behavior we apply a general coarse-graining technique from physics, the Monte Carlo renormalization group method, to the price dynamics. Starting from a microscopic stochastic equation describing dealers’ actions in open markets, we obtain a macroscopic noiseless equation for the price consistent with the observation. The effect of auto-catalytic shortening of the characteristic time caused by mob psychology is shown to be responsible for the double-exponential behavior.
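
    As a minimal illustration (synthetic series, not the historical indices), if P(t) ≈ P0·exp(c·exp(d·t)) then log(log(P/P0)) is linear in time, which gives a quick test for double-exponential growth.

```python
import numpy as np

t = np.arange(0, 24)                              # e.g. months (synthetic)
P0, c, d = 1.0, 0.05, 0.25
price_index = P0 * np.exp(c * np.exp(d * t))      # double-exponential growth

y = np.log(np.log(price_index / P0))              # linear in t if growth is double-exponential
slope, intercept = np.polyfit(t, y, 1)
print(slope, np.exp(intercept))                   # recovers d and c
```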

  11. Count distribution for mixture of two exponentials as renewal process duration with applications

    NASA Astrophysics Data System (ADS)

    Low, Yeh Ching; Ong, Seng Huat

    2016-06-01

    A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model over dispersion, a feature often found in observed count data. The computation of the probabilities and renewal function (expected number of renewals) are examined. Parameter estimation by the method of maximum likelihood is considered with applications of the count distribution to real frequency count data exhibiting over dispersion. It is shown that the mixture of exponentials count distribution fits over dispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
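
    A hedged sketch, by simulation rather than the paper's analytical computation, of the count distribution of a renewal process whose durations follow a two-component exponential mixture; over-dispersion shows up as the variance exceeding the mean.

```python
import numpy as np

rng = np.random.default_rng(3)
p, rate1, rate2, T, reps = 0.3, 2.0, 0.5, 10.0, 50_000

counts = np.empty(reps, dtype=int)
for i in range(reps):
    t, n = 0.0, 0
    while True:
        rate = rate1 if rng.random() < p else rate2   # mixture of two exponentials
        t += rng.exponential(1.0 / rate)
        if t > T:
            break
        n += 1
    counts[i] = n

print("mean", counts.mean(), "variance", counts.var())   # variance > mean here
```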

  12. Determination of the functioning parameters in asymmetrical flow field-flow fractionation with an exponential channel.

    PubMed

    Déjardin, P

    2013-08-30

    The flow conditions in normal mode asymmetric flow field-flow fractionation are determined to approach the high retention limit with the requirement d≪l≪w, where d is the particle diameter, l the characteristic length of the sample exponential distribution and w the channel height. The optimal entrance velocity is determined from the solute characteristics, the channel geometry (exponential to rectangular) and the membrane properties, according to a model providing the velocity fields all over the cell length. In addition, a method is proposed for in situ determination of the channel height. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Global exponential stability analysis on impulsive BAM neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Li, Yao-Tang; Yang, Chang-Bo

    2006-12-01

    Using the M-matrix and topological degree tools, sufficient conditions are obtained for the existence, uniqueness and global exponential stability of the equilibrium point of bidirectional associative memory (BAM) neural networks with distributed delays and subjected to impulsive state displacements at fixed instants of time, by constructing a suitable Lyapunov functional. The results remove the usual assumptions of boundedness, monotonicity, and differentiability of the activation functions. It is shown that in some cases the stability criteria can be easily checked. Finally, an illustrative example is given to show the effectiveness of the presented criteria.

  14. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
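
    The sketch below illustrates the mimicry issue on synthetic bout durations: data generated from a two-exponential mixture are fitted with both an exponential and a power-law (Pareto) model and compared via Kolmogorov-Smirnov distances; all parameter values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
bouts = np.concatenate([rng.exponential(0.5, 4000), rng.exponential(8.0, 1000)])

# Exponential fit (MLE scale = sample mean).
d_exp = stats.kstest(bouts, "expon", args=(0, bouts.mean())).statistic

# Power-law (Pareto) fit on the tail above xmin, exponent via the MLE estimator.
xmin = 0.5
tail = bouts[bouts >= xmin]
alpha = 1.0 + tail.size / np.sum(np.log(tail / xmin))
d_pow = stats.kstest(tail, "pareto", args=(alpha - 1.0, 0, xmin)).statistic

print(f"KS distance: exponential {d_exp:.3f}, power law {d_pow:.3f}")
```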

  15. The evolution of pattern camouflage strategies in waterfowl and game birds.

    PubMed

    Marshall, Kate L A; Gluckman, Thanh-Lan

    2015-05-01

    Visual patterns are common in animals. A broad survey of the literature has revealed that different patterns have distinct functions. Irregular patterns (e.g., stipples) typically function in static camouflage, whereas regular patterns (e.g., stripes) have a dual function in both motion camouflage and communication. Moreover, irregular and regular patterns located on different body regions ("bimodal" patterning) can provide an effective compromise between camouflage and communication and/or enhanced concealment via both static and motion camouflage. Here, we compared the frequency of these three pattern types and traced their evolutionary history using Bayesian comparative modeling in aquatic waterfowl (Anseriformes: 118 spp.), which typically escape predators by flight, and terrestrial game birds (Galliformes: 170 spp.), which mainly use a "sit and hide" strategy to avoid predation. Given these life histories, we predicted that selection would favor regular patterning in Anseriformes and irregular or bimodal patterning in Galliformes and that pattern function complexity should increase over the course of evolution. Regular patterns were predominant in Anseriformes whereas regular and bimodal patterns were most frequent in Galliformes, suggesting that patterns with multiple functions are broadly favored by selection over patterns with a single function in static camouflage. We found that the first patterns to evolve were either regular or bimodal in Anseriformes and either irregular or regular in Galliformes. In both orders, irregular patterns could evolve into regular patterns but not the reverse. Our hypothesis of increasing complexity in pattern camouflage function was supported in Galliformes but not in Anseriformes. These results reveal a trajectory of pattern evolution linked to increasing function complexity in Galliformes although not in Anseriformes, suggesting that both ecology and function complexity can have a profound influence on pattern evolution.

  16. The evolution of pattern camouflage strategies in waterfowl and game birds

    PubMed Central

    Marshall, Kate L A; Gluckman, Thanh-Lan

    2015-01-01

    Visual patterns are common in animals. A broad survey of the literature has revealed that different patterns have distinct functions. Irregular patterns (e.g., stipples) typically function in static camouflage, whereas regular patterns (e.g., stripes) have a dual function in both motion camouflage and communication. Moreover, irregular and regular patterns located on different body regions (“bimodal” patterning) can provide an effective compromise between camouflage and communication and/or enhanced concealment via both static and motion camouflage. Here, we compared the frequency of these three pattern types and traced their evolutionary history using Bayesian comparative modeling in aquatic waterfowl (Anseriformes: 118 spp.), which typically escape predators by flight, and terrestrial game birds (Galliformes: 170 spp.), which mainly use a “sit and hide” strategy to avoid predation. Given these life histories, we predicted that selection would favor regular patterning in Anseriformes and irregular or bimodal patterning in Galliformes and that pattern function complexity should increase over the course of evolution. Regular patterns were predominant in Anseriformes whereas regular and bimodal patterns were most frequent in Galliformes, suggesting that patterns with multiple functions are broadly favored by selection over patterns with a single function in static camouflage. We found that the first patterns to evolve were either regular or bimodal in Anseriformes and either irregular or regular in Galliformes. In both orders, irregular patterns could evolve into regular patterns but not the reverse. Our hypothesis of increasing complexity in pattern camouflage function was supported in Galliformes but not in Anseriformes. These results reveal a trajectory of pattern evolution linked to increasing function complexity in Galliformes although not in Anseriformes, suggesting that both ecology and function complexity can have a profound influence on pattern evolution. PMID:26045950

  17. Characterization of Window Functions for Regularization of Electrical Capacitance Tomography Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Jiang, Peng; Peng, Lihui; Xiao, Deyun

    2007-06-01

    This paper presents a regularization method that uses different window functions as regularizers for electrical capacitance tomography (ECT) image reconstruction. Image reconstruction for ECT is a typical ill-posed inverse problem. Because of the small singular values of the sensitivity matrix, the solution is sensitive to measurement noise. The proposed method uses the spectral filtering properties of different window functions to stabilize the solution by suppressing the noise in the measurements. The window functions, such as the Hanning window, the cosine window and so on, are modified for ECT image reconstruction. Simulations with respect to five typical permittivity distributions are carried out. The reconstructions are better, and some of the contours clearer, than the results from Tikhonov regularization. Numerical results show the feasibility of the image reconstruction algorithm using different window functions as regularization.
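
    A hedged sketch in the spirit of the method described above (the paper's exact window construction is not given here): the SVD spectrum of an ill-conditioned sensitivity matrix is filtered with Hanning-type factors, which stabilizes the solution compared with the plain pseudo-inverse.

```python
import numpy as np

rng = np.random.default_rng(5)
S = rng.standard_normal((60, 40)) @ np.diag(np.logspace(0, -8, 40)) @ rng.standard_normal((40, 40))
U, s, Vt = np.linalg.svd(S, full_matrices=False)

idx = np.arange(s.size)
g_true = Vt.T @ np.exp(-idx / 8.0)                # smooth "permittivity" profile
c = S @ g_true + 1e-4 * rng.standard_normal(60)   # noisy capacitance data

cut = 20
w = np.where(idx < cut, 0.5 * (1 + np.cos(np.pi * idx / cut)), 0.0)   # Hanning-type filter factors

g_naive = Vt.T @ ((U.T @ c) / s)                  # plain pseudo-inverse: noise is amplified
g_win = Vt.T @ (w * (U.T @ c) / s)                # window-filtered solution stays bounded
print(np.linalg.norm(g_naive - g_true), np.linalg.norm(g_win - g_true))
```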

  18. On the origin of non-exponential fluorescence decays in enzyme-ligand complex

    NASA Astrophysics Data System (ADS)

    Wlodarczyk, Jakub; Kierdaszuk, Borys

    2004-05-01

    Complex fluorescence decays have usually been analyzed with the aid of a multi-exponential model, but the interpretation of the individual exponential terms has not been adequately characterized. In such cases the intensity decays have also been analyzed in terms of a continuous lifetime distribution, as a consequence of an interaction of the fluorophore with its environment, conformational heterogeneity or their dynamical nature. We show that non-exponential fluorescence decay of enzyme-ligand complexes may result from time-dependent energy transport. The latter, in our opinion, may be accounted for by electron transport from the protein tyrosines to their neighboring residues. We introduce a time-dependent hopping rate of the form v(t) ~ (a + bt)^(-1). This in turn leads to a luminescence decay function of the form I(t) = I0 exp(-t/τ1)(1 + t/(γτ2))^(-γ). Such a decay function provides good fits to highly complex fluorescence decays. The power-like tail implies a time hierarchy in the energy migration process due to the hierarchical energy-level structure. Moreover, such a power-like term is a manifestation of the so-called Tsallis nonextensive statistics and is suitable for describing systems with long-range interactions, memory effects as well as fluctuations of the characteristic fluorescence lifetime. The proposed decay function was applied in the analysis of fluorescence decays of a tyrosine protein, i.e. the enzyme purine nucleoside phosphorylase from E. coli in a complex with formycin A (an inhibitor) and orthophosphate (a co-substrate).
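
    A small sketch of fitting the power-like decay law quoted above, I(t) = I0·exp(-t/τ1)(1 + t/(γτ2))^(-γ), to a synthetic decay; parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, i0, tau1, tau2, gamma):
    return i0 * np.exp(-t / tau1) * (1.0 + t / (gamma * tau2)) ** (-gamma)

t = np.linspace(0, 50, 400)                       # e.g. nanoseconds
rng = np.random.default_rng(6)
data = decay(t, 1.0, 20.0, 3.0, 1.5) + 0.005 * rng.standard_normal(t.size)

popt, _ = curve_fit(decay, t, data, p0=(1.0, 15.0, 2.0, 1.0), bounds=(0, np.inf))
print(dict(zip(("I0", "tau1", "tau2", "gamma"), popt)))
```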

  19. Identification of Nanoparticle Prototypes and Archetypes.

    PubMed

    Fernandez, Michael; Barnard, Amanda S

    2015-12-22

    High-throughput (HT) computational characterization of nanomaterials is poised to accelerate novel material breakthroughs. The number of possible nanomaterials is increasing exponentially along with their complexity, and so statistical and information technology will play a fundamental role in rationalizing nanomaterials HT data. We demonstrate that multivariate statistical analysis of heterogeneous ensembles can identify the truly significant nanoparticles and their most relevant properties. Virtual samples of diamond nanoparticles and graphene nanoflakes are characterized using clustering and archetypal analysis, where we find that saturated particles are defined by their geometry, while nonsaturated nanoparticles are defined by their carbon chemistry. At the convex hull of the nanostructure spaces, a combination of complex archetypes can efficiently describe a large number of members of the ensembles, whereas the regular shapes that are typically assumed to be representative can only describe a small set of the most regular morphologies. This approach provides a route toward the characterization of computationally intractable virtual nanomaterial spaces, which can aid nanomaterials discovery in the foreseen big-data scenario.

  20. On-top density functionals for the short-range dynamic correlation between electrons of opposite and parallel spin

    NASA Astrophysics Data System (ADS)

    Hollett, Joshua W.; Pegoretti, Nicholas

    2018-04-01

    Separate, one-parameter, on-top density functionals are derived for the short-range dynamic correlation between opposite-spin and parallel-spin electrons, in which the electron-electron cusp is represented by an exponential function. The combination of both functionals is referred to as the Opposite-spin exponential-cusp and Fermi-hole correction (OF) functional. The two parameters of the OF functional are set by fitting the ionization energies and electron affinities of the atoms He to Ar, predicted by ROHF in combination with the OF functional, to the experimental values. For ionization energies, the overall performance of ROHF-OF is better than completely renormalized coupled-cluster [CR-CC(2,3)] and better than, or as good as, conventional density functional methods. For electron affinities, the overall performance of ROHF-OF is less impressive. However, for both ionization energies and electron affinities of third-row atoms, the mean absolute error of ROHF-OF is only 3 kJ mol⁻¹.

  1. Deforming regular black holes

    NASA Astrophysics Data System (ADS)

    Neves, J. C. S.

    2017-06-01

    In this work, we have deformed regular black holes which possess a general mass term described by a function which generalizes the Bardeen and Hayward mass functions. By using linear constraints in the energy-momentum tensor to generate metrics, the solutions presented in this work are either regular or singular. That is, within this approach, it is possible to generate regular or singular black holes from regular or singular black holes. Moreover, contrary to the Bardeen and Hayward regular solutions, the deformed regular black holes may violate the weak energy condition despite the presence of the spherical symmetry. Some comments on accretion of deformed black holes in cosmological scenarios are made.

  2. Beyond the usual mapping functions in GPS, VLBI and Deep Space tracking.

    NASA Astrophysics Data System (ADS)

    Barriot, Jean-Pierre; Serafini, Jonathan; Sichoix, Lydie

    2014-05-01

    We describe here a new algorithm to model the water content of the atmosphere (including the ZWD) from GPS slant wet delays relative to a single receiver. We first make the assumption that the water vapor content is mainly governed by a scale height (exponential law), and secondly that the departures from this decaying exponential can be mapped as a set of low-degree 3D Zernike functions (w.r.t. space) and Tchebyshev polynomials (w.r.t. time). We compare this new algorithm with previous algorithms known as mapping functions in GPS, VLBI and Deep Space tracking and give an example with data acquired over a one-day time span at the Geodesy Observatory of Tahiti.

  3. On the Singularity Structure of WKB Solution of the Boosted Whittaker Equation: its Relevance to Resurgent Functions with Essential Singularities

    NASA Astrophysics Data System (ADS)

    Kamimoto, Shingo; Kawai, Takahiro; Koike, Tatsuya

    2016-12-01

    Inspired by the symbol calculus of linear differential operators of infinite order applied to the Borel transformed WKB solutions of simple-pole type equation [Kamimoto et al. (RIMS Kôkyûroku Bessatsu B 52:127-146, 2014)], which is summarized in Section 1, we introduce in Section 2 the space of simple resurgent functions depending on a parameter with an infra-exponential type growth order, and then we define the assigning operator A which acts on the space and produces resurgent functions with essential singularities. In Section 3, we apply the operator A to the Borel transforms of the Voros coefficient and its exponentiation for the Whittaker equation with a large parameter so that we may find the Borel transforms of the Voros coefficient and its exponentiation for the boosted Whittaker equation with a large parameter. In Section 4, we use these results to find the explicit form of the alien derivatives of the Borel transformed WKB solutions of the boosted Whittaker equation with a large parameter. The results in this paper manifest the importance of resurgent functions with essential singularities in developing the exact WKB analysis, the WKB analysis based on the resurgent function theory. It is also worth emphasizing that the concrete form of essential singularities we encounter is expressed by the linear differential operators of infinite order.

  4. Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function

    NASA Astrophysics Data System (ADS)

    Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.

    2017-06-01

    This paper presents a study of cancer patients after receiving treatment, using censored data and Bayesian estimation under the Linex loss function for a survival model assumed to follow an exponential distribution. Using a gamma prior, the likelihood function produces a gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL by means of the Linex approximation. Having obtained λ̂_BL, the estimators of the hazard function ĥ_BL and the survival function Ŝ_BL can be found. Finally, we compare the results of maximum likelihood estimation (MLE) and the Linex approximation to find the better method for this observation in terms of the smaller MSE. The results show that the MSEs of the hazard and survival estimates under MLE are 2.91728E-07 and 0.000309004, while under the Bayesian Linex approach they are 2.8727E-07 and 0.000304131, respectively. It is concluded that the Bayesian Linex estimator is better than MLE.
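
    A hedged sketch of the Bayes-Linex estimator for an exponential rate with a Gamma(α, β) prior in the rate parameterization (illustrative numbers, not the paper's data): the posterior is Gamma(α+n, β+T) and the Linex estimator has a closed form.

```python
import numpy as np

def linex_rate_estimate(n, total_time, alpha, beta, a):
    # Posterior is Gamma(alpha + n, beta + total_time) in the rate parameterization;
    # under Linex loss the Bayes estimator is -(1/a) * ln E[exp(-a*lambda) | data].
    A, B = alpha + n, beta + total_time
    return (A / a) * np.log(1.0 + a / B)

lam_linex = linex_rate_estimate(n=25, total_time=480.0, alpha=2.0, beta=30.0, a=0.5)
lam_mle = 25 / 480.0
print(lam_linex, lam_mle)          # survival estimate would be S(t) = exp(-lambda*t)
```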

  5. A non-Gaussian option pricing model based on Kaniadakis exponential deformation

    NASA Astrophysics Data System (ADS)

    Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara

    2017-09-01

    A way to make financial models effective is to let them represent the so-called “fat tails”, i.e., extreme changes in stock prices that are regarded as almost impossible under the standard Gaussian distribution. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes capable of capturing such real market phenomena.
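
    The κ-deformed exponential used above is commonly written exp_κ(x) = (√(1+κ²x²) + κx)^(1/κ); it reduces to the ordinary exponential as κ → 0 and has power-law tails for κ > 0, which is what produces the fat tails. A minimal sketch:

```python
import numpy as np

def kappa_exp(x, k):
    # exp_k(x) = (sqrt(1 + k^2 x^2) + k*x)**(1/k); ordinary exp(x) as k -> 0.
    if k == 0:
        return np.exp(x)
    return (np.sqrt(1.0 + (k * x) ** 2) + k * x) ** (1.0 / k)

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(kappa_exp(x, 0.3))           # noticeably heavier tails than the plain exponential
print(np.exp(x))
```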

  6. Echo Statistics of Aggregations of Scatterers in a Random Waveguide: Application to Biologic Sonar Clutter

    DTIC Science & Technology

    2012-09-01

    used in this paper to compare probability density functions: the Lilliefors test and the Kullback-Leibler distance. The Lilliefors test is a goodness ... of interest in this study are the Rayleigh distribution and the exponential distribution. The Lilliefors test is used to test goodness-of-fit for ... Lilliefors test for goodness of fit with an exponential distribution. These results suggest that,

  7. Adaptive regularization of the NL-means: application to image and video denoising.

    PubMed

    Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François

    2014-08-01

    Image denoising is a central problem in image processing and is often a necessary step prior to higher-level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image; they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This reduces the noise significantly while preserving most of the image content. While it performs well on flat areas and textures, it suffers from two opposite drawbacks: it might over-smooth low-contrasted areas or leave residual noise around edges and singular structures. Denoising can also be performed by total variation minimization (the Rudin, Osher and Fatemi model), which tends to restore regular images but is prone to over-smoothed textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective drawbacks by minimizing an adaptive total variation with a nonlocal data fidelity term. Moreover, this model adapts to different noise statistics, and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and adapt it to video denoising with 3D patches.
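
    A minimal sketch of the plain NL-means weighting that the paper regularizes (no total-variation term here, and the patch/search sizes are arbitrary assumptions): each pixel is replaced by an average of nearby pixels weighted by the similarity of their patches.

```python
import numpy as np

def nl_means_pixel(img, i, j, patch=3, search=7, h=0.3):
    # Weighted average of pixels (ii, jj) around (i, j), with weights
    # exp(-||patch_i - patch_j||^2 / h^2); boundary handling is omitted.
    p = patch // 2
    ref = img[i - p:i + p + 1, j - p:j + p + 1]
    num = den = 0.0
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            ii, jj = i + di, j + dj
            cand = img[ii - p:ii + p + 1, jj - p:jj + p + 1]
            w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
            num += w * img[ii, jj]
            den += w
    return num / den

rng = np.random.default_rng(7)
noisy = 0.5 + 0.05 * rng.standard_normal((41, 41))
print(nl_means_pixel(noisy, 20, 20))               # close to the true value 0.5
```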

  8. Dynamical analysis for a scalar-tensor model with kinetic and nonminimal couplings

    NASA Astrophysics Data System (ADS)

    Granda, L. N.; Jimenez, D. F.

    We study the autonomous system for a scalar-tensor model of dark energy with nonminimal coupling to curvature and nonminimal kinetic coupling to the Einstein tensor. The critical points describe important stable asymptotic scenarios including quintessence, phantom and de Sitter attractor solutions. Two functional forms for the coupling functions and the scalar potential were considered: power-law and exponential functions of the scalar field. For power-law couplings, the restrictions on stable quintessence and phantom solutions lead to asymptotic freedom regime for the gravitational interaction. For the exponential functions, the stable quintessence, phantom or de Sitter solutions allow asymptotic behaviors where the effective Newtonian coupling can reach either the asymptotic freedom regime or constant value. The phantom solutions could be realized without appealing to ghost degrees of freedom. Transient inflationary and radiation dominated phases can also be described.

  9. Stability in Cohen Grossberg-type bidirectional associative memory neural networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Cao, Jinde; Song, Qiankun

    2006-07-01

    In this paper, the exponential stability problem is investigated for a class of Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays. By using the analysis method, inequality techniques and the properties of an M-matrix, several novel sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point are derived. Moreover, the exponential convergence rate is estimated. The obtained results are less restrictive than those given in the earlier literature, and the requirements of boundedness and differentiability of the activation functions and of differentiability of the time-varying delays are removed. Two examples with their simulations are given to show the effectiveness of the obtained results.

  10. Global exponential stability for switched memristive neural networks with time-varying delays.

    PubMed

    Xin, Youming; Li, Yuxia; Cheng, Zunshui; Huang, Xia

    2016-08-01

    This paper considers the problem of exponential stability for switched memristive neural networks (MNNs) with time-varying delays. Different from most of the existing papers, we model a memristor as a continuous system, and view switched MNNs as switched neural networks with uncertain time-varying parameters. Based on average dwell time technique, mode-dependent average dwell time technique and multiple Lyapunov-Krasovskii functional approach, two conditions are derived to design the switching signal and guarantee the exponential stability of the considered neural networks, which are delay-dependent and formulated by linear matrix inequalities (LMIs). Finally, the effectiveness of the theoretical results is demonstrated by two numerical examples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. A new generalized exponential rational function method to find exact special solutions for the resonance nonlinear Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Ghanbari, Behzad; Inc, Mustafa

    2018-04-01

    The present paper suggests a novel technique for acquiring exact solutions of nonlinear partial differential equations. The main idea of the method is to generalize the exponential rational function method. In order to examine the ability of the method, we consider the resonant nonlinear Schrödinger equation (R-NLSE). Many variants of exact soliton solutions for the equation are derived by the proposed method. Physical interpretations of some obtained solutions are also included. One can easily conclude that the new proposed method is very efficient and finds the exact solutions of the equation in a relatively easy way.

  12. New class of control laws for robotic manipulators. I - Nonadaptive case. II - Adaptive case

    NASA Technical Reports Server (NTRS)

    Wen, John T.; Bayard, David S.

    1988-01-01

    A new class of exponentially stabilizing control laws for joint level control of robot arms is discussed. Closed-loop exponential stability has been demonstrated for both the set point and tracking control problems by a slight modification of the energy Lyapunov function and the use of a lemma which handles third-order terms in the Lyapunov function derivatives. In the second part, these control laws are adapted in a simple fashion to achieve asymptotically stable adaptive control. The analysis addresses the nonlinear dynamics directly without approximation, linearization, or ad hoc assumptions, and uses a parameterization based on physical (time-invariant) quantities.

  13. Simple robust control laws for robot manipulators. Part 1: Non-adaptive case

    NASA Technical Reports Server (NTRS)

    Wen, J. T.; Bayard, D. S.

    1987-01-01

    A new class of exponentially stabilizing control laws for joint level control of robot arms is introduced. It has been recently recognized that the nonlinear dynamics associated with robotic manipulators have certain inherent passivity properties. More specifically, the derivation of the robotic dynamic equations from the Hamilton's principle gives rise to natural Lyapunov functions for control design based on total energy considerations. Through a slight modification of the energy Lyapunov function and the use of a convenient lemma to handle third order terms in the Lyapunov function derivatives, closed loop exponential stability for both the set point and tracking control problem is demonstrated. The exponential convergence property also leads to robustness with respect to frictions, bounded modeling errors and instrument noise. In one new design, the nonlinear terms are decoupled from real-time measurements which completely removes the requirement for on-line computation of nonlinear terms in the controller implementation. In general, the new class of control laws offers alternatives to the more conventional computed torque method, providing tradeoffs between robustness, computation and convergence properties. Furthermore, these control laws have the unique feature that they can be adapted in a very simple fashion to achieve asymptotically stable adaptive control.

  14. Water movement through plant roots - exact solutions of the water flow equation in roots with linear or exponential piecewise hydraulic properties

    NASA Astrophysics Data System (ADS)

    Meunier, Félicien; Couvreur, Valentin; Draye, Xavier; Zarebanadkouki, Mohsen; Vanderborght, Jan; Javaux, Mathieu

    2017-12-01

    In 1978, Landsberg and Fowkes presented a solution of the water flow equation inside a root with uniform hydraulic properties. These properties are root radial conductivity and axial conductance, which control, respectively, the radial water flow between the root surface and xylem and the axial flow within the xylem. From the solution for the xylem water potential, functions that describe the radial and axial flow along the root axis were derived. These solutions can also be used to derive root macroscopic parameters that are potential input parameters of hydrological and crop models. In this paper, novel analytical solutions of the water flow equation are developed for roots whose hydraulic properties vary along their axis, which is the case for most plants. We derived solutions for single roots with linear or exponential variations of hydraulic properties with distance to the root tip. These solutions were subsequently combined to construct single roots with complex hydraulic property profiles. The analytical solutions allow one to verify numerical solutions and to generalize the hydric behaviour in terms of the main influencing parameters of the solutions. The resulting flow distributions in heterogeneous roots differed from those in uniform roots, and simulations led to more regular, less abrupt variations of xylem suction or radial flux along root axes. The model could successfully be applied to maize effective root conductance measurements to derive radial and axial hydraulic properties. We also show that very contrasting root water uptake patterns arise when using either uniform or heterogeneous root hydraulic properties in a soil-root model. The optimal root radius that maximizes water uptake under a carbon cost constraint was also studied. The optimal radius was shown to be highly dependent on the root hydraulic properties and close to observed properties in maize roots. We finally used the obtained functions for evaluating the impact of root maturation versus root growth on water uptake. Very diverse uptake strategies arise from the analysis. These solutions open new avenues for investigating optimal genotype-environment-management interactions by optimization, for example, of plant-scale macroscopic hydraulic parameters used in ecohydrological models.

  15. Studies of fluid instabilities in flows of lava and debris

    NASA Technical Reports Server (NTRS)

    Fink, Jonathan H.

    1987-01-01

    At least two instabilities have been identified and utilized in lava flow studies: surface folding and gravity instability. Both lead to the development of regularly spaced structures on the surfaces of lava flows. The geometry of surface folds has been used to estimate the rheology of lava flows on other planets. One investigation's analysis assumed that lava flows have a temperature-dependent Newtonian rheology, and that the lava's viscosity decreased exponentially inward from the upper surface. The author reviews studies by other investigators on the analysis of surface folding, the analysis of Taylor instability in lava flows, and the effect of surface folding on debris flows.

  16. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    NASA Astrophysics Data System (ADS)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    The earthquake recurrence interval is one of the important ingredients of probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are well-established probability models for recurrence interval estimation. However, they have certain shortcomings. Thus, it is imperative to search for alternative, more sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
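
    A hedged sketch of the conditional-probability calculation described above, using the three-parameter exponentiated (generalized) exponential CDF F(t) = (1 - exp(-(t-γ)/β))^α; the parameter values are placeholders, not the fitted Himalayan values.

```python
import numpy as np

def gen_exp_cdf(t, shape, loc, scale):
    z = np.clip((np.asarray(t, dtype=float) - loc) / scale, 0.0, None)
    return (1.0 - np.exp(-z)) ** shape

def conditional_prob(elapsed, window, shape, loc, scale):
    # P(event in (elapsed, elapsed + window] | no event up to 'elapsed')
    F = lambda t: gen_exp_cdf(t, shape, loc, scale)
    return (F(elapsed + window) - F(elapsed)) / (1.0 - F(elapsed))

# e.g. probability of an M >= 7.0 event within 10 years, given 17 years elapsed.
print(conditional_prob(17.0, 10.0, shape=1.8, loc=0.0, scale=8.0))
```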

  17. Long period pseudo random number sequence generator

    NASA Technical Reports Server (NTRS)

    Wang, Charles C. (Inventor)

    1989-01-01

    A circuit for generating a sequence of pseudo-random numbers, (A_K). There is an exponentiator in GF(2^m) for the normal basis representation of elements in a finite field GF(2^m), each represented by m binary digits, having two inputs and an output from which the sequence (A_K) of pseudo-random numbers is taken. One of the two inputs is connected to receive the outputs (E_K) of a maximal-length shift register of n stages. There is a switch having a pair of inputs and an output. The switch output is connected to the other of the two inputs of the exponentiator. One of the switch inputs is connected for initially receiving a primitive element (A_0) in GF(2^m). Finally, there is a delay circuit having an input and an output. The delay circuit output is connected to the other of the switch inputs and the delay circuit input is connected to the output of the exponentiator. Thus, after the exponentiator initially receives the primitive element (A_0) in GF(2^m) through the switch, the switch can be switched to cause the exponentiator to receive as its input the delayed output A_(K-1) from the exponentiator, thereby generating (A_K) continuously at the output of the exponentiator. The exponentiator in GF(2^m) is novel and comprises a cyclic-shift circuit, a Massey-Omura multiplier, and a control logic circuit, all operably connected together to perform the function U_i = A^(2^i) (for n_i = 1) or 1 (for n_i = 0).

  18. On exponentially suppressed corrections to BMPV black hole entropy

    NASA Astrophysics Data System (ADS)

    Lal, Shailesh; Narayan, Prithvi

    2018-05-01

    The microscopic formula for the degeneracy of BMPV black hole microstates contains a series of exponentially suppressed corrections to the leading Bekenstein-Hawking expression. We identify saddle points of the quantum entropy function for the BMPV black hole which are natural counterparts to these corrections, and discuss the matching of the leading and next-to-leading terms from the microscopic and macroscopic sides in a limit where the black hole charges are large.

  19. Improved result on stability analysis of discrete stochastic neural networks with time delay

    NASA Astrophysics Data System (ADS)

    Wu, Zhengguang; Su, Hongye; Chu, Jian; Zhou, Wuneng

    2009-04-01

    This Letter investigates the problem of exponential stability for discrete stochastic time-delay neural networks. By defining a novel Lyapunov functional, an improved delay-dependent exponential stability criterion is established in terms of the linear matrix inequality (LMI) approach. Meanwhile, the computational complexity of the newly established stability condition is reduced because fewer variables are involved. A numerical example is given to illustrate the effectiveness and the benefits of the proposed method.

  20. Sodium 22+ washout from cultured rat cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kino, M.; Nakamura, A.; Hopp, L.

    1986-10-01

    The washout of Na+ isotopes from tissues and cells is quite complex and not well defined. To further gain insight into this process, we have studied 22Na+ washout from cultured Wistar rat skin fibroblasts and vascular smooth muscle cells (VSMCs). In these preparations, 22Na+ washout is described by a general three-exponential function. The exponential factor of the fastest component (k1) and the initial exchange rate constant (kie) of cultured fibroblasts decrease in magnitude in response to incubation in K+-deficient medium or in the presence of ouabain and increase in magnitude when the cells are incubated in a Ca++-deficient medium. As the magnitude of the kie declines (in the presence of ouabain) to the level of the exponential factor of the middle component (k2), 22Na+ washout is adequately described by a two-exponential function. When the kie is further diminished (in the presence of both ouabain and phloretin) to the range of the exponential factor of the slowest component (k3), the washout of 22Na+ is apparently monoexponential. Calculations of the cellular Na+ concentrations, based on the 22Na+ activity in the cells at the initiation of the washout experiments and the medium specific activity, agree with atomic absorption spectrometry measurements of the cellular concentration of this ion. Thus, all three components of 22Na+ washout from cultured rat cells are of cellular origin. Using the exponential parameters, compartmental analyses of two models (in parallel and in series) with three cellular Na+ pools were performed. The results indicate that, independent of the model chosen, the relative size of the largest Na+ pool is 92-93% in fibroblasts and approximately 96% in VSMCs. This pool is most likely to represent the cytosol.
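
    As an illustration of how such multi-exponential washout curves are typically analysed, the short Python sketch below (NumPy and SciPy) fits a three-exponential function and recovers the exponential factors k1-k3; the data and rate constants are synthetic, not the values measured in the study.

    import numpy as np
    from scipy.optimize import curve_fit

    def washout(t, a1, k1, a2, k2, a3, k3):
        """General three-exponential washout: A1*exp(-k1*t) + A2*exp(-k2*t) + A3*exp(-k3*t)."""
        return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) + a3 * np.exp(-k3 * t)

    rng = np.random.default_rng(0)
    t = np.linspace(0, 60, 121)                        # minutes (synthetic)
    y = washout(t, 0.5, 0.8, 0.3, 0.08, 0.2, 0.005) + rng.normal(0, 0.003, t.size)

    p0 = (0.4, 1.0, 0.4, 0.1, 0.2, 0.01)               # rough initial guesses
    popt, _ = curve_fit(washout, t, y, p0=p0, maxfev=20000)
    print("fitted exponential factors k1, k2, k3:", popt[1], popt[3], popt[5])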

  1. Importance sampling large deviations in nonequilibrium steady states. I.

    PubMed

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  2. Importance sampling large deviations in nonequilibrium steady states. I

    NASA Astrophysics Data System (ADS)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  3. Anisotropic k-essence cosmologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chimento, Luis P.; Forte, Monica

    We investigate a Bianchi type-I cosmology with k-essence and find the set of models which dissipate the initial anisotropy. There are cosmological models with extended tachyon fields and k-essence having a constant barotropic index. We obtain the conditions leading to a regular bounce of the average geometry and the residual anisotropy on the bounce. For constant potential, we develop purely kinetic k-essence models which are dust dominated in their early stages, dissipate the initial anisotropy, and end in a stable de Sitter accelerated expansion scenario. We show that linear k-field and polynomial kinetic function models evolve asymptotically to Friedmann-Robertson-Walker cosmologies. The linear case is compatible with an asymptotic potential interpolating between V_l ∝ φ^(-γ_l), in the shear dominated regime, and V_l ∝ φ^(-2) at late time. In the polynomial case, the general solution contains cosmological models with an oscillatory average geometry. For linear k-essence, we find the general solution in the Bianchi type-I cosmology when the k field is driven by an inverse square potential. This model shares the same geometry as a quintessence field driven by an exponential potential.

  4. Traffic sign recognition based on deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Yin, Shi-hao; Deng, Ji-cai; Zhang, Da-wei; Du, Jing-yuan

    2017-11-01

    Traffic sign recognition (TSR) is an important component of automated driving systems. It is a rather challenging task to design a high-performance classifier for the TSR system. In this paper, we propose a new method for the TSR system based on a deep convolutional neural network. In order to enhance the expressive power of the network, a novel structure (dubbed block-layer below) which combines network-in-network and residual connection is designed. Our network has 10 layers with parameters (a block-layer counted as a single layer): the first seven are alternate convolutional layers and block-layers, and the remaining three are fully-connected layers. We train our TSR network on the German traffic sign recognition benchmark (GTSRB) dataset. To reduce overfitting, we perform data augmentation on the training images and employ a regularization method named "dropout". The activation function we employ in our network is the scaled exponential linear unit (SELU), which can induce self-normalizing properties. To speed up the training, we use an efficient GPU to accelerate the convolutional operation. On the test dataset of GTSRB, we achieve an accuracy rate of 99.67%, exceeding the state-of-the-art results.
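
    A rough PyTorch sketch of a "block-layer" in the spirit of the structure described above, combining a network-in-network style 1x1 convolution, a residual connection and SELU activations, is shown below. Channel sizes, kernel sizes and the layer count are placeholders, not the authors' exact configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BlockLayer(nn.Module):
        """Residual block with a 3x3 conv, a 1x1 (network-in-network) conv and SELU."""
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.SELU(),
                nn.Conv2d(channels, channels, kernel_size=1),
                nn.SELU(),
            )

        def forward(self, x):
            return F.selu(x + self.body(x))    # residual connection

    x = torch.randn(4, 32, 48, 48)             # dummy batch of feature maps
    print(BlockLayer(32)(x).shape)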

  5. Bi-periodicity evoked by periodic external inputs in delayed Cohen-Grossberg-type bidirectional associative memory networks

    NASA Astrophysics Data System (ADS)

    Cao, Jinde; Wang, Yanyan

    2010-05-01

    In this paper, the bi-periodicity issue is discussed for Cohen-Grossberg-type (CG-type) bidirectional associative memory (BAM) neural networks (NNs) with time-varying delays and standard activation functions. It is shown that the model considered in this paper has two periodic orbits located in saturation regions and they are locally exponentially stable. Meanwhile, some conditions are derived to ensure that, in any designated region, the model has a locally exponentially stable or globally exponentially attractive periodic orbit located in it. As a special case of bi-periodicity, some results are also presented for the system with constant external inputs. Finally, four examples are given to illustrate the effectiveness of the obtained results.

  6. Global exponential stability and lag synchronization for delayed memristive fuzzy Cohen-Grossberg BAM neural networks with impulses.

    PubMed

    Yang, Wengui; Yu, Wenwu; Cao, Jinde; Alsaadi, Fuad E; Hayat, Tasawar

    2018-02-01

    This paper investigates the stability and lag synchronization for memristor-based fuzzy Cohen-Grossberg bidirectional associative memory (BAM) neural networks with mixed delays (asynchronous time delays and continuously distributed delays) and impulses. By applying the inequality analysis technique, homeomorphism theory and some suitable Lyapunov-Krasovskii functionals, some new sufficient conditions for the uniqueness and global exponential stability of equilibrium point are established. Furthermore, we obtain several sufficient criteria concerning globally exponential lag synchronization for the proposed system based on the framework of Filippov solution, differential inclusion theory and control theory. In addition, some examples with numerical simulations are given to illustrate the feasibility and validity of obtained results.

  7. Metal-induced gap states in ferroelectric capacitors and its relationship with complex band structures

    NASA Astrophysics Data System (ADS)

    Junquera, Javier; Aguado-Puente, Pablo

    2013-03-01

    At metal-insulator interfaces, the metallic wave functions with an energy eigenvalue within the band gap decay exponentially inside the dielectric (metal-induced gap states, MIGS). These MIGS can actually be regarded as Bloch functions with an associated complex wave vector. Usually only real values of the wave vectors are discussed in textbooks, since infinite periodicity is assumed and, in that situation, wave functions growing exponentially in any direction would not be physically valid. However, localized wave functions with an exponential decay are indeed perfectly valid solutions of the Schrödinger equation in the presence of defects, surfaces or interfaces. For this reason, properties of MIGS have been typically discussed in terms of the complex band structure of bulk materials. The probable dependence on the interface particulars has rarely been taken into account explicitly due to the difficulty of including it in the models or simulations. We aim to characterize from first-principles simulations the MIGS in realistic ferroelectric capacitors and their connection with the complex band structure of the ferroelectric material. We emphasize the influence of the real interface beyond the complex band structure of bulk materials. Financial support provided by MICINN Grant FIS2009-12721-C04-02, and by the European Union Grant No. CP-FP 228989-2 ``OxIDes''. Computer resources provided by the RES.

  8. An exponential model equation for thiamin loss in irradiated ground pork as a function of dose and temperature of irradiation

    NASA Astrophysics Data System (ADS)

    Fox, J. B.; Thayer, D. W.; Phillips, J. G.

    The effect of low dose γ-irradiation on the thiamin content of ground pork was studied in the range of 0-14 kGy at 2°C and at radiation doses from 0.5 to 7 kGy at temperatures of -20, -10, 0, 10 and 20°C. The detailed study at 2°C showed that loss of thiamin was exponential down to 0 kGy. An exponential expression was derived for the effect of radiation dose and temperature of irradiation on thiamin loss, and compared with a previously derived general linear expression. Both models were accurate depictions of the data, but the exponential expression showed a significant decrease in the rate of loss between 0 and -10°C. This is the range over which water in meat freezes, the decrease being due to the immobilization of reactive radiolytic products of water in ice crystals.

  9. In vivo chlorine and sodium MRI of rat brain at 21.1 T.

    PubMed

    Schepkin, Victor D; Elumalai, Malathy; Kitchen, Jason A; Qian, Chunqi; Gor'kov, Peter L; Brey, William W

    2014-02-01

    MR imaging of low-gamma nuclei at the ultrahigh magnetic field of 21.1 T provides a new opportunity for understanding a variety of biological processes. Among these, chlorine and sodium are attracting attention for their involvement in brain function and cancer development. MRI of (35)Cl and (23)Na were performed and relaxation times were measured in vivo in normal rat (n = 3) and in rat with glioma (n = 3) at 21.1 T. The concentrations of both nuclei were evaluated using the center-out back-projection method. The T1 relaxation curve of chlorine in normal rat head was fitted by a bi-exponential function (T1a = 4.8 ms (0.7), T1b = 24.4 ± 7 ms (0.3)) and compared with sodium (T1 = 41.4 ms). Free induction decays (FID) of chlorine and sodium in vivo were bi-exponential with similar rapidly decaying components of T2a* = 0.4 ms and T2a* = 0.53 ms, respectively. Effects of small acquisition matrix and bi-exponential FIDs were assessed for quantification of chlorine (33.2 mM) and sodium (44.4 mM) in rat brain. The study modeled a dramatic effect of the bi-exponential decay on MRI results. The revealed increased chlorine concentration in glioma (~1.5 times) relative to a normal brain correlates with the hypothesis asserting the importance of chlorine for tumor progression.

  10. The effect of zealots on the rate of consensus achievement in complex networks

    NASA Astrophysics Data System (ADS)

    Kashisaz, Hadi; Hosseini, S. Samira; Darooneh, Amir H.

    2014-05-01

    In this study, we investigate the role of zealots in the outcome of the voting process on both scale-free (SF) and Watts-Strogatz (WS) networks. We observe that inflexible individuals are very effective in consensus achievement and also in the rate of the ordering process in complex networks. Zealots make the magnetization of the system vary exponentially with time. We find that on SF networks, increasing the zealots' population Z exponentially increases the rate of consensus achievement. The time needed for the system to reach a desired magnetization shows a power-law dependence on Z. Likewise, the decay time of the order parameter shows a power-law dependence on Z. We also investigate the role of the zealots' degree in the rate of the ordering process and, finally, we analyze the effect of the network's randomness on the efficiency of zealots. Moving from a regular to a random network, the re-wiring probability P increases. We show that with increasing P, the efficiency of zealots for reducing the consensus achievement time increases. The rate of consensus is compared with the rate of ordering for different re-wiring probabilities of WS networks.

  11. Hydrostatic equilibrium of stars without electroneutrality constraint

    NASA Astrophysics Data System (ADS)

    Krivoruchenko, M. I.; Nadyozhin, D. K.; Yudin, A. V.

    2018-04-01

    The general solution of hydrostatic equilibrium equations for a two-component fluid of ions and electrons without a local electroneutrality constraint is found in the framework of Newtonian gravity theory. In agreement with the Poincaré theorem on analyticity and in the context of Dyson's argument, the general solution is demonstrated to possess a fixed (essential) singularity in the gravitational constant G at G =0 . The regular component of the general solution can be determined by perturbation theory in G starting from a locally neutral solution. The nonperturbative component obtained using the method of Wentzel, Kramers and Brillouin is exponentially small in the inner layers of the star and grows rapidly in the outward direction. Near the surface of the star, both components are comparable in magnitude, and their nonlinear interplay determines the properties of an electro- or ionosphere. The stellar charge varies within the limits of -0.1 to 150 C per solar mass. The properties of electro- and ionospheres are exponentially sensitive to variations of the fluid densities in the central regions of the star. The general solutions of two exactly solvable stellar models without a local electroneutrality constraint are also presented.

  12. Introducing correlations into carrier transport simulations of disordered materials through seeded nucleation: impact on density of states, carrier mobility, and carrier statistics

    NASA Astrophysics Data System (ADS)

    Brown, J. S.; Shaheen, S. E.

    2018-04-01

    Disorder in organic semiconductors has made it challenging to achieve performance gains; this is a result of the many competing and often nuanced mechanisms effecting charge transport. In this article, we attempt to illuminate one of these mechanisms in the hopes of aiding experimentalists in exceeding current performance thresholds. Using a heuristic exponential function, energetic correlation has been added to the Gaussian disorder model (GDM). The new model is grounded in the concept that energetic correlations can arise in materials without strong dipoles or dopants, but may be a result of an incomplete crystal formation process. The proposed correlation has been used to explain the exponential tail states often observed in these materials; it is also better able to capture the carrier mobility field dependence, commonly known as the Poole-Frenkel dependence, when compared to the GDM. Investigation of simulated current transients shows that the exponential tail states do not necessitate Montroll and Scher fits. Montroll and Scher fits occur in the form of two distinct power law curves that share a common constant in their exponent; they are clearly observed as linear lines when the current transient is plotted using a log-log scale. Typically, these fits have been found appropriate for describing amorphous silicon and other disordered materials which display exponential tail states. Furthermore, we observe the proposed correlation function leads to domains of energetically similar sites separated by boundaries where the site energies exhibit stochastic deviation. These boundary sites are found to be the source of the extended exponential tail states, and are responsible for high charge visitation frequency, which may be associated with the molecular turnover number and ultimately the material stability.
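
    One generic way to realize the kind of exponentially correlated energetic disorder discussed above (not necessarily the authors' exact heuristic) is to draw Gaussian site energies whose covariance decays as sigma^2 * exp(-|i - j| / ell). The Python sketch below does this for a 1D chain via a Cholesky factor; all parameter values are assumptions for illustration.

    import numpy as np

    def correlated_energies(n_sites=500, sigma=0.1, ell=5.0, seed=0):
        """Gaussian site energies with <E_i E_j> = sigma^2 * exp(-|i - j| / ell)."""
        idx = np.arange(n_sites)
        cov = sigma**2 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / ell)
        L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_sites))  # small jitter for stability
        return L @ np.random.default_rng(seed).standard_normal(n_sites)

    E = correlated_energies()
    # sample standard deviation (~sigma) and lag-1 correlation (~exp(-1/ell))
    print(E.std(), np.corrcoef(E[:-1], E[1:])[0, 1])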

  13. Introducing correlations into carrier transport simulations of disordered materials through seeded nucleation: impact on density of states, carrier mobility, and carrier statistics.

    PubMed

    Brown, J S; Shaheen, S E

    2018-04-04

    Disorder in organic semiconductors has made it challenging to achieve performance gains; this is a result of the many competing and often nuanced mechanisms effecting charge transport. In this article, we attempt to illuminate one of these mechanisms in the hopes of aiding experimentalists in exceeding current performance thresholds. Using a heuristic exponential function, energetic correlation has been added to the Gaussian disorder model (GDM). The new model is grounded in the concept that energetic correlations can arise in materials without strong dipoles or dopants, but may be a result of an incomplete crystal formation process. The proposed correlation has been used to explain the exponential tail states often observed in these materials; it is also better able to capture the carrier mobility field dependence, commonly known as the Poole-Frenkel dependence, when compared to the GDM. Investigation of simulated current transients shows that the exponential tail states do not necessitate Montroll and Scher fits. Montroll and Scher fits occur in the form of two distinct power law curves that share a common constant in their exponent; they are clearly observed as linear lines when the current transient is plotted using a log-log scale. Typically, these fits have been found appropriate for describing amorphous silicon and other disordered materials which display exponential tail states. Furthermore, we observe the proposed correlation function leads to domains of energetically similar sites separated by boundaries where the site energies exhibit stochastic deviation. These boundary sites are found to be the source of the extended exponential tail states, and are responsible for high charge visitation frequency, which may be associated with the molecular turnover number and ultimately the material stability.

  14. Unfolding of Ubiquitin Studied by Picosecond Time-Resolved Fluorescence of the Tyrosine Residue

    PubMed Central

    Noronha, Melinda; Lima, João C.; Bastos, Margarida; Santos, Helena; Maçanita, António L.

    2004-01-01

    The photophysics of the single tyrosine in bovine ubiquitin (UBQ) was studied by picosecond time-resolved fluorescence spectroscopy, as a function of pH and along thermal and chemical unfolding, with the following results: First, at room temperature (25°C) and below pH 1.5, native UBQ shows single-exponential decays. From pH 2 to 7, triple-exponential decays were observed and the three decay times were attributed to the presence of tyrosine, a tyrosine-carboxylate hydrogen-bonded complex, and excited-state tyrosinate. Second, at pH 1.5, the water-exposed tyrosine of either thermally or chemically unfolded UBQ decays as a sum of two exponentials. The double-exponential decays were interpreted and analyzed in terms of excited-state intramolecular electron transfer from the phenol to the amide moiety, occurring in one of the three rotamers of tyrosine in UBQ. The values of the rate constants indicate the presence of different unfolded states and an increase in the mobility of the tyrosine residue during unfolding. Finally, from the pre-exponential coefficients of the fluorescence decays, the unfolding equilibrium constants (KU) were calculated, as a function of temperature or denaturant concentration. Despite the presence of different unfolded states, both thermal and chemical unfolding data of UBQ could be fitted to a two-state model. The thermodynamic parameters Tm = 54.6°C, ΔHTm = 56.5 kcal/mol, and ΔCp = 890 cal/(mol·K) were determined from the unfolding equilibrium constants calculated accordingly, and compared to values obtained by differential scanning calorimetry also under the assumption of a two-state transition, Tm = 57.0°C, ΔHm = 51.4 kcal/mol, and ΔCp = 730 cal/(mol·K). PMID:15454455

  15. Multidimensional Extension of the Generalized Chowla-Selberg Formula

    NASA Astrophysics Data System (ADS)

    Elizalde, E.

    After recalling the precise existence conditions of the zeta function of a pseudodifferential operator, and the concept of reflection formula, an exponentially convergent expression is obtained for the analytic continuation of a multidimensional inhomogeneous Epstein-type zeta function of the general form Σ_{n ∈ Z^p} (n·A·n/2 + b·n + q)^(-s), with A the p×p matrix of a quadratic form, b a p vector and q a constant. It is valid on the whole complex s-plane, is exponentially convergent and provides the residua at the poles explicitly. It reduces to the famous formula of Chowla and Selberg in the particular case p = 2, b = 0, q = 0. Some variations of the formula and physical applications are considered.

  16. A note on the accuracy of spectral method applied to nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Wong, Peter S.

    1994-01-01

    Fourier spectral method can achieve exponential accuracy both on the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.

  17. An Empirical Assessment of the Form of Utility Functions

    ERIC Educational Resources Information Center

    Kirby, Kris N.

    2011-01-01

    Utility functions, which relate subjective value to physical attributes of experience, are fundamental to most decision theories. Seven experiments were conducted to test predictions of the most widely assumed mathematical forms of utility (power, log, and negative exponential), and a function proposed by Rachlin (1992). For pairs of gambles for…

  18. Analytic solutions to modelling exponential and harmonic functions using Chebyshev polynomials: fitting frequency-domain lifetime images with photobleaching.

    PubMed

    Malachowski, George C; Clegg, Robert M; Redford, Glen I

    2007-12-01

    A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.

  19. On the Time Scale of Nocturnal Boundary Layer Cooling in Valleys and Basins and over Plains

    NASA Astrophysics Data System (ADS)

    de Wekker, Stephan F. J.; Whiteman, C. David

    2006-06-01

    Sequences of vertical temperature soundings over flat plains and in a variety of valleys and basins of different sizes and shapes were used to determine cooling-time-scale characteristics in the nocturnal stable boundary layer under clear, undisturbed weather conditions. An exponential function predicts the cumulative boundary layer cooling well. The fitting parameter or time constant in the exponential function characterizes the cooling of the valley atmosphere and is equal to the time required for the cumulative cooling to attain 63.2% of its total nighttime value. The exponential fit finds time constants varying between 3 and 8 h. Calculated time constants are smallest in basins, are largest over plains, and are intermediate in valleys. Time constants were also calculated from air temperature measurements made at various heights on the sidewalls of a small basin. The variation with height of the time constant exhibited a characteristic parabolic shape in which the smallest time constants occurred near the basin floor and on the upper sidewalls of the basin where cooling was governed by cold-air drainage and radiative heat loss, respectively.
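
    A small sketch of the exponential fit described above follows: cumulative cooling is modelled as C(t) = C_tot * (1 - exp(-t / tau)), where the time constant tau is the time needed to reach 63.2% of the total nighttime cooling. The data below are synthetic and the values are illustrative only.

    import numpy as np
    from scipy.optimize import curve_fit

    def cumulative_cooling(t, c_tot, tau):
        """Exponential approach to the total nighttime cooling."""
        return c_tot * (1.0 - np.exp(-t / tau))

    t_hours = np.linspace(0, 12, 25)
    obs = cumulative_cooling(t_hours, 9.0, 5.0) \
        + np.random.default_rng(1).normal(0, 0.1, t_hours.size)

    (c_tot, tau), _ = curve_fit(cumulative_cooling, t_hours, obs, p0=(8.0, 4.0))
    print(f"C_tot = {c_tot:.1f} deg, tau = {tau:.2f} h (63.2% of the cooling is reached at t = tau)")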

  20. A non-Boltzmannian behavior of the energy distribution for quasi-stationary regimes of the Fermi–Pasta–Ulam β system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leo, Mario, E-mail: mario.leo@le.infn.it; Leo, Rosario Antonio, E-mail: leora@le.infn.it; Tempesta, Piergiulio, E-mail: p.tempesta@fis.ucm.es

    2013-06-15

    In a recent paper [M. Leo, R.A. Leo, P. Tempesta, C. Tsallis, Phys. Rev. E 85 (2012) 031149], the existence of quasi-stationary states for the Fermi–Pasta–Ulam β system has been shown numerically, by analyzing the stability properties of the N/4-mode exact nonlinear solution. Here we study the energy distribution of the modes N/4, N/3 and N/2, when they are unstable, as a function of N and of the initial excitation energy. We observe that the classical Boltzmann weight is replaced by a different weight, expressed by a q-exponential function. -- Highlights: ► New statistical properties of the Fermi–Pasta–Ulam β system are found. ► The energy distributions of specific observables are studied: a deviation from the standard Boltzmann behavior is found. ► A q-exponential weight should be used instead. ► The classical exponential weight is restored in the large particle limit (mesoscopic nature of the phenomenon).

  1. Estimating piecewise exponential frailty model with changing prior for baseline hazard function

    NASA Astrophysics Data System (ADS)

    Thamrin, Sri Astuti; Lawi, Armin

    2016-02-01

    Piecewise exponential models provide a very flexible framework for modelling univariate survival data. They can be used to estimate the effects of different covariates on survival. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of a parametric baseline hazard. In the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. However, it usually does not explain all such variables which are known or measurable, and these variables become interesting to consider. This unknown and unobservable risk factor of the hazard function is often termed the individual's heterogeneity or frailty. This paper analyses the effects of unobserved population heterogeneity in patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov Chain Monte Carlo method to compute the Bayesian estimator on kidney infection data. The results obtained show that sex and frailty are substantially associated with survival in this study and that the models are quite sensitive to the choice between the two priors.
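
    To make the baseline model concrete, the sketch below evaluates a piecewise exponential (piecewise constant hazard) survival function in Python; breakpoints and hazard values are arbitrary examples, not estimates from the kidney infection data, and the frailty term and Bayesian estimation are not reproduced.

    import numpy as np

    cuts = np.array([0.0, 10.0, 30.0, 60.0])       # interval boundaries (e.g. days)
    lam  = np.array([0.02, 0.01, 0.005, 0.002])    # constant hazard within each interval

    def cumulative_hazard(t):
        """Integral of the piecewise constant hazard from 0 to t."""
        edges = np.append(cuts, np.inf)
        widths = np.clip(t - edges[:-1], 0.0, edges[1:] - edges[:-1])
        return np.sum(lam * widths)

    def survival(t):
        """S(t) = exp(-H(t)) for the piecewise exponential model."""
        return np.exp(-cumulative_hazard(t))

    print(survival(25.0), survival(100.0))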

  2. General solution of the Bagley-Torvik equation with fractional-order derivative

    NASA Astrophysics Data System (ADS)

    Wang, Z. H.; Wang, X.

    2010-05-01

    This paper investigates the general solution of the Bagley-Torvik equation with 1/2-order derivative or 3/2-order derivative. This fractional-order differential equation is changed into a sequential fractional-order differential equation (SFDE) with constant coefficients. The general solution of the SFDE is then expressed as a linear combination of fundamental solutions that are in terms of α-exponential functions, a kind of function that plays the same role as the classical exponential function. Because the number of fundamental solutions of the SFDE is greater than 2, the general solution of the SFDE depends on more than two free (independent) constants. This paper shows that the general solution of the Bagley-Torvik equation actually involves only two free constants, and it can be determined fully by the initial displacement and initial velocity.

  3. Fine Grained Chaos in AdS2 Gravity

    NASA Astrophysics Data System (ADS)

    Haehl, Felix M.; Rozali, Moshe

    2018-03-01

    Quantum chaos can be characterized by an exponential growth of the thermal out-of-time-order four-point function up to a scrambling time u^*. We discuss generalizations of this statement for certain higher-point correlation functions. For concreteness, we study the Schwarzian theory of a one-dimensional time reparametrization mode, which describes two-dimensional anti-de Sitter space (AdS2) gravity and the low-energy dynamics of the Sachdev-Ye-Kitaev model. We identify a particular set of 2k-point functions, characterized as being both "maximally braided" and "k-out of time order," which exhibit exponential growth until progressively longer time scales u^*(k) ∼ (k-1)u^*. We suggest an interpretation as scrambling of increasingly fine grained measures of quantum information, which correspondingly take progressively longer time to reach their thermal values.

  4. Fine Grained Chaos in AdS_{2} Gravity.

    PubMed

    Haehl, Felix M; Rozali, Moshe

    2018-03-23

    Quantum chaos can be characterized by an exponential growth of the thermal out-of-time-order four-point function up to a scrambling time û_*. We discuss generalizations of this statement for certain higher-point correlation functions. For concreteness, we study the Schwarzian theory of a one-dimensional time reparametrization mode, which describes two-dimensional anti-de Sitter space (AdS_2) gravity and the low-energy dynamics of the Sachdev-Ye-Kitaev model. We identify a particular set of 2k-point functions, characterized as being both "maximally braided" and "k-out of time order," which exhibit exponential growth until progressively longer time scales û_*^(k) ∼ (k-1)û_*. We suggest an interpretation as scrambling of increasingly fine grained measures of quantum information, which correspondingly take progressively longer time to reach their thermal values.

  5. Biological growth functions describe published site index curves for Lake States timber species.

    Treesearch

    Allen L. Lundgren; William A. Dolid

    1970-01-01

    Two biological growth functions, an exponential-monomolecular function and a simple monomolecular function, have been fit to published site index curves for 11 Lake States tree species: red, jack, and white pine, balsam fir, white and black spruce, tamarack, white-cedar, aspen, red oak, and paper birch. Both functions closely fit all published curves except those for...

  6. Periodic bidirectional associative memory neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Chen, Anping; Huang, Lihong; Liu, Zhigang; Cao, Jinde

    2006-05-01

    Some sufficient conditions are obtained for the existence and global exponential stability of a periodic solution to the general bidirectional associative memory (BAM) neural networks with distributed delays by using the continuation theorem of Mawhin's coincidence degree theory, the Lyapunov functional method and Young's inequality technique. These results are helpful for designing a globally exponentially stable and periodically oscillatory BAM neural network, and the conditions can be easily verified and applied in practice. An example is also given to illustrate our results.

  7. Global exponential stability of BAM neural networks with time-varying delays: The discrete-time case

    NASA Astrophysics Data System (ADS)

    Raja, R.; Marshal Anthoni, S.

    2011-02-01

    This paper deals with the problem of stability analysis for a class of discrete-time bidirectional associative memory (BAM) neural networks with time-varying delays. By employing the Lyapunov functional and linear matrix inequality (LMI) approach, a new sufficient condition is proposed for the global exponential stability of discrete-time BAM neural networks. The proposed LMI-based results can be easily checked with the LMI control toolbox. Moreover, an example is also provided to demonstrate the effectiveness of the proposed method.

  8. Global exponential stability of positive periodic solution of the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays.

    PubMed

    Zhao, Kaihong

    2018-12-01

    In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of positive periodic solution is proved by employing the fixed point theorem on cones. By constructing appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.

  9. Exploring conservative islands using correlated and uncorrelated noise

    NASA Astrophysics Data System (ADS)

    da Silva, Rafael M.; Manchein, Cesar; Beims, Marcus W.

    2018-02-01

    In this work, noise is used to analyze the penetration of regular islands in conservative dynamical systems. For this purpose we use the standard map choosing nonlinearity parameters for which a mixed phase space is present. The random variable which simulates noise assumes three distributions, namely equally distributed, normal or Gaussian, and power law (obtained from the same standard map but for other parameters). To investigate the penetration process and explore distinct dynamical behaviors which may occur, we use recurrence time statistics (RTS), Lyapunov exponents and the occupation rate of the phase space. Our main findings are as follows: (i) the standard deviations of the distributions are the most relevant quantity to induce the penetration; (ii) the penetration of islands induce power-law decays in the RTS as a consequence of enhanced trapping; (iii) for the power-law correlated noise an algebraic decay of the RTS is observed, even though sticky motion is absent; and (iv) although strong noise intensities induce an ergodic-like behavior with exponential decays of RTS, the largest Lyapunov exponent is reminiscent of the regular islands.
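
    For readers who want to reproduce the basic setup, the Python sketch below iterates a noisy standard map, p' = p + K*sin(theta) + xi and theta' = theta + p' (both mod 2*pi), and reports a crude occupation rate of the phase space; K, the noise strength and the Gaussian noise choice are illustrative assumptions (uniform or power-law noise can be swapped in).

    import numpy as np

    def noisy_standard_map(n_steps=100_000, K=1.5, noise=1e-3, seed=0):
        """Iterate the standard map with additive noise on the momentum update."""
        rng = np.random.default_rng(seed)
        theta, p = 0.5, 0.5
        traj = np.empty((n_steps, 2))
        for i in range(n_steps):
            xi = noise * rng.normal()
            p = (p + K * np.sin(theta) + xi) % (2 * np.pi)
            theta = (theta + p) % (2 * np.pi)
            traj[i] = theta, p
        return traj

    traj = noisy_standard_map()
    H, _, _ = np.histogram2d(traj[:, 0], traj[:, 1], bins=100)
    print("occupied fraction of a 100x100 phase-space grid:", (H > 0).mean())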

  10. Long-Term Evolution of Email Networks: Statistical Regularities, Predictability and Stability of Social Behaviors.

    PubMed

    Godoy-Lorite, Antonia; Guimerà, Roger; Sales-Pardo, Marta

    2016-01-01

    In social networks, individuals constantly drop ties and replace them by new ones in a highly unpredictable fashion. This highly dynamical nature of social ties has important implications for processes such as the spread of information or of epidemics. Several studies have demonstrated the influence of a number of factors on the intricate microscopic process of tie replacement, but the macroscopic long-term effects of such changes remain largely unexplored. Here we investigate whether, despite the inherent randomness at the microscopic level, there are macroscopic statistical regularities in the long-term evolution of social networks. In particular, we analyze the email network of a large organization with over 1,000 individuals throughout four consecutive years. We find that, although the evolution of individual ties is highly unpredictable, the macro-evolution of social communication networks follows well-defined statistical patterns, characterized by exponentially decaying log-variations of the weight of social ties and of individuals' social strength. At the same time, we find that individuals have social signatures and communication strategies that are remarkably stable over the scale of several years.

  11. Photocounting distributions for exponentially decaying sources.

    PubMed

    Teich, M C; Card, H C

    1979-05-01

    Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n >/= 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
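
    A quick Monte Carlo check of this setup is easy to write: draw a uniformly distributed start time, integrate the exponentially decaying intensity over the fixed window T, and sample Poisson counts with that mean. The Python sketch below does exactly that; the intensity, decay time, window and start-time range are arbitrary illustrative values, and the closed-form expressions of the paper are not reproduced.

    import numpy as np

    def sample_counts(n_trials=200_000, I0=5.0, tau=1.0, T=0.5, t_max=10.0, seed=2):
        """Counts in a window [t0, t0+T] for intensity I(t) = I0*exp(-t/tau), t0 uniform."""
        rng = np.random.default_rng(seed)
        t0 = rng.uniform(0.0, t_max, n_trials)
        mean_counts = I0 * tau * (np.exp(-t0 / tau) - np.exp(-(t0 + T) / tau))
        return rng.poisson(mean_counts)

    n = sample_counts()
    print("P(n = 0) =", (n == 0).mean(), " mean =", n.mean(), " variance =", n.var())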

  12. Radar correlated imaging for extended target by the combination of negative exponential restraint and total variation

    NASA Astrophysics Data System (ADS)

    Qian, Tingting; Wang, Lianlian; Lu, Guanghua

    2017-07-01

    Radar correlated imaging (RCI) introduces the optical correlated imaging technology to traditional microwave imaging, which has raised widespread concern recently. Conventional RCI methods neglect the structural information of complex extended target, which makes the quality of recovery result not really perfect, thus a novel combination of negative exponential restraint and total variation (NER-TV) algorithm for extended target imaging is proposed in this paper. The sparsity is measured by a sequential order one negative exponential function, then the 2D total variation technique is introduced to design a novel optimization problem for extended target imaging. And the proven alternating direction method of multipliers is applied to solve the new problem. Experimental results show that the proposed algorithm could realize high resolution imaging efficiently for extended target.

  13. Different Types of X-Ray Bursts from GRS 1915+105 and Their Origin

    NASA Astrophysics Data System (ADS)

    Yadav, J. S.; Rao, A. R.; Agrawal, P. C.; Paul, B.; Seetha, S.; Kasturirangan, K.

    1999-06-01

    We report X-ray observations of the Galactic X-ray transient source GRS 1915+105 with the pointed proportional counters of the Indian X-ray Astronomy Experiment (IXAE) onboard the Indian satellite IRS-P3, which show remarkable richness in temporal variability. The observations were carried out on 1997 June 12-29 and August 7-10, in the energy range of 2-18 keV and revealed the presence of very intense X-ray bursts. All the observed bursts have a slow exponential rise, a sharp linear decay, and broadly can be put in two classes: irregular and quasi-regular bursts in one class, and regular bursts in the other. The regular bursts are found to have two distinct timescales and to persist over extended durations. There is a strong correlation between the preceding quiescent time and the burst duration for the quasi-regular and irregular bursts. No such correlation is found for the regular bursts. The ratio of average flux during the burst time to the average flux during the quiescent phase is high and variable for the quasi-regular and irregular bursts, while it is low and constant for the regular bursts. We present a comprehensive picture of the various types of bursts observed in GRS 1915+105 in the light of the recent theories of advective accretion disks. We suggest that the peculiar bursts that we have seen are characteristic of the change of state of the source. The source can switch back and forth between the low-hard state and the high-soft state near critical accretion rates in a very short timescale, giving rise to the irregular and quasi-regular bursts. The fast timescale for the transition of the state is explained by invoking the appearance and disappearance of the advective disk in its viscous timescale. The periodicity of the regular bursts is explained by matching the viscous timescale with the cooling timescale of the postshock region. A test of the model is presented using the publicly available 13-60 keV RXTE/PCA data for irregular and regular bursts concurrent with our observations. It is found that the 13-60 keV flux relative to the 2-13 keV flux shows clear evidence for state change between the quiescent phase and the burst phase. The value of this ratio during burst is consistent with the values observed during the high-soft state seen on 1997 August 19, while its value during quiescent phase is consistent with the values observed during the low-hard state seen on 1997 May 8.

  14. Estimating Age Distributions of Base Flow in Watersheds Underlain by Single and Dual Porosity Formations Using Groundwater Transport Simulation and Weighted Weibull Functions

    NASA Astrophysics Data System (ADS)

    Sanford, W. E.

    2015-12-01

    Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. For both systems the two-parameter Weibull function nearly always produced a substantially better fit to the data than the one-parameter exponential function. For the single porosity system it was found that the use of three parameters was often optimal for accurately describing the base-flow age distribution, whereas for the dual porosity system the fourth parameter was often required to fit the more complicated response curves.
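
    The gain from the extra Weibull parameter can be illustrated with a short Python sketch (NumPy and SciPy): synthetic travel times stand in for MODPATH particle ages, and the maximum CDF misfit of a one-parameter exponential fit is compared with that of a two-parameter Weibull fit. The weighted four-parameter variant described above is not reproduced here.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    ages = rng.weibull(1.6, 5000) * 20.0                     # synthetic travel times (years)

    lam = 1.0 / ages.mean()                                  # exponential: one parameter
    shape, _, scale = stats.weibull_min.fit(ages, floc=0.0)  # Weibull adds a slope parameter

    x = np.sort(ages)
    empirical = np.arange(1, x.size + 1) / x.size
    err_exp = np.abs(empirical - (1.0 - np.exp(-lam * x))).max()
    err_wei = np.abs(empirical - stats.weibull_min.cdf(x, shape, 0.0, scale)).max()
    print(f"max CDF misfit: exponential {err_exp:.3f}, Weibull {err_wei:.3f}")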

  15. Application of Two-Parameter Stabilizing Functions in Solving a Convolution-Type Integral Equation by Regularization Method

    NASA Astrophysics Data System (ADS)

    Maslakov, M. L.

    2018-04-01

    This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
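
    The basic regularization step for such convolution-type equations of the first kind can be sketched in a few lines of Python: solve y = k * x + noise in the Fourier domain with a Tikhonov filter controlled by a single parameter alpha. The two-parameter stabilizing functions studied in the paper are not reproduced; the kernel, noise level and alpha below are arbitrary.

    import numpy as np

    def tikhonov_deconvolve(y, kernel, alpha=1e-2):
        """Tikhonov-regularized circular deconvolution in the Fourier domain."""
        K = np.fft.fft(kernel, y.size)
        X = np.conj(K) * np.fft.fft(y) / (np.abs(K) ** 2 + alpha)
        return np.real(np.fft.ifft(X))

    # toy example: blur a boxcar signal with an exponential kernel and recover it
    t = np.linspace(0, 1, 256, endpoint=False)
    x_true = ((t > 0.3) & (t < 0.6)).astype(float)
    kernel = np.exp(-np.arange(256) / 10.0)
    kernel /= kernel.sum()
    y = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(x_true)))
    y += np.random.default_rng(4).normal(0, 0.01, y.size)
    print("mean absolute error:", np.abs(tikhonov_deconvolve(y, kernel) - x_true).mean())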

  16. Computerized Method for the Generation of Molecular Transmittance Functions in the Infrared Region.

    DTIC Science & Technology

    1979-12-31

    exponent of the double exponential function were 'bumpy' for some cases. Since the nature of the transmittance does not predict this behavior, we... is recomputed for the original data using the piecewise-analytical transmission function... standard deviations between the actual tau...

  17. Mathematical modelling of the growth of human fetus anatomical structures.

    PubMed

    Dudek, Krzysztof; Kędzia, Wojciech; Kędzia, Emilia; Kędzia, Alicja; Derkowski, Wojciech

    2017-09-01

    The goal of this study was to present a procedure that would enable mathematical analysis of the increase of linear sizes of human anatomical structures, estimate mathematical model parameters and evaluate their adequacy. The section material consisted of 67 foetuses (rectus abdominis muscle) and 75 foetuses (biceps femoris muscle). The following methods were incorporated into the study: preparation and anthropologic methods, digital image acquisition, Image J computer system measurements and statistical analysis. We used an anthropologic method based on age determination with the use of the crown-rump length, CRL (V-TUB), by Scammon and Calkins. The choice of mathematical function should be based on the real course of the curve presenting the growth of an anatomical structure's linear size y in subsequent weeks t of pregnancy. Size changes can be described with a segmental-linear model or a one-function model with accuracy adequate for clinical purposes. The size-age interdependence is described by many functions. However, the following functions are most often considered: linear, polynomial, spline, logarithmic, power, exponential, power-exponential, log-logistic I and II, Gompertz's I and II and von Bertalanffy's function. With the use of the procedures described above, mathematical model parameters were assessed for V-PL (the total length of body) and CRL body length increases, the rectus abdominis total length h and its segments hI, hII, hIII, hIV, as well as the biceps femoris length and width of the long head (LHL and LHW) and of the short head (SHL and SHW). The best adjustments to measurement results were observed in the exponential and Gompertz's models.
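
    As an illustration of fitting one of the candidate functions listed above, the Python sketch below fits a Gompertz curve y(t) = A * exp(-b * exp(-c * t)) to synthetic length-versus-age data; the parameters A, b, c and the data are placeholders, not measurements from the foetal material.

    import numpy as np
    from scipy.optimize import curve_fit

    def gompertz(t, A, b, c):
        """Gompertz growth curve."""
        return A * np.exp(-b * np.exp(-c * t))

    weeks = np.linspace(10, 40, 16)     # gestational age (weeks), synthetic
    y = gompertz(weeks, 60.0, 8.0, 0.12) + np.random.default_rng(5).normal(0, 1.0, weeks.size)

    (A, b, c), _ = curve_fit(gompertz, weeks, y, p0=(50.0, 5.0, 0.1))
    print(f"fitted Gompertz parameters: A = {A:.1f}, b = {b:.2f}, c = {c:.3f}")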

  18. H∞ control problem of linear periodic piecewise time-delay systems

    NASA Astrophysics Data System (ADS)

    Xie, Xiaochen; Lam, James; Li, Panshuo

    2018-04-01

    This paper investigates the H∞ control problem based on exponential stability and weighted L2-gain analyses for a class of continuous-time linear periodic piecewise systems with time delay. A periodic piecewise Lyapunov-Krasovskii functional is developed by integrating a discontinuous time-varying matrix function with two global terms. By applying the improved constraints to the stability and L2-gain analyses, sufficient delay-dependent exponential stability and weighted L2-gain criteria are proposed for the periodic piecewise time-delay system. Based on these analyses, an H∞ control scheme is designed under the considerations of periodic state feedback control input and iterative optimisation. Finally, numerical examples are presented to illustrate the effectiveness of our proposed conditions.

  19. Exponential lag function projective synchronization of memristor-based multidirectional associative memory neural networks via hybrid control

    NASA Astrophysics Data System (ADS)

    Yuan, Manman; Wang, Weiping; Luo, Xiong; Li, Lixiang; Kurths, Jürgen; Wang, Xiao

    2018-03-01

    This paper is concerned with the exponential lag function projective synchronization of memristive multidirectional associative memory neural networks (MMAMNNs). First, we propose a new model of MMAMNNs with mixed time-varying delays. In the proposed approach, the mixed delays include time-varying discrete delays and distributed time delays. Second, we design two kinds of hybrid controllers. Traditional control methods lack the capability of reflecting variable synaptic weights. In this paper, the controllers are carefully designed to confirm the process of different types of synchronization in the MMAMNNs. Third, sufficient criteria guaranteeing the synchronization of the system are derived based on the drive-response concept. Finally, the effectiveness of the proposed mechanism is validated with numerical experiments.

  20. Design data for radars based on 13.9 GHz Skylab scattering coefficient measurements

    NASA Technical Reports Server (NTRS)

    Moore, R. K. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Measurements made at 13.9 GHz with the radar scatterometer on Skylab have been combined to produce median curves of the variation of scattering coefficient with angle of incidence out to 45 deg. Because of the large number of observations, and the large area averaged for each measured data point, these curves may be used as a new design base for radars. A reasonably good fit at larger angles is obtained using the theoretical expression based on an exponential height correlation function and also using Lambert's law. For angles under 10 deg, a different fit based on the exponential correlation function, and a fit based on geometric optics expressions are both reasonably valid.

  1. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    NASA Astrophysics Data System (ADS)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

    We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for this distribution form and discussed how the constraints affect the distribution function. It is speculated that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, the distribution cannot be a pure power law but must include an exponential cutoff, a point that may have been ignored in previous studies.

  2. Velocity and stress autocorrelation decay in isothermal dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Chaudhri, Anuj; Lukes, Jennifer R.

    2010-02-01

    The velocity and stress autocorrelation decay in a dissipative particle dynamics ideal fluid model is analyzed in this paper. The autocorrelation functions are calculated at three different friction parameters and three different time steps using the well-known Groot/Warren algorithm and newer algorithms including self-consistent leap-frog, self-consistent velocity Verlet and Shardlow first and second order integrators. At low friction values, the velocity autocorrelation function decays exponentially at short times, shows slower-than exponential decay at intermediate times, and approaches zero at long times for all five integrators. As friction value increases, the deviation from exponential behavior occurs earlier and is more pronounced. At small time steps, all the integrators give identical decay profiles. As time step increases, there are qualitative and quantitative differences between the integrators. The stress correlation behavior is markedly different for the algorithms. The self-consistent velocity Verlet and the Shardlow algorithms show very similar stress autocorrelation decay with change in friction parameter, whereas the Groot/Warren and leap-frog schemes show variations at higher friction factors. Diffusion coefficients and shear viscosities are calculated using Green-Kubo integration of the velocity and stress autocorrelation functions. The diffusion coefficients match well-known theoretical results at low friction limits. Although the stress autocorrelation function is different for each integrator, fluctuates rapidly, and gives poor statistics for most of the cases, the calculated shear viscosities still fall within range of theoretical predictions and nonequilibrium studies.

  3. A Study of the Thermal Environment Developed by a Traveling Slipper at High Velocity

    DTIC Science & Technology

    2013-03-01

    Power Partition Function. The next partition function takes the same formulation as the powered function but now the exponent is squared. The … function and note the squared term in the exponent. … Equation 4.27 (4.36). Thus far the three partition functions each give a predicted … hypothesized that the function would fall somewhere between the first exponential decay function and the power function. However, by squaring the exponent …

  4. In vivo chlorine and sodium MRI of rat brain at 21.1 T

    PubMed Central

    Elumalai, Malathy; Kitchen, Jason A.; Qian, Chunqi; Gor’kov, Peter L.; Brey, William W.

    2017-01-01

    Object MR imaging of low-gamma nuclei at the ultrahigh magnetic field of 21.1 T provides a new opportunity for understanding a variety of biological processes. Among these, chlorine and sodium are attracting attention for their involvement in brain function and cancer development. Materials and methods MRI of 35Cl and 23Na were performed and relaxation times were measured in vivo in normal rats (n = 3) and in rats with glioma (n = 3) at 21.1 T. The concentrations of both nuclei were evaluated using the center-out back-projection method. Results The T1 relaxation curve of chlorine in the normal rat head was fitted by a bi-exponential function (T1a = 4.8 ms (0.7), T1b = 24.4 ± 7 ms (0.3)) and compared with sodium (T1 = 41.4 ms). Free induction decays (FID) of chlorine and sodium in vivo were bi-exponential, with similar rapidly decaying components of T2a∗ = 0.4 ms and T2a∗ = 0.53 ms, respectively. Effects of the small acquisition matrix and the bi-exponential FIDs were assessed for quantification of chlorine (33.2 mM) and sodium (44.4 mM) in rat brain. Conclusion The study modeled a dramatic effect of the bi-exponential decay on MRI results. The observed increase in chlorine concentration in glioma (~1.5 times) relative to normal brain is consistent with the hypothesis that chlorine is important for tumor progression. PMID:23748497

  5. Monotonicity and Logarithmic Concavity of Two Functions Involving Exponential Function

    ERIC Educational Resources Information Center

    Liu, Ai-Qi; Li, Guo-Fu; Guo, Bai-Ni; Qi, Feng

    2008-01-01

    The function $\frac{1}{x^{2}} - \frac{e^{-x}}{(1-e^{-x})^{2}}$ for $x > 0$ is proved to be strictly decreasing. As an application of this monotonicity, the logarithmic concavity of the function $\frac{t}{e^{at} - e^{(a-1)t}}$ for $a$…

  6. Reliability analysis using an exponential power model with bathtub-shaped failure rate function: a Bayes study.

    PubMed

    Shehla, Romana; Khan, Athar Ali

    2016-01-01

    Models with a bathtub-shaped hazard function have been widely accepted in the fields of reliability and medicine and are particularly useful in reliability-related decision making and cost analysis. In this paper, the exponential power model, which can exhibit an increasing as well as a bathtub-shaped hazard rate, is studied. This article makes a Bayesian study of the model and simultaneously shows how posterior simulations based on Markov chain Monte Carlo algorithms can be straightforward and routine in R. The study is carried out for complete as well as censored data, under the assumption of weakly informative priors for the parameters. In addition, inferential interest focuses on the posterior distribution of non-linear functions of the parameters. The model is also extended to include continuous explanatory variables, and the R code is well illustrated. Two real data sets are considered for illustrative purposes.

  7. Multipole Vortex Blobs (MVB): Symplectic Geometry and Dynamics.

    PubMed

    Holm, Darryl D; Jacobs, Henry O

    2017-01-01

    Vortex blob methods are typically characterized by a regularization length scale, below which the dynamics are trivial for isolated blobs. In this article, we observe that the dynamics need not be trivial if one is willing to consider distributional derivatives of Dirac delta functionals as valid vorticity distributions. More specifically, a new singular vortex theory is presented for regularized Euler fluid equations of ideal incompressible flow in the plane. We determine the conditions under which such regularized Euler fluid equations may admit vorticity singularities which are stronger than delta functions, e.g., derivatives of delta functions. We also describe the symplectic geometry associated with these augmented vortex structures, and we characterize the dynamics as Hamiltonian. Applications to the design of numerical methods similar to vortex blob methods are also discussed. Such findings illuminate the rich dynamics which occur below the regularization length scale and enlighten our perspective on the potential for regularized fluid models to capture multiscale phenomena.

  8. Trigonometric Integration without Trigonometric Functions

    ERIC Educational Resources Information Center

    Quinlan, James; Kolibal, Joseph

    2016-01-01

    Teaching techniques of integration can be tedious and often uninspired. We present an obvious but underutilized approach for finding antiderivatives of various trigonometric functions using the complex exponential representation of the sine and cosine. The purpose goes beyond providing students an alternative approach to trigonometric integrals.…
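
    As a concrete illustration of the approach described above (our own example, not taken from the article), writing the sine in terms of complex exponentials reduces a standard trigonometric integral to integrating exponentials:

    \[
    \sin x = \frac{e^{ix}-e^{-ix}}{2i}, \qquad
    \sin^{2} x = \frac{2 - e^{2ix} - e^{-2ix}}{4},
    \]
    \[
    \int \sin^{2} x \,dx
      = \frac{x}{2} - \frac{e^{2ix}-e^{-2ix}}{8i} + C
      = \frac{x}{2} - \frac{\sin 2x}{4} + C .
    \]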

  9. Time prediction of failure a type of lamps by using general composite hazard rate model

    NASA Astrophysics Data System (ADS)

    Riaman; Lesmana, E.; Subartini, B.; Supian, S.

    2018-03-01

    This paper discusses estimation of a basic survival model to obtain the average predicted value of the lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The underlying random-time model is the exponential distribution, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through the construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the average failure time for this type of lamp. By grouping the data into several intervals and taking the average failure value at each interval, the average failure time of the model is calculated for each interval; the p-value obtained from the test is 0.3296.

  10. Attentional modulation of desensitization to odor.

    PubMed

    Fallon, Nicholas; Giesbrecht, Timo; Stancak, Andrej

    2018-05-22

    Subjective and behavioral responsiveness to odor diminishes during prolonged exposure. The precise mechanisms underlying olfactory desensitization are not fully understood, but previous studies indicate that the phenomenon may be modulated by central-cognitive processes. The present study investigated the effect of attention on perceived intensity during exposure to a pleasant odor. A within-subjects design was utilized with 19 participants attending 2 sessions. During each session, participants continuously rated their perceived intensity of a 10-minute exposure to a pleasant fragrance administered using an olfactometer. An auditory oddball task was implemented to manipulate the focus of attention in each session. Participants were instructed either to direct their attention toward the sounds, but still rate the odor, or to focus entirely on rating the odor. Analysis revealed three 50-second time windows with significantly lower mean intensity ratings during the distraction condition. Curve fitting of the data disclosed a linear function of desensitization in the focused-attention condition compared with an exponential decay function during the distraction condition, indicating an increased rate of initial desensitization when attention is distracted away from the odor. In the focused-attention condition, perceived intensity demonstrated a regular pattern of odor sensitivity occurring at approximately 1-2 minute intervals following initial desensitization. Spectral analysis of low-frequency oscillations confirmed the presence of augmented spectral power in this frequency range during focused relative to distracted conditions. The findings demonstrate for the first time modulation of odor desensitization specifically by attentional factors, exemplifying the relevance of top-down control for the ongoing perception of odor.

  11. Linear prediction and single-channel recording.

    PubMed

    Carter, A A; Oswald, R E

    1995-08-01

    The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
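
    A minimal numerical sketch of the idea behind the linear-prediction/SVD step is given below (a generic illustration, not the authors' implementation): for noise-free, uniformly sampled data composed of p decaying exponentials, a Hankel matrix built from the samples has numerical rank p, so counting the significant singular values estimates the number of exponential components.

    ```python
    import numpy as np
    from scipy.linalg import hankel

    # Synthetic dwell-time density sampled uniformly: a sum of two decaying
    # exponentials (amplitudes and time constants are arbitrary assumptions).
    dt = 0.1
    t = np.arange(0, 30, dt)
    y = 0.7 * np.exp(-t / 1.5) + 0.3 * np.exp(-t / 6.0)

    # Hankel matrix of the samples; its numerical rank equals the number of
    # exponential components for noiseless data.
    m = len(y) // 2
    H = hankel(y[:m], y[m - 1:])

    s = np.linalg.svd(H, compute_uv=False)
    n_components = int(np.sum(s > 1e-8 * s[0]))   # tolerance is an assumption
    print("leading singular values:", s[:4])
    print("estimated number of exponentials:", n_components)
    ```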

  12. A new variation of the Buckingham exponential-6 potential with a tunable, singularity-free short-range repulsion and an adjustable long-range attraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werhahn, Jasper C.; Miliordos, Evangelos; Xantheas, Sotiris S.

    2015-01-05

    We introduce new generalized (reverting to the original) and extended (not reverting to the original) 4-parameter forms of the (B-2) Potential Energy Function (PEF) of Wang et al. (L.-P. Wang, J. Chen and T. van Voorhis, J. Chem. Theor. Comp. 9, 452 (2013)), which is itself a modification of the Buckingham exponential-6 PEF. The new forms have a tunable, singularity-free short-range repulsion and an adjustable long-range attraction. They produce fits to high-quality ab initio data for the X-(H2O), X = F, Cl, Br, I and M+(H2O), M = Li, Na, K, Rb, Cs dimers that are between 1 and 2 orders of magnitude better than the original 3-parameter (B-2) and modified Buckingham exponential-6 PEFs. They are also slightly better than the 4-parameter generalized Buckingham exponential-6 (gBe-6) and of comparable quality with the 4-parameter extended Morse (eM) PEFs introduced recently by us.
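
    For context (the classical form, not the new generalized or extended PEFs introduced in this work), the Buckingham exponential-6 potential is

    \[
    V(r) = A\,e^{-Br} - \frac{C}{r^{6}} ,
    \]

    which diverges to $-\infty$ as $r \to 0$ because the dispersion term overwhelms the finite exponential repulsion; this spurious short-range behavior is precisely what a "singularity-free short-range repulsion" is designed to remove.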

  13. Extended q -Gaussian and q -exponential distributions from gamma random variables

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2015-05-01

    The family of q -Gaussian and q -exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q -Gaussian and q -exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q -Gaussian and modified q -exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
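
    A quick way to see the gamma connection numerically, under assumptions of our own rather than the paper's exact two-gamma construction, is the standard superstatistical route: an exponential random variable whose rate is itself gamma distributed is marginally Lomax (Pareto II) distributed, i.e. a q-exponential with q = (k+2)/(k+1) for gamma shape k.

    ```python
    import numpy as np
    rng = np.random.default_rng(0)

    # Superstatistics-style construction (an illustration, not necessarily the
    # paper's two-gamma representation): draw a rate from a gamma distribution,
    # then an exponential with that rate.  The marginal law is Lomax.
    k, theta = 3.0, 1.0              # gamma shape and scale (assumed values)
    n = 200_000
    rates = rng.gamma(shape=k, scale=theta, size=n)
    x = rng.exponential(scale=1.0 / rates)

    # Compare the empirical survival function with (1 + theta*x)^(-k).
    xs = np.linspace(0, 10, 50)
    emp = np.array([(x > v).mean() for v in xs])
    theo = (1.0 + theta * xs) ** (-k)
    print(np.max(np.abs(emp - theo)))   # small: sampling error only
    ```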

  14. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparse convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
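
    The core optimization step can be sketched with a generic iterative soft-thresholding algorithm (ISTA); this is a simplified stand-in for SpaRSA, shown only to make the l1-regularized formulation concrete, with a random transfer matrix and a synthetic sparse coefficient vector as assumptions.

    ```python
    import numpy as np

    def ista(A, y, lam, n_iter=500):
        """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)
            z = x - grad / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    # Synthetic example: sparse coefficient vector in a random dictionary.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((120, 400))
    x_true = np.zeros(400)
    x_true[[20, 150, 320]] = [1.0, -0.7, 0.5]
    y = A @ x_true + 0.01 * rng.standard_normal(120)

    x_hat = ista(A, y, lam=0.05)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
    ```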

  15. Safety in numbers in Australia: more walkers and bicyclists, safer walking and bicycling.

    PubMed

    Robinson, Dorothy L

    2005-04-01

    Overseas research shows that fatality and injury risks per cyclist and pedestrian are lower when there are more cyclists and pedestrians. Do Australian data follow the same exponential 'growth rule' where (Injuries)/(Amount of cycling) is proportional to (Amount of cycling)^(-0.6)? Fatality and injury risks were compared using three datasets: 1) fatalities and amounts of cycling in Australian States in the 1980s; 2) fatality and injury rates over time in Western Australia as cycling levels increased; and 3) deaths, serious head injuries and other serious injuries to cyclists and pedestrians in Victoria, before and after the fall in cycling with the helmet law. In Australia, the risks of fatality and injury per cyclist are lower when cycling is more prevalent. Cycling was safest and most popular in the Australian Capital Territory (ACT), Queensland and Western Australia (WA). New South Wales residents cycled only 47% as much as residents of Queensland and WA, but had 53% more fatalities per kilometre, consistent with the growth rule prediction of 52% more for half as much cycling. Cycling also became safer in WA as more people cycled. Hospitalisation rates per 10,000 regular cyclists fell from 29 to 15, and reported deaths and serious injuries from 5.6 to 3.8 as numbers of regular cyclists increased. In Victoria, after the introduction of compulsory helmets, there was a 30% reduction in cycling and it was associated with a higher risk of death or serious injury per cyclist, outweighing any benefits of increased helmet wearing. As with overseas data, the exponential growth rule fits Australian data well. If cycling doubles, the risk per kilometre falls by about 34%; conversely, if cycling halves, the risk per kilometre will be about 52% higher. Policies that adversely influence the amount of cycling (for example, compulsory helmet legislation) should be reviewed.
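
    The quoted percentages follow directly from the stated exponent (a quick arithmetic check of ours, not taken from the paper): with risk per kilometre proportional to (amount of cycling)^(-0.6),

    \[
    2^{-0.6} \approx 0.66 \ (\text{a fall of about } 34\%), \qquad
    0.5^{-0.6} \approx 1.52 \ (\text{about } 52\% \text{ higher}).
    \]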

  16. Cole-Davidson dynamics of simple chain models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dotson, Taylor C.; McCoy, John Dwane; Adolf, Douglas Brian

    2008-10-01

    Rotational relaxation functions of the end-to-end vector of short, freely jointed and freely rotating chains were determined from molecular dynamics simulations. The associated response functions were obtained from the one-sided Fourier transform of the relaxation functions. The Cole-Davidson function was used to fit the response functions with extensive use being made of Cole-Cole plots in the fitting procedure. For the systems studied, the Cole-Davidson function provided remarkably accurate fits [as compared to the transform of the Kohlrausch-Williams-Watts (KWW) function]. The only appreciable deviations from the simulation results were in the high frequency limit and were due to ballistic or free rotation effects. The accuracy of the Cole-Davidson function appears to be the result of the transition in the time domain from stretched exponential behavior at intermediate time to single exponential behavior at long time. Such a transition can be explained in terms of a distribution of relaxation times with a well-defined longest relaxation time. Since the Cole-Davidson distribution has a sharp cutoff in relaxation time (while the KWW function does not), it makes sense that the Cole-Davidson would provide a better frequency-domain description of the associated response function than the KWW function does.
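
    For reference (standard definitions, not results specific to this study), the two fitting functions compared here are the Cole-Davidson response function in the frequency domain and the KWW stretched exponential in the time domain,

    \[
    \chi^{*}_{\mathrm{CD}}(\omega) = \frac{\chi_{0}}{\left(1 + i\omega\tau_{\mathrm{CD}}\right)^{\beta_{\mathrm{CD}}}},
    \qquad
    \phi_{\mathrm{KWW}}(t) = \exp\!\left[-\left(t/\tau_{\mathrm{K}}\right)^{\beta_{\mathrm{K}}}\right],
    \qquad 0 < \beta \le 1 ,
    \]

    where the frequency-domain counterpart of the KWW function is obtained from a one-sided Fourier transform of the relaxation function, as described above.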

  17. Nonclassical states of light with a smooth P function

    NASA Astrophysics Data System (ADS)

    Damanet, François; Kübler, Jonas; Martin, John; Braun, Daniel

    2018-02-01

    There is a common understanding in quantum optics that nonclassical states of light are states that do not have a positive semidefinite and sufficiently regular Glauber-Sudarshan P function. Almost all known nonclassical states have P functions that are highly irregular, which makes working with them difficult and direct experimental reconstruction impossible. Here we introduce classes of nonclassical states with regular, non-positive-definite P functions. They are constructed by "puncturing" regular smooth positive P functions with negative Dirac-δ peaks or other sufficiently narrow smooth negative functions. We determine the parameter ranges for which such punctures are possible without losing the positivity of the state, the regimes yielding antibunching of light, and the expressions of the Wigner functions for all investigated punctured states. Finally, we propose some possible experimental realizations of such states.

  18. Global synchronization of memristive neural networks subject to random disturbances via distributed pinning control.

    PubMed

    Guo, Zhenyuan; Yang, Shaofu; Wang, Jun

    2016-12-01

    This paper presents theoretical results on global exponential synchronization of multiple memristive neural networks in the presence of external noise by means of two types of distributed pinning control. The multiple memristive neural networks are coupled in a general structure via a nonlinear function, which consists of a linear diffusive term and a discontinuous sign term. A pinning impulsive control law is introduced in the coupled system to synchronize all neural networks. Sufficient conditions are derived for ascertaining global exponential synchronization in mean square. In addition, a pinning adaptive control law is developed to achieve global exponential synchronization in mean square. Both pinning control laws utilize only partial state information received from the neighborhood of the controlled neural network. Simulation results are presented to substantiate the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
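
    For readers unfamiliar with the terminology, the notion used throughout these synchronization papers is the standard one (our paraphrase, not a formula quoted from the article): the coupled networks synchronize exponentially in mean square if there exist constants $\varepsilon > 0$ and $M > 0$ (the latter possibly depending on the initial conditions) such that

    \[
    \mathbb{E}\,\bigl\|x_{i}(t) - x_{j}(t)\bigr\|^{2} \;\le\; M\,e^{-\varepsilon t}, \qquad t \ge 0,
    \]

    for all pairs of networks $i, j$.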

  19. Stochastic exponential synchronization of memristive neural networks with time-varying delays via quantized control.

    PubMed

    Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong

    2018-08-01

    This paper focuses on stochastic exponential synchronization of delayed memristive neural networks (MNNs) with the aid of systems with interval parameters, which are established using the concept of the Filippov solution. A new intermittent controller and an adaptive controller with logarithmic quantization are constructed to deal simultaneously with the difficulties induced by time-varying delays, interval parameters, and stochastic perturbations. Moreover, not only can the control cost be reduced, but communication channels and bandwidth are also saved by using these controllers. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established to realize the exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Master-slave exponential synchronization of delayed complex-valued memristor-based neural networks via impulsive control.

    PubMed

    Li, Xiaofan; Fang, Jian-An; Li, Huiyuan

    2017-09-01

    This paper investigates master-slave exponential synchronization for a class of complex-valued memristor-based neural networks with time-varying delays via discontinuous impulsive control. Firstly, the master and slave complex-valued memristor-based neural networks with time-varying delays are translated to two real-valued memristor-based neural networks. Secondly, an impulsive control law is constructed and utilized to guarantee master-slave exponential synchronization of the neural networks. Thirdly, the master-slave synchronization problems are transformed into the stability problems of the master-slave error system. By employing linear matrix inequality (LMI) technique and constructing an appropriate Lyapunov-Krasovskii functional, some sufficient synchronization criteria are derived. Finally, a numerical simulation is provided to illustrate the effectiveness of the obtained theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. On the analytical determination of relaxation modulus of viscoelastic materials by Prony's interpolation method

    NASA Technical Reports Server (NTRS)

    Rodriguez, Pedro I.

    1986-01-01

    A computer implementation of Prony's method for curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capability because of the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) in order to obtain the equal time increments. The resulting information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least-squares solution can be applied to obtain the final form of the equation.
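
    A compact numerical sketch of the classical procedure is given below (a textbook version of Prony's method on equally spaced samples, not the I.G.D.S.-based implementation described in the report): a linear-prediction system gives the characteristic polynomial, its roots give the exponents, and a final linear least-squares step gives the amplitudes.

    ```python
    import numpy as np

    def prony(y, dt, p):
        """Fit y[k] ~ sum_j a_j * exp(s_j * k * dt) to equally spaced samples."""
        n = len(y)
        # 1) Linear prediction: y[k] = -c1*y[k-1] - ... - cp*y[k-p] for k >= p.
        A = np.column_stack([y[p - m - 1:n - m - 1] for m in range(p)])
        c = np.linalg.lstsq(A, -y[p:], rcond=None)[0]
        # 2) Roots of z^p + c1*z^(p-1) + ... + cp give mu_j = exp(s_j * dt).
        mu = np.roots(np.concatenate(([1.0], c)))
        s = np.log(mu) / dt
        # 3) Amplitudes from a Vandermonde-type least-squares problem.
        V = np.power.outer(mu, np.arange(n)).T     # V[k, j] = mu_j**k
        a = np.linalg.lstsq(V, y, rcond=None)[0]
        return a, s

    # Example: two decaying exponentials sampled at equal time increments.
    dt = 0.05
    t = np.arange(0, 5, dt)
    y = 2.0 * np.exp(-1.3 * t) + 0.5 * np.exp(-4.0 * t)
    a, s = prony(y, dt, p=2)
    print("amplitudes:", np.real_if_close(a))
    print("exponents :", np.real_if_close(s))
    ```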

  2. Evaluating the Use of Problem-Based Video Podcasts to Teach Mathematics in Higher Education

    ERIC Educational Resources Information Center

    Kay, Robin; Kletskin, Ilona

    2012-01-01

    Problem-based video podcasts provide short, web-based, audio-visual explanations of how to solve specific procedural problems in subject areas such as mathematics or science. A series of 59 problem-based video podcasts covering five key areas (operations with functions, solving equations, linear functions, exponential and logarithmic functions,…

  3. Can One Take the Logarithm or the Sine of a Dimensioned Quantity or a Unit? Dimensional Analysis Involving Transcendental Functions

    ERIC Educational Resources Information Center

    Matta, Cherif F.; Massa, Lou; Gubskaya, Anna V.; Knoll, Eva

    2011-01-01

    The fate of dimensions of dimensioned quantities that are inserted into the argument of transcendental functions such as logarithms, exponentiation, trigonometric, and hyperbolic functions is discussed. Emphasis is placed on common misconceptions that are not often systematically examined in undergraduate courses of physical sciences. The argument…
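
    The standard argument, summarized here for convenience (not quoted from the article), is that a transcendental function is defined by a power series whose terms would carry different dimensions if its argument did:

    \[
    e^{x} = \sum_{n=0}^{\infty} \frac{x^{n}}{n!} = 1 + x + \frac{x^{2}}{2!} + \cdots ,
    \]

    so if $x$ carried, say, units of length, the series would add a pure number to a length to an area, which is dimensionally inconsistent; hence the argument of $\exp$, $\log$, $\sin$, etc. must be dimensionless (e.g. $\ln(p/p_{0})$ rather than $\ln p$).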

  4. Regional height-diameter equations for major tree species of southwest Oregon.

    Treesearch

    H. Temesgen; D.W. Hann; V.J. Monleon

    2006-01-01

    Selected tree height and diameter functions were evaluated for their predictive abilities for major tree species of southwest Oregon. The equations included tree diameter alone, or diameter plus alternative measures of stand density and relative position. Two of the base equations were asymptotic functions, and two were exponential functional forms. The inclusion of...

  5. Analysis of crackling noise using the maximum-likelihood method: Power-law mixing and exponential damping.

    PubMed

    Salje, Ekhard K H; Planes, Antoni; Vives, Eduard

    2017-10-01

    Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
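
    The maximum-likelihood step for a continuous power law above a lower cutoff has a closed form; the snippet below is a generic sketch of that standard estimator (with a fixed cutoff and synthetic data), not the authors' analysis code for varying cutoffs.

    ```python
    import numpy as np

    def powerlaw_mle_alpha(x, x_min):
        """Continuous power-law exponent via maximum likelihood:
        alpha_hat = 1 + n / sum(log(x_i / x_min)) for samples x_i >= x_min."""
        x = np.asarray(x, dtype=float)
        x = x[x >= x_min]
        return 1.0 + len(x) / np.sum(np.log(x / x_min))

    # Quick self-test: draw from p(x) ~ x^(-alpha) above x_min by inversion.
    rng = np.random.default_rng(2)
    alpha_true, x_min = 2.5, 1.0
    u = rng.uniform(size=100_000)
    x = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))
    print(powerlaw_mle_alpha(x, x_min))   # close to 2.5
    ```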

  6. Quantitative photoplethysmography: Lambert-Beer law or inverse function incorporating light scatter.

    PubMed

    Cejnar, M; Kobler, H; Hunyor, S N

    1993-03-01

    Finger blood volume is commonly determined from measurement of infra-red (IR) light transmittance using the Lambert-Beer law of light absorption derived for use in non-scattering media, even when such transmission involves light scatter around the phalangeal bone. Simultaneous IR transmittance and finger volume were measured over the full dynamic range of vascular volumes in seven subjects and outcomes compared with data fitted according to the Lambert-Beer exponential function and an inverse function derived for light attenuation by scattering materials. Curves were fitted by the least-squares method and goodness of fit was compared using standard errors of estimate (SEE). The inverse function gave a better data fit in six of the subjects: mean SEE 1.9 (SD 0.7, range 0.7-2.8) and 4.6 (2.2, 2.0-8.0) respectively (p < 0.02, paired t-test). Thus, when relating IR transmittance to blood volume, as occurs in the finger during measurements of arterial compliance, an inverse function derived from a model of light attenuation by scattering media gives more accurate results than the traditional exponential fit.

  7. Corn Yield and Soil Nitrous Oxide Emission under Different Fertilizer and Soil Management: A Three-Year Field Experiment in Middle Tennessee.

    PubMed

    Deng, Qi; Hui, Dafeng; Wang, Junming; Iwuozo, Stephen; Yu, Chih-Li; Jima, Tigist; Smart, David; Reddy, Chandra; Dennis, Sam

    2015-01-01

    A three-year field experiment was conducted to examine the responses of corn yield and soil nitrous oxide (N2O) emission to various management practices in middle Tennessee. The management practices include no-tillage + regular applications of urea ammonium nitrate (NT-URAN); no-tillage + regular applications of URAN + denitrification inhibitor (NT-inhibitor); no-tillage + regular applications of URAN + biochar (NT-biochar); no-tillage + 20% applications of URAN + chicken litter (NT-litter); no-tillage + split applications of URAN (NT-split); and conventional tillage + regular applications of URAN as a control (CT-URAN). Fertilizer equivalent to 217 kg N ha^-1 was applied to each of the experimental plots. Results showed that no-tillage (NT-URAN) significantly increased corn yield by 28% over the conventional tillage (CT-URAN) due to soil water conservation. The management practices significantly altered soil N2O emission, with the highest in the CT-URAN (0.48 mg N2O m^-2 h^-1) and the lowest in the NT-inhibitor (0.20 mg N2O m^-2 h^-1) and NT-biochar (0.16 mg N2O m^-2 h^-1) treatments. Significant exponential relationships between soil N2O emission and water-filled pore space (WFPS) were revealed in all treatments. However, variations in soil N2O emission among the treatments were positively correlated with the moisture sensitivity of soil N2O emission, which likely reflects an interactive effect between soil properties and WFPS. Our results indicated that improved fertilizer and soil management have the potential to maintain highly productive corn yield while reducing greenhouse gas emissions.

  8. Corn Yield and Soil Nitrous Oxide Emission under Different Fertilizer and Soil Management: A Three-Year Field Experiment in Middle Tennessee

    PubMed Central

    Deng, Qi; Hui, Dafeng; Wang, Junming; Iwuozo, Stephen; Yu, Chih-Li; Jima, Tigist; Smart, David; Reddy, Chandra; Dennis, Sam

    2015-01-01

    Background A three-year field experiment was conducted to examine the responses of corn yield and soil nitrous oxide (N2O) emission to various management practices in middle Tennessee. Methodology/Principal Findings The management practices include no-tillage + regular applications of urea ammonium nitrate (NT-URAN); no-tillage + regular applications of URAN + denitrification inhibitor (NT-inhibitor); no-tillage + regular applications of URAN + biochar (NT-biochar); no-tillage + 20% applications of URAN + chicken litter (NT-litter); no-tillage + split applications of URAN (NT-split); and conventional tillage + regular applications of URAN as a control (CT-URAN). Fertilizer equivalent to 217 kg N ha^-1 was applied to each of the experimental plots. Results showed that no-tillage (NT-URAN) significantly increased corn yield by 28% over the conventional tillage (CT-URAN) due to soil water conservation. The management practices significantly altered soil N2O emission, with the highest in the CT-URAN (0.48 mg N2O m^-2 h^-1) and the lowest in the NT-inhibitor (0.20 mg N2O m^-2 h^-1) and NT-biochar (0.16 mg N2O m^-2 h^-1) treatments. Significant exponential relationships between soil N2O emission and water-filled pore space (WFPS) were revealed in all treatments. However, variations in soil N2O emission among the treatments were positively correlated with the moisture sensitivity of soil N2O emission, which likely reflects an interactive effect between soil properties and WFPS. Conclusion/Significance Our results indicated that improved fertilizer and soil management have the potential to maintain highly productive corn yield while reducing greenhouse gas emissions. PMID:25923716

  9. Fractional Stability of Trunk Acceleration Dynamics of Daily-Life Walking: Toward a Unified Concept of Gait Stability

    PubMed Central

    Ihlen, Espen A. F.; van Schooten, Kimberley S.; Bruijn, Sjoerd M.; Pijnappels, Mirjam; van Dieën, Jaap H.

    2017-01-01

    Over the last decades, various measures have been introduced to assess stability during walking. All of these measures assume that gait stability may be equated with exponential stability, where dynamic stability is quantified by a Floquet multiplier or Lyapunov exponent. These specific constructs of dynamic stability assume that the gait dynamics are time independent and without phase transitions. In this case the temporal change in distance, d(t), between neighboring trajectories in state space is assumed to be an exponential function of time. However, results from walking models and empirical studies show that the assumptions of exponential stability break down in the vicinity of phase transitions that are present in each step cycle. Here we apply a general non-exponential construct of gait stability, called fractional stability, which can define dynamic stability in the presence of phase transitions. Fractional stability employs the fractional indices, α and β, of differential operator which allow modeling of singularities in d(t) that cannot be captured by exponential stability. The fractional stability provided an improved fit of d(t) compared to exponential stability when applied to trunk accelerations during daily-life walking in community-dwelling older adults. Moreover, using multivariate empirical mode decomposition surrogates, we found that the singularities in d(t), which were well modeled by fractional stability, are created by phase-dependent modulation of gait. The new construct of fractional stability may represent a physiologically more valid concept of stability in vicinity of phase transitions and may thus pave the way for a more unified concept of gait stability. PMID:28900400

  10. Recurrence formulas for fully exponentially correlated four-body wave functions

    NASA Astrophysics Data System (ADS)

    Harris, Frank E.

    2009-03-01

    Formulas are presented for the recursive generation of four-body integrals in which the integrand consists of arbitrary integer powers (≥-1) of all the interparticle distances rij , multiplied by an exponential containing an arbitrary linear combination of all the rij . These integrals are generalizations of those encountered using Hylleraas basis functions and include all that are needed to make energy computations on the Li atom and other four-body systems with a fully exponentially correlated Slater-type basis of arbitrary quantum numbers. The only quantities needed to start the recursion are the basic four-body integral first evaluated by Fromm and Hill plus some easily evaluated three-body “boundary” integrals. The computational labor in constructing integral sets for practical computations is less than when the integrals are generated using explicit formulas obtained by differentiating the basic integral with respect to its parameters. Computations are facilitated by using a symbolic algebra program (MAPLE) to compute array index pointers and present syntactically correct FORTRAN source code as output; in this way it is possible to obtain error-free high-speed evaluations with minimal effort. The work can be checked by verifying sum rules the integrals must satisfy.

  11. Rate laws of the self-induced aggregation kinetics of Brownian particles

    NASA Astrophysics Data System (ADS)

    Mondal, Shrabani; Sen, Monoj Kumar; Baura, Alendu; Bag, Bidhan Chandra

    2016-03-01

    In this paper we have studied the self-induced aggregation kinetics of Brownian particles in the presence of both multiplicative and additive noises. In addition to the drift due to the self-aggregation process, the environment may induce a drift term in the presence of a multiplicative noise. There is then an interplay between the two drift terms, which may qualitatively account for the appearance of the different laws of the aggregation process. At low strength of white multiplicative noise, the cluster number decreases as a Gaussian function of time. If the noise strength becomes appreciably large, the variation of the cluster number with time is fitted well by a mono-exponentially decaying function of time. For the additive-noise-driven case, the decrease of the cluster number can be described by a power law. In the case of the colored multiplicative-noise-driven process, however, the cluster number decays multi-exponentially. We have also explored how the rate constant (in the mono-exponential decay case) depends on the strength of the interference of the noises and their intensity, and how the structure factor at long time depends on the strength of the cross correlation (CC) between the additive and the multiplicative noises.

  12. Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.

    PubMed

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq  (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on the artificial datasets and four real microarray gene expression datasets, such as real diffuse large B-cell lymphoma (DCBCL), the lung cancer, and the AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.

  13. Geomorphic effectiveness of long profile shape and role of inherent geological controls, Ganga River Basin, India

    NASA Astrophysics Data System (ADS)

    Sonam, Sonam; Jain, Vikrant

    2017-04-01

    The river long profile is one of the fundamental geomorphic parameters and provides a platform to study the interaction of geological and geomorphic processes at different time scales. Long profile shape is governed by geological processes at the 10^5-10^6 year time scale, and it controls modern-day (10^0-10^1 year time scale) fluvial processes by controlling the spatial variability of channel slope. Identification of an appropriate model for the river long profile may provide a tool to analyse the quantitative relationship between basin geology, profile shape and its geomorphic effectiveness. A systematic analysis of long profiles has been carried out for the Himalayan tributaries of the Ganga River basin. Long profile shape and the stream power distribution pattern are derived using SRTM DEM data (90 m spatial resolution). Peak discharge data from 34 stations are used for the hydrological analysis. Lithological variability and major thrusts are marked along the river long profiles. The best fit of the long profile is analysed for power, logarithmic and exponential functions. A second-order exponential function provides the best representation of the long profiles. The second-order exponential equation is Z = K1*exp(-β1*L) + K2*exp(-β2*L), where Z is the elevation of the channel long profile, L is the length, and K and β are coefficients of the exponential function. K1 and K2 are the proportions of elevation change of the long profile represented by the fast (β1) and slow (β2) decay coefficients of the river long profile. Different values of the coefficients express the variability in long profile shapes and are related to the litho-tectonic variability of the study area. The channel slope along the long profile is estimated by taking the derivative of the exponential function. The stream power distribution pattern along the long profile is estimated by combining the discharge and the long profile slope. A sensitivity analysis of the stream power distribution with respect to the decay coefficients of the second-order exponential equation is carried out for a range of coefficient values. Our analysis suggests that the amplitude of the stream power peak depends on K1, the proportion of elevation change associated with the fast decay exponent, while the location of the stream power peak depends on the long profile decay coefficient (β1). Different long profile shapes owing to litho-tectonic variability across the Himalayas are responsible for the spatial variability of the stream power distribution pattern. Most of the stream power peaks lie in the Higher Himalaya. In general, eastern rivers have higher stream power in the hinterland area and low stream power in the alluvial plains. This is responsible for (1) higher erosion rates and sediment supply in the hinterland of eastern rivers, (2) the incised and stable nature of channels in the western alluvial plains, and (3) aggrading channels of dynamic nature in the eastern alluvial plains. Our study shows that the spatial variability of litho-units defines the coefficients of the long profile function, which in turn control the position and magnitude of the stream power maxima and hence the geomorphic variability in a fluvial system.
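
    Fitting the stated second-order exponential form to an extracted long profile is a routine nonlinear least-squares problem; the sketch below uses synthetic profile data and arbitrary starting values, not the SRTM-derived profiles of the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def long_profile(L, K1, b1, K2, b2):
        """Second-order exponential long profile: Z = K1*exp(-b1*L) + K2*exp(-b2*L)."""
        return K1 * np.exp(-b1 * L) + K2 * np.exp(-b2 * L)

    # Synthetic profile: distance downstream (km) vs. elevation (m), assumed values.
    L = np.linspace(0, 800, 200)
    Z = long_profile(L, 3500.0, 0.02, 900.0, 0.002)
    Z += np.random.default_rng(3).normal(0, 10, L.size)

    p0 = (3000.0, 0.01, 1000.0, 0.001)                 # rough initial guesses
    (K1, b1, K2, b2), _ = curve_fit(long_profile, L, Z, p0=p0)
    print(f"K1={K1:.0f}, beta1={b1:.4f}, K2={K2:.0f}, beta2={b2:.5f}")

    # Channel slope is the analytical derivative of the fitted profile.
    slope = K1 * b1 * np.exp(-b1 * L) + K2 * b2 * np.exp(-b2 * L)
    ```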

  14. Clustered Regularly Interspaced Short Palindromic Repeats/Cas9 Triggered Isothermal Amplification for Site-Specific Nucleic Acid Detection.

    PubMed

    Huang, Mengqi; Zhou, Xiaoming; Wang, Huiying; Xing, Da

    2018-02-06

    A novel CRISPR/Cas9-triggered isothermal exponential amplification reaction (CAS-EXPAR) strategy based on CRISPR/Cas9 cleavage and nicking endonuclease (NEase) mediated nucleic acid amplification was developed for rapid and site-specific nucleic acid detection. CAS-EXPAR was primed by the target DNA fragment produced by CRISPR/Cas9 cleavage, and the amplification reaction proceeded cyclically to generate a large number of DNA replicates, which were detected using a real-time fluorescence monitoring method. This strategy, which combines the advantages of CRISPR/Cas9 and exponential amplification, showed high specificity as well as rapid amplification kinetics. Unlike conventional nucleic acid amplification reactions, CAS-EXPAR does not require exogenous primers, which often cause target-independent amplification. Instead, primers are first generated by Cas9/sgRNA-directed site-specific cleavage of the target and accumulate during the reaction. It was demonstrated that this strategy gave a detection limit of 0.82 amol and showed excellent specificity in discriminating single-base mismatches. Moreover, the applicability of this method to detect DNA methylation and L. monocytogenes total RNA was also verified. Therefore, CAS-EXPAR may provide a new paradigm for efficient nucleic acid amplification and hold potential for molecular diagnostic applications.

  15. Beyond Word Frequency: Bursts, Lulls, and Scaling in the Temporal Distributions of Words

    PubMed Central

    Altmann, Eduardo G.; Pierrehumbert, Janet B.; Motter, Adilson E.

    2009-01-01

    Background Zipf's discovery that word frequency distributions obey a power law established parallels between biological and physical processes, and language, laying the groundwork for a complex systems perspective on human communication. More recent research has also identified scaling regularities in the dynamics underlying the successive occurrences of events, suggesting the possibility of similar findings for language as well. Methodology/Principal Findings By considering frequent words in USENET discussion groups and in disparate databases where the language has different levels of formality, here we show that the distributions of distances between successive occurrences of the same word display bursty deviations from a Poisson process and are well characterized by a stretched exponential (Weibull) scaling. The extent of this deviation depends strongly on semantic type – a measure of the logicality of each word – and less strongly on frequency. We develop a generative model of this behavior that fully determines the dynamics of word usage. Conclusions/Significance Recurrence patterns of words are well described by a stretched exponential distribution of recurrence times, an empirical scaling that cannot be anticipated from Zipf's law. Because the use of words provides a uniquely precise and powerful lens on human thought and activity, our findings also have implications for other overt manifestations of collective human dynamics. PMID:19907645
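
    For reference (a standard form, not a result specific to this study), the stretched-exponential (Weibull) scaling of recurrence times contrasted with a Poisson process reads

    \[
    P(\tau \ge t) = \exp\!\left[-\left(t/\tau_{0}\right)^{\beta}\right], \qquad 0 < \beta \le 1,
    \]

    which reduces to the memoryless exponential law of a Poisson process when $\beta = 1$ and produces burstiness (an excess of both short and very long gaps) when $\beta < 1$.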

  16. Feasibility of inverse problem solution for determination of city emission function from night sky radiance measurements

    NASA Astrophysics Data System (ADS)

    Petržala, Jaromír

    2018-07-01

    The knowledge of the emission function of a city is crucial for simulation of the sky glow in its vicinity. Indirect methods to retrieve this function from radiances measured over a part of the sky have recently been developed. In principle, such methods represent an ill-posed inverse problem. This paper presents a theoretical feasibility study of various approaches to solving this inverse problem. In particular, it tests the fitness of various stabilizing functionals within Tikhonov regularization. Further, the L-curve and generalized cross validation methods were investigated as indicators of an optimal regularization parameter. First, we created a theoretical model that expresses the sky spectral radiance as a functional of the emission spectral radiance. Subsequently, all the mentioned approaches were examined in numerical experiments with synthetic data generated for a fictitious city and contaminated with random errors. The results demonstrate that the second-order Tikhonov regularization method, together with a regularization parameter chosen by the L-curve maximum-curvature criterion, provides solutions that are in good agreement with the assumed model emission functions.
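
    To make the regularization step concrete, here is a minimal generic sketch of second-order Tikhonov regularization for a discrete linear inverse problem y = Ax (a toy forward matrix, not the sky-radiance functional of the paper): the stabilizing functional penalizes the second differences of the solution, and in practice λ would be picked by the L-curve or generalized cross validation.

    ```python
    import numpy as np

    def tikhonov_second_order(A, y, lam):
        """Solve min ||A x - y||^2 + lam^2 * ||D2 x||^2 via the normal equations,
        where D2 is the second-order finite-difference (smoothness) operator."""
        n = A.shape[1]
        D2 = np.diff(np.eye(n), n=2, axis=0)      # (n-2) x n second-difference matrix
        lhs = A.T @ A + lam**2 * (D2.T @ D2)
        return np.linalg.solve(lhs, A.T @ y)

    # Toy ill-posed problem: a smoothing (blurring) forward operator.
    rng = np.random.default_rng(4)
    n = 80
    x_true = np.exp(-0.5 * ((np.arange(n) - 40) / 8.0) ** 2)   # smooth "emission function"
    A = np.array([[np.exp(-0.1 * abs(i - j)) for j in range(n)] for i in range(n)])
    y = A @ x_true + 0.01 * rng.standard_normal(n)

    for lam in (1e-3, 1e-1, 1e1):                 # would be chosen by L-curve/GCV
        x_hat = tikhonov_second_order(A, y, lam)
        print(lam, np.linalg.norm(x_hat - x_true))
    ```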

  17. Quantum mechanical generalized phase-shift approach to atom-surface scattering: a Feshbach projection approach to dealing with closed channel effects.

    PubMed

    Maji, Kaushik; Kouri, Donald J

    2011-03-28

    We have developed a new method for solving quantum dynamical scattering problems, using the time-independent Schrödinger equation (TISE), based on a novel method to generalize a "one-way" quantum mechanical wave equation, impose correct boundary conditions, and eliminate exponentially growing closed channel solutions. The approach is readily parallelized to achieve approximate N^2 scaling, where N is the number of coupled equations. The full two-way nature of the TISE is included while propagating the wave function in the scattering variable and the full S-matrix is obtained. The new algorithm is based on a "Modified Cayley" operator splitting approach, generalizing earlier work where the method was applied to the time-dependent Schrödinger equation. All scattering variable propagation approaches to solving the TISE involve solving a Helmholtz-type equation, and for more than one degree of freedom, these are notoriously ill-behaved, due to the unavoidable presence of exponentially growing contributions to the numerical solution. Traditionally, the method used to eliminate exponential growth has posed a major obstacle to the full parallelization of such propagation algorithms. We stabilize by using the Feshbach projection operator technique to remove all the nonphysical exponentially growing closed channels, while retaining all of the propagating open channel components, as well as exponentially decaying closed channel components.

  18. Grouping and the pitch of a mistuned fundamental component: Effects of applying simultaneous multiple mistunings to the other harmonics.

    PubMed

    Roberts, Brian; Holmes, Stephen D

    2006-12-01

    Mistuning a harmonic produces an exaggerated change in its pitch. This occurs because the component becomes inconsistent with the regular pattern that causes the other harmonics (constituting the spectral frame) to integrate perceptually. These pitch shifts were measured when the fundamental (F0) component of a complex tone (nominal F0 frequency = 200 Hz) was mistuned by +8% and -8%. The pitch-shift gradient was defined as the difference between these values and its magnitude was used as a measure of frame integration. An independent and random perturbation (spectral jitter) was applied simultaneously to most or all of the frame components. The gradient magnitude declined gradually as the degree of jitter increased from 0% to +/-40% of F0. The component adjacent to the mistuned target made the largest contribution to the gradient, but more distant components also contributed. The stimuli were passed through an auditory model, and the exponential height of the F0-period peak in the averaged summary autocorrelation function correlated well with the gradient magnitude. The fit improved when the weighting on more distant channels was attenuated by a factor of three per octave. The results are consistent with a grouping mechanism that computes a weighted average of periodicity strength across several components.

  19. Data Modeling Using Finite Differences

    ERIC Educational Resources Information Center

    Rhoads, Kathryn; Mendoza Epperson, James A.

    2017-01-01

    The Common Core State Standards for Mathematics (CCSSM) states that high school students should be able to recognize patterns of growth in linear, quadratic, and exponential functions and construct such functions from tables of data (CCSSI 2010). In their work with practicing secondary teachers, the authors found that teachers may make some tacit…
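
    The underlying idea is easy to demonstrate numerically (our own illustration of the finite-difference criterion, not an example from the article): for equally spaced inputs, linear data have constant first differences, quadratic data have constant second differences, and exponential data have constant successive ratios.

    ```python
    import numpy as np

    x = np.arange(6)                      # equally spaced inputs
    linear = 3 * x + 1
    quadratic = x**2 - 2 * x + 5
    exponential = 4 * 2.0**x

    print(np.diff(linear))                      # constant first differences  -> linear
    print(np.diff(quadratic, n=2))              # constant second differences -> quadratic
    print(exponential[1:] / exponential[:-1])   # constant ratios             -> exponential
    ```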

  20. University of Chicago School Mathematics Project (UCSMP) Algebra. WWC Intervention Report

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2009

    2009-01-01

    University of Chicago School Mathematics Project (UCSMP) Algebra is a one-year course covering three primary topics: (1) linear and quadratic expressions, sentences, and functions; (2) exponential expressions and functions; and (3) linear systems. Topics from geometry, probability, and statistics are integrated with the appropriate algebra.…

  1. Baldcypress Height-Diameter Equations and Their Prediction Confidence Intervals

    Treesearch

    Bernard R. Parresol

    1992-01-01

    Height-diameter relationships are an important component in yield estimation, stand description, and damage appraisals. A nonlinear exponential function used extensively in the northwest United States was chosen for bald cypress (Taxodium distichum (L.) Rich.). Homogeneity and normality of residuals were examined, and the function as well as the...

  2. Navier-Stokes-Voigt Equations with Memory in 3D Lacking Instantaneous Kinematic Viscosity

    NASA Astrophysics Data System (ADS)

    Di Plinio, Francesco; Giorgini, Andrea; Pata, Vittorino; Temam, Roger

    2018-04-01

    We consider a Navier-Stokes-Voigt fluid model where the instantaneous kinematic viscosity has been completely replaced by a memory term incorporating hereditary effects, in presence of Ekman damping. Unlike the classical Navier-Stokes-Voigt system, the energy balance involves the spatial gradient of the past history of the velocity rather than providing an instantaneous control on the high modes. In spite of this difficulty, we show that our system is dissipative in the dynamical systems sense and even possesses regular global and exponential attractors of finite fractal dimension. Such features of asymptotic well-posedness in absence of instantaneous high modes dissipation appear to be unique within the realm of dynamical systems arising from fluid models.

  3. The QCD form factor of heavy quarks at NNLO

    NASA Astrophysics Data System (ADS)

    Gluza, J.; Mitov, A.; Moch, S.; Riemann, T.

    2009-07-01

    We present an analytical calculation of the two-loop QCD corrections to the electromagnetic form factor of heavy quarks. The two-loop contributions to the form factor are reduced to linear combinations of master integrals, which are computed through higher orders in the parameter of dimensional regularization epsilon = (4-D)/2. Our result includes all terms of order epsilon at two loops and extends the previous literature. We apply the exponentiation of the heavy-quark form factor to derive new improved three-loop expansions in the high-energy limit. We also discuss the implications for predictions of massive n-parton amplitudes based on massless results in the limit, where the quark mass is small compared to all kinematical invariants.

  4. Significant Figure Rules for General Arithmetic Functions.

    ERIC Educational Resources Information Center

    Graham, D. M.

    1989-01-01

    Provides some significant figure rules used in chemistry including the general theoretical basis; logarithms and antilogarithms; exponentiation (with exactly known exponents); sines and cosines; and the extreme value rule. (YP)

  5. Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times

    NASA Astrophysics Data System (ADS)

    Fa, Kwok Sau

    2012-09-01

    In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power law and stretched exponential functions for the waiting time probability distribution function; this function can fit well the waiting time distribution for BUND futures traded at LIFFE in 1997.

  6. Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model

    PubMed Central

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq  (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on the artificial datasets and four real microarray gene expression datasets, such as real diffuse large B-cell lymphoma (DCBCL), the lung cancer, and the AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods. PMID:25506389

  7. Spectrum analysis of radar life signal in the three kinds of theoretical models

    NASA Astrophysics Data System (ADS)

    Yang, X. F.; Ma, J. F.; Wang, D.

    2017-02-01

    In a single-frequency continuous-wave radar life-detection system based on the Doppler effect, the theoretical model of the radar life signal is usually expressed as a real function, and this leads to a prediction that cannot be confirmed by experiment. When the phase produced by the distance between the measured object and the radar head is an integer multiple of π, the main spectral components of the life signal (respiration and heartbeat) are absent from the radar life signal; if this phase is an odd multiple of π/2, the respiration and heartbeat spectral components are strongest. In this paper, we take the Doppler effect as the basic theory and use three different mathematical expressions, a real function, a complex exponential function, and a Bessel-function expansion, to establish the theoretical model of the radar life signal. Simulation analysis reveals that the Bessel-expansion model resolves the problem of the real-function form. Compared with the complex-exponential model, the Bessel-expansion model contains far fewer spurious spectral lines, which is more consistent with the actual situation.
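
    The Bessel-expansion model rests on the Jacobi-Anger identity; writing the phase modulation produced by a single body-motion tone as a sinusoid and expanding it (a standard manipulation in Doppler vital-sign analysis, sketched here in our own notation rather than the paper's) makes the harmonic sidebands and their phase-dependent weights explicit:

    \[
    e^{\,i z \sin\omega_{b} t} \;=\; \sum_{n=-\infty}^{\infty} J_{n}(z)\, e^{\,i n \omega_{b} t} ,
    \]

    so a received signal of the form $\cos\!\bigl(\omega_{c} t + z\sin\omega_{b}t + \varphi_{0}\bigr)$ carries sidebands at multiples of the body frequency $\omega_{b}$ with amplitudes $J_{n}(z)$, weighted by $\cos\varphi_{0}$ for even $n$ and $\sin\varphi_{0}$ for odd $n$; the cases $\varphi_{0}=k\pi$ and $\varphi_{0}=(2k+1)\pi/2$ then correspond to the vanishing and maximal fundamental components described above.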

  8. Zeta Function Regularization in Casimir Effect Calculations and J. S. DOWKER's Contribution

    NASA Astrophysics Data System (ADS)

    Elizalde, Emilio

    2012-06-01

    A summary of relevant contributions, ordered in time, to the subject of operator zeta functions and their application to physical issues is provided. The description ends with the seminal contributions of Stephen Hawking and Stuart Dowker and collaborators, considered by many authors as the actual starting point of the introduction of zeta function regularization methods in theoretical physics, in particular, for quantum vacuum fluctuation and Casimir effect calculations. After recalling a number of the strengths of this powerful and elegant method, some of its limitations are discussed. Finally, recent results of the so-called operator regularization procedure are presented.
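
    As a reminder of the basic construction (standard material summarized by us, not taken from this article), the zeta function of a positive elliptic operator A with eigenvalues λ_n is analytically continued from

    \[
    \zeta_{A}(s) = \sum_{n} \lambda_{n}^{-s} ,
    \]

    and formally divergent quantities are assigned the values of the continuation, e.g. the regularized determinant $\det A := e^{-\zeta_{A}'(0)}$ and a vacuum energy $E_{0} = \tfrac{1}{2}\sum_{n}\omega_{n} \to \tfrac{1}{2}\,\zeta_{\omega}(-1)$ with $\zeta_{\omega}(s)=\sum_{n}\omega_{n}^{-s}$; the textbook Casimir example uses $\sum_{n\ge 1} n = \zeta_{R}(-1) = -\tfrac{1}{12}$.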

  9. Zeta Function Regularization in Casimir Effect Calculations and J. S. Dowker's Contribution

    NASA Astrophysics Data System (ADS)

    Elizalde, Emilio

    2012-07-01

    A summary of relevant contributions, ordered in time, to the subject of operator zeta functions and their application to physical issues is provided. The description ends with the seminal contributions of Stephen Hawking and Stuart Dowker and collaborators, considered by many authors as the actual starting point of the introduction of zeta function regularization methods in theoretical physics, in particular, for quantum vacuum fluctuation and Casimir effect calculations. After recalling a number of the strengths of this powerful and elegant method, some of its limitations are discussed. Finally, recent results of the so-called operator regularization procedure are presented.

  10. A new Fortran 90 program to compute regular and irregular associated Legendre functions (new version announcement)

    NASA Astrophysics Data System (ADS)

    Schneider, Barry I.; Segura, Javier; Gil, Amparo; Guan, Xiaoxu; Bartschat, Klaus

    2018-04-01

    This is a revised and updated version of a modern Fortran 90 code to compute the regular Plm (x) and irregular Qlm (x) associated Legendre functions for all x ∈(- 1 , + 1) (on the cut) and | x | > 1 and integer degree (l) and order (m). The revision was prompted by comments from Prof. James Bremer of the UC Davis Mathematics Department, who discovered errors in the code for large integer degree and order for the normalized regular Legendre functions on the cut.
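
    For readers who just want a quick, independent check of values on the cut, SciPy's `lpmv` evaluates the unnormalized regular functions P_l^m(x) for |x| ≤ 1 and integer order. This is only a cross-check sketch, not the announced Fortran package; it does not cover |x| > 1, the irregular Q_l^m, or the normalized functions discussed above.

```python
import numpy as np
from scipy.special import lpmv

# Regular associated Legendre functions P_l^m(x) on the cut (-1, 1).
l, m = 5, 3
x = np.linspace(-0.99, 0.99, 5)
print(lpmv(m, l, x))

# Spot check against the closed form P_3^3(x) = -15 (1 - x^2)^(3/2)
# (Condon-Shortley phase included, as in SciPy's convention).
print(np.allclose(lpmv(3, 3, x), -15.0 * (1.0 - x**2) ** 1.5))
```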

  11. Radial profile of pressure in a storm ring current as a function of Dst

    NASA Astrophysics Data System (ADS)

    Kovtyukh, A. S.

    2010-06-01

    Using satellite data obtained near the equatorial plane during 12 magnetic storms with amplitudes from -61 down to -422 nT, the dependences of the position L_m of the maximum in the L-profile of ring current (RC) pressure on the current value of Dst are constructed, and their analytical approximations are derived. It is established that the function L_m(Dst) is steeper during the recovery phase than during the storm's main phase. The shape of the outer edge of the experimental radial profiles of RC pressure is studied and shown to correspond to exponential growth of the total energy of RC particles on a given L shell with decreasing L. It is shown that during the storms' main phase the ratio of plasma to magnetic field pressure at the RC maximum is practically independent of the storm strength and of L_m. This fact reflects the resistance of the Earth's magnetic field to RC expansion and indicates that during storms the injection of RC particles to small L is limited. During the storms' recovery phase this ratio increases quickly with increasing L_m, reflecting an increased fraction of plasma in the total pressure balance. It is demonstrated that the function L_m(Dst) for the main phase of storms follows from the equations of drift motion of RC ions in electric and magnetic fields, reflecting the dipole character of the magnetic field and the scale invariance of the particle convection pattern near the RC maximum. For the recovery phase it is obtained from the Dessler-Parker-Sckopke relationship. The obtained regularities make it possible to infer the radial profile of RC pressure from ground-based magnetic measurements (data on the Dst variation).

  12. How extreme are extremes?

    NASA Astrophysics Data System (ADS)

    Cucchi, Marco; Petitta, Marcello; Calmanti, Sandro

    2016-04-01

    High temperatures have an impact on the energy balance of any living organism and on the operational capabilities of critical infrastructures. Heat-wave indicators have mainly been developed with the aim of capturing the potential impacts on specific sectors (agriculture, health, wildfires, transport, power generation and distribution). However, the ability to capture the occurrence of extreme temperature events is an essential property of a multi-hazard extreme climate indicator. The aim of this study is to develop a standardized heat-wave indicator that can be combined with other indices in order to describe multiple hazards in a single indicator. The proposed approach can be used to quantify the strength of a given extreme. As a matter of fact, extremes are usually distributed according to exponential or exponential-exponential (double exponential) functions, and it is difficult to quickly assess how strong an extreme event was from its magnitude alone. The proposed approach simplifies the quantitative and qualitative communication of extreme magnitude.

  13. Elastically driven intermittent microscopic dynamics in soft solids

    NASA Astrophysics Data System (ADS)

    Bouzid, Mehdi; Colombo, Jader; Barbosa, Lucas Vieira; Del Gado, Emanuela

    2017-06-01

    Soft solids with tunable mechanical response are at the core of new material technologies, but a crucial limit for applications is their progressive aging over time, which dramatically affects their functionalities. The generally accepted paradigm is that such aging is gradual and its origin is in slower than exponential microscopic dynamics, akin to the ones in supercooled liquids or glasses. Nevertheless, time- and space-resolved measurements have provided contrasting evidence: dynamics faster than exponential, intermittency and abrupt structural changes. Here we use 3D computer simulations of a microscopic model to reveal that the timescales governing stress relaxation, respectively, through thermal fluctuations and elastic recovery are key for the aging dynamics. When thermal fluctuations are too weak, stress heterogeneities frozen-in upon solidification can still partially relax through elastically driven fluctuations. Such fluctuations are intermittent, because of strong correlations that persist over the timescale of experiments or simulations, leading to faster than exponential dynamics.

  14. Exponential Synchronization of Networked Chaotic Delayed Neural Network by a Hybrid Event Trigger Scheme.

    PubMed

    Fei, Zhongyang; Guan, Chaoxu; Gao, Huijun

    2018-06-01

    This paper is concerned with the exponential synchronization of a master-slave chaotic delayed neural network under an event-triggered control scheme. The model is established in a networked control framework, where both external disturbance and network-induced delay are taken into consideration. The desired aim is to synchronize the master and slave systems with limited communication capacity and network bandwidth. In order to save network resources, we adopt a hybrid event-triggered approach, which not only reduces the number of data packages sent out, but also rules out the Zeno phenomenon. By using an appropriate Lyapunov functional, a sufficient stability criterion with an extended dissipativity performance index is proposed for the error system. Moreover, the hybrid event-triggered scheme and the controller are co-designed for the network-based delayed neural network to guarantee exponential synchronization between the master and slave systems. The effectiveness and potential of the proposed results are demonstrated through a numerical example.

  15. On the Occurrence of Mass Inflation for the Einstein-Maxwell-Scalar Field System with a Cosmological Constant and an Exponential Price Law

    NASA Astrophysics Data System (ADS)

    Costa, João L.; Girão, Pedro M.; Natário, José; Silva, Jorge Drumond

    2018-03-01

    In this paper we study the spherically symmetric characteristic initial data problem for the Einstein-Maxwell-scalar field system with a positive cosmological constant in the interior of a black hole, assuming an exponential Price law along the event horizon. More precisely, we construct open sets of characteristic data which, on the outgoing initial null hypersurface (taken to be the event horizon), converge exponentially to a reference Reissner-Nordström black hole at infinity. We prove the stability of the radius function at the Cauchy horizon, and show that, depending on the decay rate of the initial data, mass inflation may or may not occur. In the latter case, we find that the solution can be extended across the Cauchy horizon with continuous metric and Christoffel symbols in {L^2_{loc}}, thus violating the Christodoulou-Chruściel version of strong cosmic censorship.

  16. Adult Age Differences and the Role of Cognitive Resources in Perceptual–Motor Skill Acquisition: Application of a Multilevel Negative Exponential Model

    PubMed Central

    Kennedy, Kristen M.; Rodrigue, Karen M.; Lindenberger, Ulman; Raz, Naftali

    2010-01-01

    The effects of advanced age and cognitive resources on the course of skill acquisition are unclear, and discrepancies among studies may reflect limitations of data analytic approaches. We applied a multilevel negative exponential model to skill acquisition data from 80 trials (four 20-trial blocks) of a pursuit rotor task administered to healthy adults (19–80 years old). The analyses conducted at the single-trial level indicated that the negative exponential function described performance well. Learning parameters correlated with measures of task-relevant cognitive resources on all blocks except the last and with age on all blocks after the second. Thus, age differences in motor skill acquisition may evolve in 2 phases: In the first, age differences are collinear with individual differences in task-relevant cognitive resources; in the second, age differences orthogonal to these resources emerge. PMID:20047985

  17. [Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.

    PubMed

    Takacs, T; Jüttler, B

    2012-11-01

    Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.

  18. Computerized tomography with total variation and with shearlets

    NASA Astrophysics Data System (ADS)

    Garduño, Edgar; Herman, Gabor T.

    2017-04-01

    To reduce the x-ray dose in computerized tomography (CT), many constrained optimization approaches have been proposed aiming at minimizing a regularizing function that measures a lack of consistency with some prior knowledge about the object that is being imaged, subject to a (predetermined) level of consistency with the detected attenuation of x-rays. One commonly investigated regularizing function is total variation (TV), while other publications advocate the use of some type of multiscale geometric transform in the definition of the regularizing function, a particular recent choice for this is the shearlet transform. Proponents of the shearlet transform in the regularizing function claim that the reconstructions so obtained are better than those produced using TV for texture preservation (but may be worse for noise reduction). In this paper we report results related to this claim. In our reported experiments using simulated CT data collection of the head, reconstructions whose shearlet transform has a small ℓ 1-norm are not more efficacious than reconstructions that have a small TV value. Our experiments for making such comparisons use the recently-developed superiorization methodology for both regularizing functions. Superiorization is an automated procedure for turning an iterative algorithm for producing images that satisfy a primary criterion (such as consistency with the observed measurements) into its superiorized version that will produce results that, according to the primary criterion are as good as those produced by the original algorithm, but in addition are superior to them according to a secondary (regularizing) criterion. The method presented for superiorization involving the ℓ 1-norm of the shearlet transform is novel and is quite general: It can be used for any regularizing function that is defined as the ℓ 1-norm of a transform specified by the application of a matrix. Because in the previous literature the split Bregman algorithm is used for similar purposes, a section is included comparing the results of the superiorization algorithm with the split Bregman algorithm.

  19. Exploring Properties of HI Clouds in Dwarf Irregular Galaxies

    NASA Astrophysics Data System (ADS)

    Berger, Clara; Hunter, Deidre Ann

    2018-01-01

    Dwarf irregular galaxies form stars and maintain exponential stellar disks at extremely low gas densities. One proposed mechanism for maintaining such regular outer disks is the scattering of stars off of HI clouds. In order to understand the processes present in dwarf irregular stellar disks, we present a survey of atomic hydrogen clouds in and around a subset of representative galaxies from the LITTLE THINGS survey. We apply a cloud identification program to the 21 cm HI line emission cubes and extract masses, radii, surface densities, and distances of each cloud from the center of the galaxy in the plane of the galaxy. Our data show a wide range of clouds characterized by low surface densities but varied in mass and size. The number of clouds found and the mass of the most massive cloud show no correlation with integrated star formation rates or luminosity in these galaxies. However, they will be used as input for models of stars scattering off of HI clouds to better understand the regular stellar disks in dwarf irregular galaxies. We acknowledge support from the National Science Foundation grant AST-1461200 to Northern Arizona University for Research Experiences for Undergraduates summer internships.

  20. Kinetics of human immunodeficiency virus budding and assembly

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Nguyen, Toan

    2009-03-01

    Human immunodeficiency virus (HIV) belongs to a large family of RNA viruses, the retroviruses. Unlike the budding of regular enveloped viruses, retroviruses bud concurrently with the assembly of retroviral capsids on the cell membrane. The kinetics of HIV (and other retrovirus) budding and assembly is therefore strongly affected by the elastic energy of the membrane and fundamentally different from that of regular viruses. The main result of this work is that the kinetics is tunable from a fast budding process to a slow and effectively trapped partial-budding process by varying the attraction energy of the retroviral proteins (called Gags) relative to the membrane elastic energy. When the Gag-Gag attraction is relatively high, the membrane elastic energy provides a kinetic barrier for the two pieces of the partial capsid to merge. This energy barrier determines the slowest step in the kinetics and the budding time. In the opposite limit, the membrane elastic energy provides not only a kinetic barrier but a free energy barrier. The budding and assembly are effectively trapped at a local free energy minimum, corresponding to a partially budded state. The time scale to escape from this metastable state is exponentially large. In both cases, our results fit experimental measurements well.

  1. LP-stability for the strong solutions of the Navier-Stokes equations in the whole space

    NASA Astrophysics Data System (ADS)

    Beirão da Veiga, H.; Secchi, P.

    1985-10-01

    We consider the motion of a viscous fluid filling the whole space R3, governed by the classical Navier-Stokes equations (1). The existence of global (in time) regular solutions for this system of nonlinear partial differential equations is still an open problem. From both the mathematical and the physical points of view, an interesting property is the stability (or not) of the (eventual) global regular solutions. Here, we assume that v1(t,x) is a solution with initial data a1(x). For small perturbations of a1, we want the solution v1(t,x) to be only slightly perturbed, too. Due to viscosity, it is even expected that the perturbed solution v2(t,x) approaches the unperturbed one as time goes to +infinity. This is precisely the result proved in this paper. To measure the distance between v1(t,x) and v2(t,x) at each time t, suitable norms are introduced (Lp-norms). For fluids filling a bounded vessel, exponential decay of the above distance would be expected; such a strong result is not reasonable for fluids filling the entire space.

  2. Well hydraulics in pumping tests with exponentially decayed rates of abstraction in confined aquifers

    NASA Astrophysics Data System (ADS)

    Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen

    2017-05-01

    Actual field pumping tests often involve variable pumping rates which cannot be handled by the classical constant-rate or constant-head test models and often require a convolution process to interpret the test data. In this study, we propose a semi-analytical model for an exponentially decreasing pumping rate that starts at a certain (higher) rate and eventually stabilizes at a certain (lower) rate, for cases with or without wellbore storage. A striking new feature of the pumping test with an exponentially decayed rate is that the drawdown decreases over a certain period during the intermediate pumping stage, which is never seen in constant-rate or constant-head pumping tests. It was found that the drawdown-time curve associated with an exponentially decayed pumping rate function is bounded by the two asymptotic curves of constant-rate tests with rates equal to the starting and stabilizing rates, respectively. The wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on these characteristics of the time-drawdown curve, we develop a new method to estimate the aquifer parameters by using a genetic algorithm.
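
    The bounding behavior described above (drawdown sandwiched between the two constant-rate asymptotes) can be illustrated with a textbook Theis-type superposition for a confined aquifer without wellbore storage, s(r,t) = (1/4πT) ∫ Q(τ) exp(-r²S/[4T(t-τ)])/(t-τ) dτ. The sketch below uses made-up aquifer and rate parameters and is not the authors' semi-analytical solution.

```python
import numpy as np
from scipy.special import exp1   # Theis well function W(u) = E1(u)

# Assumed, purely illustrative parameters: transmissivity T, storativity S,
# radial distance r, starting rate Q0, stabilized rate Qf, decay constant lam.
T, S, r = 1e-3, 1e-4, 10.0            # m^2/s, dimensionless, m
Q0, Qf, lam = 2e-3, 5e-4, 1e-4        # m^3/s, m^3/s, 1/s

def rate(tau):
    """Exponentially decaying pumping rate Q(t) = Qf + (Q0 - Qf) exp(-lam t)."""
    return Qf + (Q0 - Qf) * np.exp(-lam * tau)

def drawdown_variable_rate(t, n=200_000):
    """Superpose the Theis impulse response over the rate history (midpoint rule)."""
    dtau = t / n
    tau = (np.arange(n) + 0.5) * dtau
    kernel = np.exp(-r**2 * S / (4.0 * T * (t - tau))) / (t - tau)
    return np.sum(rate(tau) * kernel) * dtau / (4.0 * np.pi * T)

def drawdown_constant_rate(t, Q):
    """Classical Theis drawdown for a constant pumping rate Q."""
    u = r**2 * S / (4.0 * T * t)
    return Q * exp1(u) / (4.0 * np.pi * T)

for t in (1e3, 1e4, 1e5):             # seconds
    s = drawdown_variable_rate(t)
    lo, hi = drawdown_constant_rate(t, Qf), drawdown_constant_rate(t, Q0)
    print(f"t = {t:8.0f} s   s = {s:.4f} m   (constant-rate bounds {lo:.4f} .. {hi:.4f} m)")
```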

  3. Optimal Pulse Configuration Design for Heart Stimulation. A Theoretical, Numerical and Experimental Study.

    NASA Astrophysics Data System (ADS)

    Hardy, Neil; Dvir, Hila; Fenton, Flavio

    Existing pacemakers consider the rectangular pulse to be the optimal form of stimulation current. However, other waveforms could save energy while still stimulating the heart. We aim to find the optimal waveform for pacemaker use and to offer a theoretical explanation for its advantage. Since the pacemaker battery is a charge source, we evaluate the stimulation current waveforms with respect to total charge delivery. In this talk we present theoretical analysis and numerical simulations in which myocyte ion-channel currents act as an additional source of charge that adds to the external stimulating charge. We therefore find that, as the action potential emerges, the external stimulating current can be reduced exponentially. We then performed experimental studies in rabbit and cat hearts and showed that truncated exponential pulses with less total charge can indeed still induce activation in the heart. From the experiments, we present curves showing the savings in charge as a function of the exponential waveform, and we calculate that the longevity of the pacemaker battery would be ten times higher for the exponential current than for the rectangular waveform. Thanks to the Petit Undergraduate Research Scholars Program and NSF# 1413037.

  4. The shock waves in decaying supersonic turbulence

    NASA Astrophysics Data System (ADS)

    Smith, M. D.; Mac Low, M.-M.; Zuev, J. M.

    2000-04-01

    We here analyse numerical simulations of supersonic, hypersonic and magnetohydrodynamic turbulence that is free to decay. Our goals are to understand the dynamics of the decay and the characteristic properties of the shock waves produced. This will be useful for interpretation of observations of both motions in molecular clouds and sources of non-thermal radiation. We find that decaying hypersonic turbulence possesses an exponential tail of fast shocks and an exponential decay in time, i.e. the number of shocks is proportional to t exp(-ktv) for shock velocity jump v and mean initial wavenumber k. In contrast to the velocity gradients, the velocity probability distribution function remains Gaussian with a more complex decay law. The energy is dissipated not by fast shocks but by a large number of low Mach number shocks. The power loss peaks near a low-speed turn-over in an exponential distribution. An analytical extension of the mapping closure technique is able to predict the basic decay features. Our analytic description of the distribution of shock strengths should prove useful for direct modelling of observable emission. We note that an exponential distribution of shocks such as we find will, in general, generate very low excitation shock signatures.

  5. Regularity Results for a Class of Functionals with Non-Standard Growth

    NASA Astrophysics Data System (ADS)

    Acerbi, Emilio; Mingione, Giuseppe

    We consider the integral functional ∫ f(x, Du) dx under non-standard growth assumptions that we call p(x) type: namely, we assume that |z|^{p(x)} ≤ f(x, z) ≤ L(1 + |z|^{p(x)}), a relevant model case being the functional ∫ |Du|^{p(x)} dx. Under sharp assumptions on the continuous function p(x) > 1 we prove regularity of minimizers. Energies exhibiting this growth appear in several models from mathematical physics.

  6. Algebraic approach to electronic spectroscopy and dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toutounji, Mohamad

    Lie algebra, Zassenhaus, and parameter differentiation techniques are utilized to break up the exponential of a bilinear Hamiltonian operator into a product of noncommuting exponential operators by virtue of the theory of Wei and Norman [J. Math. Phys. 4, 575 (1963); Proc. Am. Math. Soc. 15, 327 (1964)]. There are about three different ways to find the Zassenhaus exponents, namely, binomial expansion, the Suzuki formula, and the q-exponential transformation. A fourth, and most reliable, method is provided. Since the linearly displaced and distorted (curvature change upon excitation/emission) Hamiltonian and the spin-boson Hamiltonian may be classified as bilinear Hamiltonians, the presented algebraic algorithm (exponential operator disentanglement exploiting the six-dimensional Lie algebra case) should be useful in spin-boson problems. Only the linearly displaced and distorted Hamiltonian exponential is treated here. While the spin-boson model is used here only as a demonstration of the idea, the present approach is more general and powerful than the specific example treated. The optical linear dipole moment correlation function is algebraically derived using the above-mentioned methods and coherent states. Coherent states are eigenvectors of the bosonic lowering operator a and not of the raising operator a^+. While exp(a^+) translates coherent states, the operation of exp(a^+ a^+) on coherent states has always been a challenge, as a^+ has no eigenvectors. Three approaches to, and the results of, that operation are provided. Linear absorption spectra are derived, calculated, and discussed. The linear dipole moment correlation function for the pure quadratic coupling case is expressed in terms of Legendre polynomials to better show the even vibronic transitions in the absorption spectrum. A comparison of the present line shapes to those calculated by other methods is provided. Franck-Condon factors for both linear and quadratic couplings are exactly accounted for by the calculated linear absorption spectra. This new methodology should easily pave the way to calculating the four-point correlation function, F(τ1, τ2, τ3, τ4), from which the optical nonlinear response function may be procured, since evaluating F(τ1, τ2, τ3, τ4) amounts to evaluating the optical linear dipole moment correlation function iteratively over different time intervals, which should allow calculating various optical nonlinear temporal/spectral signals.

  7. Existence and global exponential stability of periodic solution to BAM neural networks with periodic coefficients and continuously distributed delays

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Chen, A.; Zhou, Y.

    2005-08-01

    By using the continuation theorem of coincidence degree theory and Liapunov function, we obtain some sufficient criteria to ensure the existence and global exponential stability of periodic solution to the bidirectional associative memory (BAM) neural networks with periodic coefficients and continuously distributed delays. These results improve and generalize the works of papers [J. Cao, L. Wang, Phys. Rev. E 61 (2000) 1825] and [Z. Liu, A. Chen, J. Cao, L. Huang, IEEE Trans. Circuits Systems I 50 (2003) 1162]. An example is given to illustrate that the criteria are feasible.

  8. Observational constraints on tachyonic chameleon dark energy model

    NASA Astrophysics Data System (ADS)

    Banijamali, A.; Bellucci, S.; Fazlpour, B.; Solbi, M.

    2018-03-01

    It has recently been shown that the tachyonic chameleon model of dark energy, in which the tachyon scalar field is non-minimally coupled to matter, admits a stable scaling attractor solution that could give rise to the late-time accelerated expansion of the universe and hence alleviate the coincidence problem. In the present work, we use data from Type Ia supernovae (SN Ia) and baryon acoustic oscillations to place constraints on the model parameters. In our analysis we consider general exponential and non-exponential forms for the non-minimal coupling function and the tachyonic potential, and show that the scenario is compatible with observations.

  9. A spatial scan statistic for survival data based on Weibull distribution.

    PubMed

    Bhatt, Vijaya; Tiwari, Neeraj

    2014-05-20

    The spatial scan statistic has been developed as a geographical cluster detection analysis tool for different types of data sets such as Bernoulli, Poisson, ordinal, normal and exponential. We propose a scan statistic for survival data based on Weibull distribution. It may also be used for other survival distributions, such as exponential, gamma, and log normal. The proposed method is applied on the survival data of tuberculosis patients for the years 2004-2005 in Nainital district of Uttarakhand, India. Simulation studies reveal that the proposed method performs well for different survival distribution functions. Copyright © 2013 John Wiley & Sons, Ltd.

  10. Synchronised firing patterns in a random network of adaptive exponential integrate-and-fire neuron model.

    PubMed

    Borges, F S; Protachevicz, P R; Lameu, E L; Bonetti, R C; Iarosz, K C; Caldas, I L; Baptista, M S; Batista, A M

    2017-06-01

    We have studied neuronal synchronisation in a random network of adaptive exponential integrate-and-fire neurons. We study how spiking or bursting synchronous behaviour appears as a function of the coupling strength and the probability of connections, by constructing parameter spaces that identify these synchronous behaviours from measurements of the inter-spike interval and the calculation of the order parameter. Moreover, we verify the robustness of synchronisation by applying an external perturbation to each neuron. The simulations show that bursting synchronisation is more robust than spike synchronisation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Cosmological models with a hybrid scale factor in an extended gravity theory

    NASA Astrophysics Data System (ADS)

    Mishra, B.; Tripathy, S. K.; Tarai, Sankarsan

    2018-03-01

    A general formalism to investigate Bianchi type VIh universes is developed in an extended theory of gravity. A minimally coupled geometry and matter field is considered, with a rescaled function f(R,T) substituted in place of the Ricci scalar R in the geometrical action. Dynamical aspects of the models are discussed by using a hybrid scale factor (HSF) that behaves as a power law at early epochs and as an exponential at late epochs. The power-law and exponential behaviors appear as two extreme cases of the present model.
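
    A commonly used hybrid scale factor takes the product form a(t) = a0 t^α e^{βt}, which gives a Hubble rate H = ȧ/a = α/t + β that is power-law dominated at early times and de Sitter-like at late times. The snippet below evaluates this generic parametrization with illustrative values of a0, α, and β; the paper's exact functional form and parameters may differ.

```python
import numpy as np

# Generic hybrid scale factor a(t) = a0 * t**alpha * exp(beta * t)
# (illustrative parameters; not taken from the paper).
a0, alpha, beta = 1.0, 0.5, 0.1

def scale_factor(t):
    return a0 * t**alpha * np.exp(beta * t)

def hubble(t):
    # H = d(ln a)/dt = alpha / t + beta
    return alpha / t + beta

t = np.array([1e-3, 1e-1, 1e1, 1e3])
print(hubble(t))          # ~alpha/t at early times, tends to beta at late times
print(scale_factor(t))
```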

  12. Individual tree-diameter growth model for the Northeastern United States

    Treesearch

    Richard M. Teck; Donald E. Hilt

    1991-01-01

    Describes a distance-independent individual-tree diameter growth model for the Northeastern United States. Diameter growth is predicted in two steps using a two parameter, sigmoidal growth function modified by a one parameter exponential decay function with species-specific coefficients. Coefficients are presented for 28 species groups. The model accounts for...

  13. Calculus of Elementary Functions, Part II. Student Text. Revised Edition.

    ERIC Educational Resources Information Center

    Herriot, Sarah T.; And Others

    This course is intended for students who have a thorough knowledge of college preparatory mathematics, including algebra, axiomatic geometry, trigonometry, and analytic geometry. This text, Part II, contains material designed to follow Part I. Chapters included in this text are: (6) Derivatives of Exponential and Related Functions; (7) Area and…

  14. Chem Ed Compacts

    ERIC Educational Resources Information Center

    Wolf, Walter A., Ed.

    1976-01-01

    Presents three activities: (1) the investigation of the purity and stability of nicotinamide and flavin coenzymes; (2) desk-computer fitting of a two-exponential function; and (3) an interesting and inexpensive solubility product experiment for introductory chemistry. (RH)

  15. Changes in functional connectivity within the fronto-temporal brain network induced by regular and irregular Russian verb production

    PubMed Central

    Kireev, Maxim; Slioussar, Natalia; Korotkov, Alexander D.; Chernigovskaya, Tatiana V.; Medvedev, Svyatoslav V.

    2015-01-01

    Functional connectivity between brain areas involved in the processing of complex language forms remains largely unexplored. Contributing to the debate about neural mechanisms underlying regular and irregular inflectional morphology processing in the mental lexicon, we conducted an fMRI experiment in which participants generated forms from different types of Russian verbs and nouns as well as from nonce stimuli. The data were subjected to a whole brain voxel-wise analysis of context dependent changes in functional connectivity [the so-called psychophysiological interaction (PPI) analysis]. Unlike previously reported subtractive results that reveal functional segregation between brain areas, PPI provides complementary information showing how these areas are functionally integrated in a particular task. To date, PPI evidence on inflectional morphology has been scarce and only available for inflectionally impoverished English verbs in a same-different judgment task. Using PPI here in conjunction with a production task in an inflectionally rich language, we found that functional connectivity between the left inferior frontal gyrus (LIFG) and bilateral superior temporal gyri (STG) was significantly greater for regular real verbs than for irregular ones. Furthermore, we observed a significant positive covariance between the number of mistakes in irregular real verb trials and the increase in functional connectivity between the LIFG and the right anterior cingulate cortex in these trials, as compared to regular ones. Our results therefore allow for dissociation between regularity and processing difficulty effects. These results, on the one hand, shed new light on the functional interplay within the LIFG-bilateral STG language-related network and, on the other hand, call for partial reconsideration of some of the previous findings while stressing the role of functional temporo-frontal connectivity in complex morphological processes. PMID:25741262

  16. Two-Loop Gell-Mann Function for General Renormalizable N = 1 Supersymmetric Theory, Regularized by Higher Derivatives

    NASA Astrophysics Data System (ADS)

    Shevtsova, Ekaterina

    2011-10-01

    For the general renormalizable N=1 supersymmetric Yang-Mills theory, regularized by higher covariant derivatives, the two-loop β-function is calculated. It is shown that all integrals needed to obtain it are integrals of total derivatives.

  17. Computerized glow curve deconvolution of thermoluminescent emission from polyminerals of Jamaica Mexican flower

    NASA Astrophysics Data System (ADS)

    Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.

    The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower, or roselle (Hibiscus sabdariffa L.), belonging to the Malvaceae family and of Mexican origin. The TL emission properties of the polymineral fraction in powder form were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves were analysed using computerized glow curve deconvolution (CGCD), assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous, exponential distribution of traps is reported, together with the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for a temperature-independent frequency factor s and for s being a function of temperature.

  18. Thermal dynamics on the lattice with exponentially improved accuracy

    NASA Astrophysics Data System (ADS)

    Pawlowski, Jan M.; Rothkopf, Alexander

    2018-03-01

    We present a novel simulation prescription for thermal quantum fields on a lattice that operates directly in imaginary frequency space. By distinguishing initial conditions from quantum dynamics it provides access to correlation functions also outside of the conventional Matsubara frequencies ω_n = 2πnT. In particular it resolves their frequency dependence between ω = 0 and ω_1 = 2πT, where the thermal physics ω ∼ T of e.g. transport phenomena is dominantly encoded. Real-time spectral functions are related to these correlators via an integral transform with rational kernel, so that their unfolding from the novel simulation data is exponentially improved compared to standard Euclidean simulations. We demonstrate this improvement within a non-trivial 0+1-dimensional quantum mechanical toy model and show that spectral features inaccessible in standard Euclidean simulations are quantitatively captured.

  19. Memory feedback PID control for exponential synchronisation of chaotic Lur'e systems

    NASA Astrophysics Data System (ADS)

    Zhang, Ruimei; Zeng, Deqiang; Zhong, Shouming; Shi, Kaibo

    2017-09-01

    This paper studies the problem of exponential synchronisation of chaotic Lur'e systems (CLSs) via memory feedback proportional-integral-derivative (PID) control scheme. First, a novel augmented Lyapunov-Krasovskii functional (LKF) is constructed, which can make full use of the information on time delay and activation function. Second, improved synchronisation criteria are obtained by using new integral inequalities, which can provide much tighter bounds than what the existing integral inequalities can produce. In comparison with existing results, in which only proportional control or proportional derivative (PD) control is used, less conservative results are derived for CLSs by PID control. Third, the desired memory feedback controllers are designed in terms of the solution to linear matrix inequalities. Finally, numerical simulations of Chua's circuit and neural network are provided to show the effectiveness and advantages of the proposed results.

  20. n-Iterative Exponential Forgetting Factor for EEG Signals Parameter Estimation

    PubMed Central

    Palma Orozco, Rosaura

    2018-01-01

    Electroencephalogram (EEG) signals are of interest because of their relationship with physiological activities, allowing a description of motion, speaking, or thinking. Substantial research has been devoted to exploiting EEG with classification or prediction algorithms based on parameters that help describe the signal behavior. Feature extraction is therefore important, but it is complicated for the Parameter Estimation (PE)-System Identification (SI) process because the signals show nonstationary characteristics when treated with an average approximation. For PE, we compare three iterative-recursive forms of the Exponential Forgetting Factor (EFF) combined with a linear function to identify a synthetic stochastic signal. The form with the best results, as judged by the functional error, is then applied to approximate an EEG signal in a simple classification example, showing the effectiveness of our proposal. PMID:29568310
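
    For readers unfamiliar with the exponential forgetting factor, the sketch below implements the standard recursive least-squares update with forgetting factor λ for a linear-in-parameters model y_k = φ_kᵀθ + noise. This is the generic textbook recursion, not a reproduction of the paper's three iterative-recursive variants; the signal, parameters, and λ value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-in-parameters signal y_k = phi_k . theta_true + noise
theta_true = np.array([0.8, -0.3])
N = 500
phi = rng.normal(size=(N, 2))
y = phi @ theta_true + 0.05 * rng.normal(size=N)

# Recursive least squares with exponential forgetting factor lam (0 < lam <= 1)
lam = 0.98
theta = np.zeros(2)
P = 1e3 * np.eye(2)                       # large initial covariance = weak prior
for k in range(N):
    ph = phi[k]
    K = P @ ph / (lam + ph @ P @ ph)      # gain vector
    theta = theta + K * (y[k] - ph @ theta)
    P = (P - np.outer(K, ph) @ P) / lam   # covariance update with forgetting

print("estimated:", theta, " true:", theta_true)
```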

  1. The Superstatistical Nature and Interoccurrence Time of Atmospheric Mercury Concentration Fluctuations

    NASA Astrophysics Data System (ADS)

    Carbone, F.; Bruno, A. G.; Naccarato, A.; De Simone, F.; Gencarelli, C. N.; Sprovieri, F.; Hedgecock, I. M.; Landis, M. S.; Skov, H.; Pfaffhuber, K. A.; Read, K. A.; Martin, L.; Angot, H.; Dommergue, A.; Magand, O.; Pirrone, N.

    2018-01-01

    The probability density function (PDF) of the time intervals between subsequent extreme events in atmospheric Hg0 concentration data series from different latitudes has been investigated. The Hg0 dynamic possesses a long-term memory autocorrelation function. Above a fixed threshold Q in the data, the PDFs of the interoccurrence time of the Hg0 data are well described by a Tsallis q-exponential function. This PDF behavior has been explained in the framework of superstatistics, where the competition between multiple mesoscopic processes affects the macroscopic dynamics. An extensive parameter μ, encompassing all possible fluctuations related to mesoscopic phenomena, has been identified. It follows a χ2 distribution, indicative of the superstatistical nature of the overall process. Shuffling the data series destroys the long-term memory, the distributions become independent of Q, and the PDFs collapse on to the same exponential distribution. The possible central role of atmospheric turbulence on extreme events in the Hg0 data is highlighted.
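
    For reference, the Tsallis q-exponential mentioned above is e_q(x) = [1 + (1 - q)x]^{1/(1-q)}, which reduces to the ordinary exponential as q → 1; a q-exponential interoccurrence-time density with 1 < q < 2 can be written f(τ) = (2 - q) λ e_q(-λτ). The snippet below uses this generic form with made-up q and λ (the paper's fitted parameters are not reproduced) and checks the normalization and the q → 1 limit numerically.

```python
import numpy as np
from scipy.integrate import quad

def q_exp(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0.0, base ** (1.0 / (1.0 - q)), 0.0)

def interoccurrence_pdf(tau, q=1.4, lam=0.2):
    """q-exponential density (2 - q) * lam * e_q(-lam * tau) for 1 < q < 2."""
    return (2.0 - q) * lam * q_exp(-lam * tau, q)

print(quad(lambda t: interoccurrence_pdf(t, q=1.4, lam=0.2), 0, np.inf)[0])  # ~1
print(q_exp(-1.0, 1.0001), np.exp(-1.0))  # q -> 1 recovers the exponential
```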

  2. 15-digit accuracy calculations of Chandrasekhar's H-function for isotropic scattering by means of the double exponential formula

    NASA Astrophysics Data System (ADS)

    Kawabata, Kiyoshi

    2016-12-01

    This work shows that it is possible to calculate numerical values of the Chandrasekhar H-function for isotropic scattering with at least 15-digit accuracy by making use of the double exponential formula (DE formula) of Takahashi and Mori (Publ. RIMS, Kyoto Univ. 9:721, 1974) instead of the Gauss-Legendre quadrature employed in the numerical scheme of Kawabata and Limaye (Astrophys. Space Sci. 332:365, 2011), while simultaneously taking precautionary measures to minimize the effects of loss of significant digits, particularly in the case of near-conservative scattering, and of errors in the values returned by the library functions supplied by the compilers in use. The results of our calculations are presented for 18 selected values of the single scattering albedo π0 and 22 values of an angular variable μ, the cosine of the zenith angle θ specifying the direction of radiation incident on or emergent from semi-infinite media.
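
    The double exponential (tanh-sinh) formula maps (0, 1) onto the whole real line via x = (1/2)[1 + tanh((π/2) sinh t)], after which the plain trapezoidal rule converges extremely fast even for integrands with endpoint singularities. The toy sketch below is not the authors' H-function code; it only illustrates the quadrature idea on the test integral ∫₀¹ dx/√(x(1-x)) = π, with the step size and truncation range chosen by hand.

```python
import numpy as np

def tanh_sinh_nodes(h=0.05, t_max=4.0):
    """Nodes x_k in (0,1), their complements 1-x_k, and trapezoidal weights w_k
    for the double exponential (tanh-sinh) formula of Takahashi and Mori."""
    t = np.arange(-t_max, t_max + h, h)
    u = 0.5 * np.pi * np.sinh(t)
    x = 1.0 / (1.0 + np.exp(-2.0 * u))           # = (1 + tanh(u)) / 2
    one_minus_x = 1.0 / (1.0 + np.exp(2.0 * u))  # computed separately to avoid cancellation
    w = h * np.pi * np.cosh(t) * x * one_minus_x
    return x, one_minus_x, w

x, omx, w = tanh_sinh_nodes()
# Endpoint-singular test integral: int_0^1 dx / sqrt(x(1-x)) = pi
approx = np.sum(w / np.sqrt(x * omx))
print(approx, abs(approx - np.pi))   # error close to machine precision
```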

  3. The acquisition of conditioned responding.

    PubMed

    Harris, Justin A

    2011-04-01

    This report analyzes the acquisition of conditioned responses in rats trained in a magazine approach paradigm. Following the suggestion by Gallistel, Fairhurst, and Balsam (2004), Weibull functions were fitted to the trial-by-trial response rates of individual rats. These showed that the emergence of responding was often delayed, after which the response rate would increase relatively gradually across trials. The fit of the Weibull function to the behavioral data of each rat was equaled by that of a cumulative exponential function incorporating a response threshold. Thus, the growth in conditioning strength on each trial can be modeled by the derivative of the exponential--a difference term of the form used in many models of associative learning (e.g., Rescorla & Wagner, 1972). Further analyses, comparing the acquisition of responding with a continuously reinforced stimulus (CRf) and a partially reinforced stimulus (PRf), provided further evidence in support of the difference term. In conclusion, the results are consistent with conventional models that describe learning as the growth of associative strength, incremented on each trial by an error-correction process.

  4. A function space framework for structural total variation regularization with applications in inverse problems

    NASA Astrophysics Data System (ADS)

    Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas

    2018-06-01

    In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.

  5. Training Regular Education Personnel To Be Special Education Consultants to Other Regular Education Personnel in Rural Settings.

    ERIC Educational Resources Information Center

    McIntosh, Dean K.; Raymond, Gail I.

    The Program for Exceptional Children of the University of South Carolina developed a project to address the need for an improved service delivery model for handicapped students in rural South Carolina. The project trained regular elementary teachers at the master's degree level to function as consultants to other regular classroom teachers with…

  6. Application of Turchin's method of statistical regularization

    NASA Astrophysics Data System (ADS)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.
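
    Turchin's statistical regularization is essentially a Bayesian relative of Tikhonov regularization in which a smoothness prior determines the regularization strength. As a minimal illustration of the underlying idea only, the sketch below does a plain Tikhonov solve of a deconvolution problem with a second-difference prior and a hand-picked regularization parameter; it is not the Turchin algorithm described in the article, and the apparatus function and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Forward model: a smooth signal convolved with a Gaussian apparatus function, plus noise.
npts = 200
x = np.linspace(0.0, 1.0, npts)
signal = np.exp(-((x - 0.35) / 0.05) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.08) ** 2)
K = np.exp(-((x[:, None] - x[None, :]) / 0.04) ** 2)
K /= K.sum(axis=1, keepdims=True)            # each row integrates to one
data = K @ signal + 0.01 * rng.normal(size=npts)

# Tikhonov solve with a second-difference (smoothness) prior:
#   minimize ||K f - data||^2 + alpha ||L f||^2
L = np.diff(np.eye(npts), n=2, axis=0)       # discrete second-derivative operator
alpha = 1e-2                                  # hand-picked here; Turchin's method infers the strength
f_hat = np.linalg.solve(K.T @ K + alpha * L.T @ L, K.T @ data)

print("relative reconstruction error:",
      np.linalg.norm(f_hat - signal) / np.linalg.norm(signal))
```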

  7. Piecewise exponential models to assess the influence of job-specific experience on the hazard of acute injury for hourly factory workers

    PubMed Central

    2013-01-01

    Background: An inverse relationship between experience and risk of injury has been observed in many occupations. Due to statistical challenges, however, it has been difficult to characterize the role of experience on the hazard of injury. In particular, because the time observed up to injury is equivalent to the amount of experience accumulated, the baseline hazard of injury becomes the main parameter of interest, excluding Cox proportional hazards models as applicable methods for consideration. Methods: Using a data set of 81,301 hourly production workers of a global aluminum company at 207 US facilities, we compared competing parametric models for the baseline hazard to assess whether experience affected the hazard of injury at hire and after later job changes. Specific models considered included the exponential, Weibull, and two (a hypothesis-driven and a data-driven) two-piece exponential models to formally test the null hypothesis that experience does not impact the hazard of injury. Results: We highlighted the advantages of our comparative approach and the interpretability of our selected model: a two-piece exponential model that allowed the baseline hazard of injury to change with experience. Our findings suggested a 30% increase in the hazard in the first year after job initiation and/or change. Conclusions: Piecewise exponential models may be particularly useful in modeling risk of injury as a function of experience and have the additional benefit of interpretability over other similarly flexible models. PMID:23841648
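
    A two-piece exponential (piecewise constant hazard) model of the kind described above lets the hazard take one value before an experience cut point and another after it. The sketch below is a generic illustration with an assumed one-year cut point and made-up rates roughly 30% apart, not the study's fitted model; it simply writes down the hazard and the corresponding survival function.

```python
import numpy as np

# Assumed illustration: hazard of injury per year of job-specific experience,
# elevated during the first year after job initiation or change.
CUT = 1.0                  # years of experience at which the hazard changes
LAM1, LAM2 = 0.13, 0.10    # made-up hazards: ~30% higher in the first year

def hazard(t):
    t = np.asarray(t, dtype=float)
    return np.where(t < CUT, LAM1, LAM2)

def survival(t):
    """S(t) = exp(-integral of the piecewise-constant hazard up to t)."""
    t = np.asarray(t, dtype=float)
    cum = np.where(t < CUT, LAM1 * t, LAM1 * CUT + LAM2 * (t - CUT))
    return np.exp(-cum)

t = np.array([0.5, 1.0, 2.0, 5.0])
print(hazard(t), survival(t))
```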

  8. A spatially adaptive total variation regularization method for electrical resistance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2015-12-01

    The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.

  9. Numerical Differentiation of Noisy, Nonsmooth Data

    DOE PAGES

    Chartrand, Rick

    2011-01-01

    We consider the problem of differentiating a function specified by noisy data. Regularizing the differentiation process avoids the noise amplification of finite-difference methods. We use total-variation regularization, which allows for discontinuous solutions. The resulting simple algorithm accurately differentiates noisy functions, including those which have a discontinuous derivative.
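
    A minimal version of total-variation-regularized differentiation can be phrased as minimizing (1/2)‖Au - f‖² + α Σ√((Du)² + ε²) over the derivative u, where A is numerical antidifferentiation (a cumulative sum) and D a forward difference. The sketch below is a simplified smoothed-TV variant solved with a generic quasi-Newton routine, not the algorithm of the paper; the test function, noise level, and α are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Noisy samples of f(x) = |x - 0.5|, whose derivative is a step function
n = 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
f = np.abs(x - 0.5) + 0.01 * rng.normal(size=n)

alpha, eps = 1e-3, 1e-6

def objective(u):
    # A u: antiderivative of u by cumulative sum, matched to f - f[0]
    Au = np.cumsum(u) * dx
    resid = Au - (f - f[0])
    du = np.diff(u)
    tv = np.sum(np.sqrt(du**2 + eps**2))   # smoothed total variation of u
    return 0.5 * np.sum(resid**2) + alpha * tv

u0 = np.gradient(f, dx)                    # noisy finite-difference initial guess
res = minimize(objective, u0, method="L-BFGS-B")
u_tv = res.x                               # should be close to the step sign(x - 0.5)
print("mean |u - true|:", np.mean(np.abs(u_tv - np.sign(x - 0.5))))
```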

  10. Diffusive processes in a stochastic magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H.; Vlad, M.; Vanden Eijnden, E.

    1995-05-01

    The statistical representation of a fluctuating (stochastic) magnetic field configuration is studied in detail. The Eulerian correlation functions of the magnetic field are determined, taking into account all geometrical constraints: these objects form a nondiagonal matrix. The Lagrangian correlations, within the reasonable Corrsin approximation, are reduced to a single scalar function, determined by an integral equation. The mean square perpendicular deviation of a geometrical point moving along a perturbed field line is determined by a nonlinear second-order differential equation. The separation of neighboring field lines in a stochastic magnetic field is studied. We find exponentiation lengths of both signs describing, in particular, a decay (on the average) of any initial anisotropy. The vanishing sum of these exponentiation lengths ensures the existence of an invariant which was overlooked in previous works. Next, the separation of a particle's trajectory from the magnetic field line to which it was initially attached is studied by a similar method. Here too an initial phase of exponential separation appears. Assuming the existence of a final diffusive phase, anomalous diffusion coefficients are found for both weakly and strongly collisional limits. The latter is identical to the well known Rechester-Rosenbluth coefficient, which is obtained here by a more quantitative (though not entirely deductive) treatment than in earlier works.

  11. μ SR studies of the extended kagome systems YBaCo4O7+δ (δ = 0 and 0.1)

    NASA Astrophysics Data System (ADS)

    Lee, Suheon; Lee, Wonjun; Mitchell, John; Choi, Kwang-Yong

    We present a μSR study of the extended kagome systems YBaCo4O7+δ (δ = 0 and 0.1), which are made up of an alternating stacking of triangular and kagome layers. The parent material YBaCo4O7.0 undergoes a structural phase transition at 310 K, releasing geometrical frustration and thereby stabilizing an antiferromagnetically ordered state below TN = 106 K. The μSR spectra of YBaCo4O7.0 exhibit a loss of initial asymmetry and the development of a fast relaxation component below TN = 111 K. This indicates that the Co spins in the kagome planes remain in an inhomogeneous and dynamically fluctuating state down to 4 K, while the triangular spins order antiferromagnetically below TN. The nonstoichiometric YBaCo4O7.1 compound, which shows no magnetic ordering, exhibits disparate spin dynamics between the fast-cooling (10 K/min) and slow-cooling (1 K/min) procedures. While the fast-cooled μSR spectra show a simple exponential decay, the slow-cooled spectra are described by a sum of a simple exponential function and a stretched exponential function. These results are in agreement with the occurrence of phase separation between interstitial oxygen-rich and oxygen-poor regions in the slow-cooling measurements.
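
    Fitting a relaxation spectrum with the sum of a simple exponential and a stretched exponential, as described above, is a routine curve-fitting task of the form A1 e^{-λ1 t} + A2 e^{-(λ2 t)^β}. The sketch below fits synthetic data with made-up amplitudes and rates; it is not the measured YBaCo4O7.1 spectra or the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def polarization(t, a1, lam1, a2, lam2, beta):
    """Sum of a simple exponential and a stretched exponential relaxation."""
    return a1 * np.exp(-lam1 * t) + a2 * np.exp(-(lam2 * t) ** beta)

rng = np.random.default_rng(3)
t = np.linspace(0.02, 10.0, 300)          # e.g. microseconds
true = (0.12, 0.8, 0.10, 2.5, 0.6)        # made-up parameters
data = polarization(t, *true) + 0.003 * rng.normal(size=t.size)

p0 = (0.1, 1.0, 0.1, 2.0, 0.7)            # rough initial guess
popt, pcov = curve_fit(polarization, t, data, p0=p0)
print(np.round(popt, 3))
```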

  12. Estimation of the light field inside photosynthetic microorganism cultures through Mittag-Leffler functions at depleted light conditions

    NASA Astrophysics Data System (ADS)

    Fuente, David; Lizama, Carlos; Urchueguía, Javier F.; Conejero, J. Alberto

    2018-01-01

    Light attenuation within suspensions of photosynthetic microorganisms has been widely described by the Lambert-Beer equation. However, at depths where most of the light has been absorbed by the cells, light decay deviates from the exponential behaviour and shows a weaker attenuation than a purely exponential fall would give. This discrepancy can be modelled through the Mittag-Leffler function, extending the Lambert-Beer law via a tuning parameter α that takes the attenuation process into account. In this work, we describe a fractional Lambert-Beer law to estimate light attenuation within cultures of the model organism Synechocystis sp. PCC 6803. Indeed, we benchmark the measured light field inside cultures of two different Synechocystis strains, namely the wild type and the antenna mutant strain called Olive, at five different cell densities against our in silico results. The Mittag-Leffler hyper-parameter α that best fits the data is 0.995, close to the exponential case. One of the most striking results to emerge from this work is that, unlike prior literature on the subject, it provides experimental evidence of the validity of fractional calculus for determining the light field. We show that by applying the fractional Lambert-Beer law to describe light attenuation, we are able to properly model light decay in suspensions of photosynthetic microorganisms.
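
    The one-parameter Mittag-Leffler function is E_α(z) = Σ_{n≥0} z^n / Γ(αn + 1), with E_1(z) = e^z; in a fractional Lambert-Beer picture the exponential attenuation exp(-kz) is replaced by a Mittag-Leffler decay such as E_α(-(kz)^α). The exact parametrization used in the paper may differ, and the attenuation coefficient below is invented. A truncated-series evaluation, adequate for moderate arguments, looks like this:

```python
import numpy as np
from scipy.special import gamma

def mittag_leffler(z, alpha, n_terms=100):
    """One-parameter Mittag-Leffler function E_alpha(z) via its power series.
    Accurate for moderate |z|; dedicated algorithms are needed for large |z|."""
    n = np.arange(n_terms)
    return np.sum(np.power(z, n) / gamma(alpha * n + 1.0))

# alpha = 1 recovers the ordinary exponential (Lambert-Beer limit)
print(mittag_leffler(-2.0, 1.0), np.exp(-2.0))

# Fractional attenuation profile, e.g. I(z)/I0 = E_alpha(-(k * z)**alpha)
k, alpha = 1.5, 0.995
depths = np.array([0.0, 0.5, 1.0, 2.0])
print([round(mittag_leffler(-(k * d) ** alpha, alpha), 4) for d in depths])
```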

  13. Stellar Surface Brightness Profiles of Dwarf Galaxies

    NASA Astrophysics Data System (ADS)

    Herrmann, K. A.

    2014-03-01

    Radial stellar surface brightness profiles of spiral galaxies can be classified into three types: (I) single exponential, or the light falls off with one exponential out to a break radius and then falls off (II) more steeply (“truncated”), or (III) less steeply (“anti-truncated”). Why there are three different radial profile types is still a mystery, including why light falls off as an exponential at all. Profile breaks are also found in dwarf disks, but some dwarf Type IIs are flat or increasing (FI) out to a break before falling off. I have been re-examining the multi-wavelength stellar disk profiles of 141 dwarf galaxies, primarily from Hunter & Elmegreen (2004, 2006). Each dwarf has data in up to 11 wavelength bands: FUV and NUV from GALEX, UBVJHK and Hα from ground-based observations, and 3.6 and 4.5μm from Spitzer. Here I highlight some results from a semi-automatic fitting of this data set including: (1) statistics of break locations and other properties as a function of wavelength and profile type, (2) color trends and radial mass distribution as a function of profile type, and (3) the relationship of the break radius to the kinematics and density profiles of atomic hydrogen gas in the 40 dwarfs of the LITTLE THINGS subsample.

  14. From Experiment to Theory: What Can We Learn from Growth Curves?

    PubMed

    Kareva, Irina; Karev, Georgy

    2018-01-01

    Finding an appropriate functional form to describe population growth based on key properties of a described system allows one to make justified predictions about future population development. This information can be of vital importance in all areas of research, ranging from cell growth to global demography. Here, we use this connection between theory and observation to pose the following question: what can we infer about intrinsic properties of a population (i.e., degree of heterogeneity, or dependence on external resources) based on which growth function best fits its growth dynamics? We investigate several nonstandard classes of multi-phase growth curves that capture different stages of population growth; these models include hyperbolic-exponential, exponential-linear, and exponential-linear-saturation growth patterns. The constructed models account explicitly for the process of natural selection within inhomogeneous populations. Based on the underlying hypotheses for each of the models, we identify whether a population that is best fit by a particular curve is more likely to be homogeneous or heterogeneous, grow in a density-dependent or frequency-dependent manner, and whether it depends on external resources during any or all stages of its development. We apply these predictions to cancer cell growth and demographic data obtained from the literature. Our theory, if confirmed, can provide an additional biomarker and a predictive tool to complement experimental research.

  15. Effective field theory dimensional regularization

    NASA Astrophysics Data System (ADS)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.

  16. Theoretical analysis of oscillatory terms in lattice heat-current time correlation functions and their contributions to thermal conductivity

    NASA Astrophysics Data System (ADS)

    Pereverzev, Andrey; Sewell, Tommy

    2018-03-01

    Lattice heat-current time correlation functions for insulators and semiconductors obtained using molecular dynamics (MD) simulations exhibit features of both pure exponential decay and oscillatory-exponential decay. For some materials the oscillatory terms contribute significantly to the lattice heat conductivity calculated from the correlation functions. However, the origin of the oscillatory terms is not well understood, and their contribution to the heat conductivity is accounted for by fitting them to empirical functions. Here, a translationally invariant expression for the heat current in terms of creation and annihilation operators is derived. By using this full phonon-picture definition of the heat current and applying the relaxation-time approximation we explain, at least in part, the origin of the oscillatory terms in the lattice heat-current correlation function. We discuss the relationship between the crystal Hamiltonian and the magnitude of the oscillatory terms. A solvable one-dimensional model is used to illustrate the potential importance of terms that are omitted in the commonly used phonon-picture expression for the heat current. While the derivations are fully quantum mechanical, classical-limit expressions are provided that enable direct contact with classical quantities obtainable from MD.

  17. Benefits of regular aerobic exercise for executive functioning in healthy populations.

    PubMed

    Guiney, Hayley; Machado, Liana

    2013-02-01

    Research suggests that regular aerobic exercise has the potential to improve executive functioning, even in healthy populations. The purpose of this review is to elucidate which components of executive functioning benefit from such exercise in healthy populations. In light of the developmental time course of executive functions, we consider separately children, young adults, and older adults. Data to date from studies of aging provide strong evidence of exercise-linked benefits related to task switching, selective attention, inhibition of prepotent responses, and working memory capacity; furthermore, cross-sectional fitness data suggest that working memory updating could potentially benefit as well. In young adults, working memory updating is the main executive function shown to benefit from regular exercise, but cross-sectional data further suggest that task switching and post-error performance may also benefit. In children, working memory capacity has been shown to benefit, and cross-sectional data suggest potential benefits for selective attention and inhibitory control. Although more research investigating exercise-related benefits for specific components of executive functioning is clearly needed in young adults and children, when considered across the age groups, ample evidence indicates that regular engagement in aerobic exercise can provide a simple means for healthy people to optimize a range of executive functions.

  18. Sinc-interpolants in the energy plane for regular solution, Jost function, and its zeros of quantum scattering

    NASA Astrophysics Data System (ADS)

    Annaby, M. H.; Asharabi, R. M.

    2018-01-01

    In a remarkable note of Chadan [Il Nuovo Cimento 39, 697-703 (1965)], the author expanded both the regular wave function and the Jost function of the quantum scattering problem using an interpolation theorem of Valiron [Bull. Sci. Math. 49, 181-192 (1925)]. These expansions have a very slow rate of convergence, and applying them to compute the zeros of the Jost function, which lead to the important bound states, gives poor convergence rates. It is our objective in this paper to introduce several efficient interpolation techniques to compute the regular wave solution as well as the Jost function and its zeros approximately. This work continues and improves the results of Chadan and other related studies remarkably. Several worked examples are given with illustrations and comparisons with existing methods.

  19. Construction of normal-regular decisions of Bessel typed special system

    NASA Astrophysics Data System (ADS)

    Tasmambetov, Zhaksylyk N.; Talipova, Meiramgul Zh.

    2017-09-01

    A special system of second-order partial differential equations is studied; it is solved by means of degenerate hypergeometric functions that reduce to Bessel functions of two variables. To construct solutions of this system near its regular and irregular singularities, we use the Frobenius-Latysheva method, applying the concepts of rank and antirank. We prove the basic theorem that establishes the existence of four linearly independent solutions of the Bessel-type system under study. To prove the existence of normal-regular solutions, we establish necessary conditions for the existence of such solutions. The existence and convergence of a normal-regular solution are shown using the notions of rank and antirank.

  20. A Question of Interest

    ERIC Educational Resources Information Center

    Holley, Ann D.

    1978-01-01

    Formulas are developed which answer installment-buying questions without the use of amortization, sinking funds, or annuity tables. Applications for geometric progressions, proof by induction, solution of exponential equations, and the notion of recursive functions are displayed. (MN)
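
    As a hedged illustration of the kind of formula the abstract alludes to (our reconstruction of the standard result, not necessarily the derivation in the article), the level payment R on a principal P at periodic interest rate i over n periods follows from summing the geometric progression of discounted payments, and solving the resulting exponential equation recovers n:

    ```latex
    P = R\sum_{k=1}^{n}(1+i)^{-k} = R\,\frac{1-(1+i)^{-n}}{i}
    \quad\Longrightarrow\quad
    R = \frac{P\,i}{1-(1+i)^{-n}},
    \qquad
    n = -\frac{\ln\!\left(1 - P i / R\right)}{\ln(1+i)} .
    ```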

  1. Hessian-based norm regularization for image restoration with biomedical applications.

    PubMed

    Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael

    2012-03-01

    We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-square type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.
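
    A minimal sketch of one member of this family of penalties is given below — the pixelwise Frobenius norm of the discrete Hessian, assembled with simple second-order finite differences (an illustrative assumption on our part). It only shows the functional being penalized and its indifference to linear ramps, which is why such regularizers avoid the staircase effect; it is not the authors' majorization-minimization solver.

    ```python
    import numpy as np

    def hessian_frobenius_penalty(u):
        """Sum over interior pixels of sqrt(u_xx**2 + 2*u_xy**2 + u_yy**2),
        i.e. the Frobenius norm of a finite-difference Hessian."""
        uxx = u[2:, 1:-1] - 2.0 * u[1:-1, 1:-1] + u[:-2, 1:-1]
        uyy = u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2]
        uxy = 0.25 * (u[2:, 2:] - u[2:, :-2] - u[:-2, 2:] + u[:-2, :-2])
        return np.sum(np.sqrt(uxx**2 + 2.0 * uxy**2 + uyy**2))

    # A linear ramp has zero second derivatives, so it incurs (numerically) no penalty,
    # unlike first-order TV, which penalizes any nonzero gradient.
    ramp = np.outer(np.linspace(0.0, 1.0, 64), np.ones(64))
    print(hessian_frobenius_penalty(ramp))
    ```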

  2. Gamma-ray follow-up studies on η Carinae

    DOE PAGES

    Reitberger, K.; Reimer, O.; Reimer, A.; ...

    2012-08-01

    Observations of high-energy γ-rays recently revealed a persistent source in spatial coincidence with the binary system η Carinae. Since modulation of the observed γ-ray flux on orbital time scales has not been reported so far, an unambiguous identification was hitherto not possible. In particular, the observations made by the Fermi Large Area Telescope (LAT) posed additional questions regarding the actual emission scenario. Analyses show two energetically distinct components in the γ-ray spectrum, which are best described by an exponentially cutoff power-law function (CPL) at energies below 10 GeV and a power-law (PL) component dominant at higher energies. The increased exposure in conjunction with the improved instrumental response functions of the LAT now allows us to perform a more detailed investigation of location, spectral shape, and flux time history of the observed γ-ray emission. Furthermore, we detect a weak but regular flux decrease over time. This can be understood and interpreted in a colliding-wind binary scenario for orbital modulation of the γ-ray emission. We find that the spectral shape of the γ-ray signal agrees with a single emitting particle population in combination with significant absorption by γ-γ pair production. We are able to report on the first unambiguous detection of GeV γ-ray emission from a colliding-wind massive star binary. Studying the correlation of the flux decrease with the orbital separation of the binary components allows us to predict the behaviour up to the next periastron passage in 2014.

  3. Post-test probability for neonatal hyperbilirubinemia based on umbilical cord blood bilirubin, direct antiglobulin test, and ABO compatibility results.

    PubMed

    Peeters, Bart; Geerts, Inge; Van Mullem, Mia; Micalessi, Isabel; Saegeman, Veroniek; Moerman, Jan

    2016-05-01

    Many hospitals opt for early postnatal discharge of newborns with a potential risk of readmission for neonatal hyperbilirubinemia. Assays/algorithms with the potential to improve prediction of significant neonatal hyperbilirubinemia are needed to optimize screening protocols and safe discharge of neonates. This study investigated the predictive value of umbilical cord blood (UCB) testing for significant hyperbilirubinemia. Neonatal UCB bilirubin, UCB direct antiglobulin test (DAT), and blood group were determined, as well as the maternal blood group and the red blood cell antibody status. Moreover, in newborns with clinically apparent jaundice after visual assessment, plasma total bilirubin (TB) was measured. Clinical factors positively associated with UCB bilirubin were ABO incompatibility, positive DAT, presence of maternal red cell antibodies, alarming visual assessment and significant hyperbilirubinemia in the first 6 days of life. UCB bilirubin performed clinically well with an area under the receiver-operating characteristic curve (AUC) of 0.82 (95 % CI 0.80-0.84). The combined UCB bilirubin, DAT, and blood group analysis outperformed these parameters considered separately in detecting significant hyperbilirubinemia and correlated exponentially with hyperbilirubinemia post-test probability. Post-test probabilities for neonatal hyperbilirubinemia can be calculated using exponential functions defined by UCB bilirubin, DAT, and ABO compatibility results. What is Known: • The diagnostic value of the triad umbilical cord blood bilirubin measurement, direct antiglobulin testing and blood group analysis for neonatal hyperbilirubinemia remains unclear in the literature. • Currently no guideline recommends screening for hyperbilirubinemia using umbilical cord blood. What is New: • Post-test probability for hyperbilirubinemia correlated exponentially with umbilical cord blood bilirubin in different risk groups defined by direct antiglobulin test and ABO blood group compatibility results. • Exponential functions can be used to calculate hyperbilirubinemia probability.

  4. Gene expression profiles of Vibrio parahaemolyticus in the early stationary phase.

    PubMed

    Meng, L; Alter, T; Aho, T; Huehn, S

    2015-09-01

    Vibrio (V.) parahaemolyticus is an aquatic bacterium capable of causing foodborne gastroenteritis. In the environment or the food chain, V. parahaemolyticus cells are usually forced into the stationary phase, the common phase for bacterial survival in the environment. So far, little is known about whole-genome expression of V. parahaemolyticus in the early stationary phase compared with the exponential growth phase. We performed whole transcriptomic profiling of V. parahaemolyticus cells in both phases (exponential and early stationary phase). Our data showed that, in total, 172 genes were induced in the early stationary phase, while 61 genes were repressed in the early stationary phase compared with the exponential phase. Three functional categories showed stable gene expression in the early stationary phase. Eleven functional categories showed that up-regulation of genes was dominant over down-regulation in the early stationary phase. Although genes related to endogenous metabolism were repressed in the early stationary phase, massive regulation of gene expression occurred in the early stationary phase, indicating that the expressed gene set of V. parahaemolyticus in the early stationary phase impacts environmental survival. Vibrio (V.) parahaemolyticus is one of the main bacterial causes of foodborne intestinal infections. This bacterium is usually forced into the stationary phase in the environment, which includes, e.g., seafood. When bacteria are in the stationary phase, physiological changes can lead to resistance to many stresses, including physical and chemical challenges during food processing. To the best of our knowledge, this is the first study to highlight whole-genome expression changes in the early stationary phase compared with the exponential phase and to investigate physiological changes of V. parahaemolyticus, such as its survival mechanisms in the stationary phase. © 2015 The Society for Applied Microbiology.

  5. An Alternative Lattice Field Theory Formulation Inspired by Lattice Supersymmetry-Summary of the Formulation-

    NASA Astrophysics Data System (ADS)

    D'Adda, Alessandro; Kawamoto, Noboru; Saito, Jun

    2018-03-01

    We propose a lattice field theory formulation which overcomes some fundamental difficulties in realizing exact supersymmetry on the lattice. The Leibniz rule for the difference operator can be recovered by defining a new product on the lattice, the star product, and the chiral fermion species-doubler degrees of freedom can be avoided consistently. This framework is general enough to formulate non-supersymmetric lattice field theory without the chiral fermion problem. This lattice formulation has a nonlocal nature and is essentially equivalent to the corresponding continuum theory. We can show that the locality of the star product is recovered exponentially in the continuum limit. Possible regularization procedures are proposed. The associativity of the product and the lattice translational invariance of the formulation will be discussed.

  6. Applications in Data-Intensive Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Anuj R.; Adkins, Joshua N.; Baxter, Douglas J.

    2010-04-01

    This book chapter, to be published in Advances in Computers, Volume 78, in 2010, describes applications of data intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.

  7. Step to improve neural cryptography against flipping attacks.

    PubMed

    Zhou, Jiantao; Xu, Qinzhen; Pei, Wenjiang; He, Zhenya; Szu, Harold

    2004-12-01

    Synchronization of neural networks by mutual learning has been demonstrated to be possible for constructing key exchange protocol over public channel. However, the neural cryptography schemes presented so far are not the securest under regular flipping attack (RFA) and are completely insecure under majority flipping attack (MFA). We propose a scheme by splitting the mutual information and the training process to improve the security of neural cryptosystem against flipping attacks. Both analytical and simulation results show that the success probability of RFA on the proposed scheme can be decreased to the level of brute force attack (BFA) and the success probability of MFA still decays exponentially with the weights' level L. The synchronization time of the parties also remains polynomial with L. Moreover, we analyze the security under an advanced flipping attack.

  8. Update on antibacterial soaps: the FDA takes a second look at triclosans.

    PubMed

    Bergstrom, Kendra Gail

    2014-04-01

    In December of 2013 the Food and Drug Administration announced it would look further into the safety and efficacy of the biocide triclosan and requested further safety data as part of a new review with the Environmental Protection Agency. The use of triclosan has increased exponentially since its introduction in 1972, to the point that 75% of commercial soap brands contain triclosan and 76% of a nationwide sample of adults and children excrete triclosan in the urine. This announcement raised an important dialog about the appropriate use of all over-the-counter biocides. Particular concerns include whether these biocides are more effective than regular soaps, whether they may create new drug-resistant bacteria, and whether they may also act as hormone disruptors in humans or the environment.

  9. Black holes in an expanding universe.

    PubMed

    Gibbons, Gary W; Maeda, Kei-ichi

    2010-04-02

    An exact solution representing black holes in an expanding universe is found. The black holes are maximally charged and the universe is expanding with an arbitrary equation of state (P = wρ with -1 ≤ w ≤ 1). It is an exact solution of the Einstein-scalar-Maxwell system, in which we have two Maxwell-type U(1) fields coupled to the scalar field. The potential of the scalar field is an exponential. We find a regular horizon, which depends on one parameter [the ratio of the energy density of the U(1) fields to that of the scalar field]. The horizon is static because of the balance on the horizon between the gravitational attractive force and the U(1) repulsive force acting on the scalar field. We also calculate the black hole temperature.

  10. Mixed boundary-value problem for an orthotropic rectangular strip with variable coefficients of elasticity

    NASA Astrophysics Data System (ADS)

    Sargsyan, M. Z.; Poghosyan, H. M.

    2018-04-01

    A dynamical problem for a rectangular strip with variable coefficients of elasticity is solved by an asymptotic method. It is assumed that the strip is orthotropic, the elasticity coefficients are exponential functions of y, and mixed boundary conditions are posed. The solution of the inner problem is obtained using Bessel functions.

  11. Formative versus Reflective Measurement in Executive Functions: A Critique of Willoughby et al.

    ERIC Educational Resources Information Center

    Peterson, Eric; Welsh, Marilyn C.

    2014-01-01

    Research into executive functioning (EF) has indeed grown exponentially across the past few decades, but as the Willoughby et al. critique makes clear, there remain fundamental questions to be resolved. The crux of their argument is built upon an examination of the confirmatory factor analysis (CFA) approach to understanding executive processes.…

  12. Some properties of the Catalan-Qi function related to the Catalan numbers.

    PubMed

    Qi, Feng; Mahmoud, Mansour; Shi, Xiao-Ting; Liu, Fang-Fang

    2016-01-01

    In the paper, the authors find some properties of the Catalan numbers, the Catalan function, and the Catalan-Qi function which is a generalization of the Catalan numbers. Concretely speaking, the authors present a new expression, asymptotic expansions, integral representations, logarithmic convexity, complete monotonicity, minimality, logarithmically complete monotonicity, a generating function, and inequalities of the Catalan numbers, the Catalan function, and the Catalan-Qi function. As by-products, an exponential expansion and a double inequality for the ratio of two gamma functions are derived.

  13. Functional dissociation between regularity encoding and deviance detection along the auditory hierarchy.

    PubMed

    Aghamolaei, Maryam; Zarnowiec, Katarzyna; Grimm, Sabine; Escera, Carles

    2016-02-01

    Auditory deviance detection based on regularity encoding appears as one of the basic functional properties of the auditory system. It has traditionally been assessed with the mismatch negativity (MMN) long-latency component of the auditory evoked potential (AEP). Recent studies have found earlier correlates of deviance detection based on regularity encoding. They occur in humans in the first 50 ms after sound onset, at the level of the middle-latency response of the AEP, and parallel findings of stimulus-specific adaptation observed in animal studies. However, the functional relationship between these different levels of regularity encoding and deviance detection along the auditory hierarchy has not yet been clarified. Here we addressed this issue by examining deviant-related responses at different levels of the auditory hierarchy to stimulus changes varying in their degree of deviation regarding the spatial location of a repeated standard stimulus. Auditory stimuli were presented randomly from five loudspeakers at azimuthal angles of 0°, 12°, 24°, 36° and 48° during oddball and reversed-oddball conditions. Middle-latency responses and MMN were measured. Our results revealed that middle-latency responses were sensitive to deviance but not the degree of deviation, whereas the MMN amplitude increased as a function of deviance magnitude. These findings indicated that acoustic regularity can be encoded at the level of the middle-latency response but that it takes a higher step in the auditory hierarchy for deviance magnitude to be encoded, thus providing a functional dissociation between regularity encoding and deviance detection along the auditory hierarchy. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
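
    A hedged numerical illustration (a one-sided one-sample z-test with known σ, not the authors' general linear model setting): the Type II error β(n) = Φ(z_{1-α} − δ√n/σ) inherits the Gaussian tail and therefore drops off roughly exponentially in the sample size n.

    ```python
    import numpy as np
    from scipy.stats import norm

    alpha, delta, sigma = 0.05, 0.5, 1.0       # test size, assumed true effect, known SD
    z_crit = norm.ppf(1.0 - alpha)

    for n in (5, 10, 20, 40, 80):
        beta = norm.cdf(z_crit - delta * np.sqrt(n) / sigma)  # P(fail to reject | effect)
        print(f"n = {n:3d}   Type II error = {beta:.2e}")
    ```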

  15. The Exponential Function, XI: The New Flat Earth Society.

    ERIC Educational Resources Information Center

    Bartlett, Albert A.

    1996-01-01

    Discusses issues related to perpetual population growth. Argues that if we believe that there are no limits to growth, we will have to abandon the concept of a spherical Earth which puts limits to growth. (JRH)

  16. Study on the application of NASA energy management techniques for control of a terrestrial solar water heating system

    NASA Technical Reports Server (NTRS)

    Swanson, T. D.; Ollendorf, S.

    1979-01-01

    This paper addresses the potential for enhanced solar system performance through sophisticated control of the collector loop flow rate. Computer simulations utilizing the TRNSYS solar energy program were performed to study the relative effect on system performance of eight specific control algorithms. Six of these control algorithms are of the proportional type: two are concave exponentials, two are simple linear functions, and two are convex exponentials. These six functions are typical of what might be expected from future, more advanced, controllers. The other two algorithms are of the on/off type and are thus typical of existing control devices. Results of extensive computer simulations utilizing actual weather data indicate that proportional control does not significantly improve system performance. However, it is shown that thermal stratification in the liquid storage tank may significantly improve performance.

  17. Mean-Variance portfolio optimization by using non constant mean and volatility based on the negative exponential utility function

    NASA Astrophysics Data System (ADS)

    Soeryana, Endang; Halim, Nurfadhlina Bt Abdul; Sukono, Rusyaman, Endang; Supian, Sudradjat

    2017-03-01

    Investors in stocks also face risk, because daily stock prices fluctuate. To minimize this risk, investors usually form an investment portfolio; a portfolio composed of several stocks is intended to achieve an optimal composition of the investment. This paper discusses Mean-Variance optimization of a stock portfolio using a non-constant mean and volatility, based on the negative exponential utility function. The non-constant mean is modelled with an Autoregressive Moving Average (ARMA) model, while the non-constant volatility is modelled with a Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyze some stocks in Indonesia. The expected result is to obtain the proportion of investment in each stock analyzed.
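
    A hedged sketch of the closed form that the Lagrangian technique yields in the simplest static case: with normally distributed returns, maximizing expected negative exponential utility is equivalent to maximizing w'μ − (a/2) w'Σw subject to 1'w = 1. The ARMA mean forecasts and GARCH covariance are represented here only by placeholder numbers.

    ```python
    import numpy as np

    def neg_exp_utility_weights(mu, sigma, a):
        """Maximize mu'w - (a/2) w'Sigma w  subject to sum(w) = 1
        (first-order conditions of the Lagrangian, solved in closed form)."""
        ones = np.ones_like(mu)
        sigma_inv = np.linalg.inv(sigma)
        lam = (ones @ sigma_inv @ mu - a) / (ones @ sigma_inv @ ones)
        return sigma_inv @ (mu - lam * ones) / a

    # Placeholder one-step-ahead ARMA means and GARCH covariance for three stocks.
    mu = np.array([0.010, 0.012, 0.008])
    sigma = np.array([[0.040, 0.006, 0.004],
                      [0.006, 0.050, 0.005],
                      [0.004, 0.005, 0.030]])
    w = neg_exp_utility_weights(mu, sigma, a=2.0)
    print(w, w.sum())     # portfolio proportions, summing to one
    ```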

  18. Simulation and prediction of the thuringiensin abiotic degradation processes in aqueous solution by a radial basis function neural network model.

    PubMed

    Zhou, Jingwen; Xu, Zhenghong; Chen, Shouwen

    2013-04-01

    The thuringiensin abiotic degradation processes in aqueous solution under different conditions, with a pH range of 5.0-9.0 and a temperature range of 10-40°C, were systematically investigated by an exponential decay model and a radial basis function (RBF) neural network model, respectively. The half-lives of thuringiensin calculated by the exponential decay model ranged from 2.72 d to 16.19 d under the different conditions mentioned above. Furthermore, an RBF model with an accuracy of 0.1 and a SPREAD value of 5 was employed to model the degradation processes. The results showed that the model could simulate and predict the degradation processes well. Both the half-lives and the prediction data showed that thuringiensin was an easily degradable antibiotic, which could be an important factor in the evaluation of its safety. Copyright © 2012 Elsevier Ltd. All rights reserved.
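
    A hedged sketch of the first-order (exponential) decay fit that underlies the reported half-lives, with made-up concentration data; the RBF network stage of the paper is not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def first_order_decay(t, c0, k):
        """C(t) = C0 * exp(-k t); the half-life is ln(2)/k."""
        return c0 * np.exp(-k * t)

    t_days = np.array([0.0, 1.0, 2.0, 4.0, 7.0, 10.0])
    conc = np.array([10.0, 8.1, 6.6, 4.4, 2.4, 1.3])       # placeholder measurements

    (c0_hat, k_hat), _ = curve_fit(first_order_decay, t_days, conc, p0=(10.0, 0.1))
    print(f"k = {k_hat:.3f} 1/d, half-life = {np.log(2) / k_hat:.2f} d")
    ```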

  19. Charge Transport Properties in Disordered Organic Semiconductor as a Function of Charge Density: Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Shukri, Seyfan Kelil

    2017-01-01

    We have performed Kinetic Monte Carlo (KMC) simulations to investigate the effect of charge carrier density on the electrical conductivity and carrier mobility in disordered organic semiconductors using a lattice model. The density of states (DOS) of the system is taken to be either Gaussian or exponential. Our simulations reveal that the mobility of the charge carriers increases with charge carrier density for both DOSs. In contrast, the mobility of charge carriers decreases as the disorder increases. In addition, the shape of the DOS has a significant effect on the charge transport properties as a function of density, as is clearly seen. On the other hand, for the same distribution width and at low carrier density, the change in the conductivity and mobility for a Gaussian DOS is more pronounced than that for the exponential DOS.

  20. Existence and global exponential stability of periodic solution of memristor-based BAM neural networks with time-varying delays.

    PubMed

    Li, Hongfei; Jiang, Haijun; Hu, Cheng

    2016-03-01

    In this paper, we investigate a class of memristor-based BAM neural networks with time-varying delays. Under the framework of Filippov solutions, boundedness and ultimate boundedness of solutions of memristor-based BAM neural networks are guaranteed by the chain rule and inequality techniques. Moreover, a new method involving a Yoshizawa-like theorem is favorably employed to establish the existence of a periodic solution. By applying the theory of set-valued maps and functional differential inclusions, an available Lyapunov functional and some new testable algebraic criteria are derived for ensuring the uniqueness and global exponential stability of the periodic solution of memristor-based BAM neural networks. The obtained results expand and complement some previous work on memristor-based BAM neural networks. Finally, a numerical example is provided to show the applicability and effectiveness of our theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Performance analysis for mixed FSO/RF Nakagami-m and Exponentiated Weibull dual-hop airborne systems

    NASA Astrophysics Data System (ADS)

    Jing, Zhao; Shang-hong, Zhao; Wei-hu, Zhao; Ke-fan, Chen

    2017-06-01

    In this paper, the performance of mixed free-space optical (FSO)/radio frequency (RF) systems is presented based on decode-and-forward relaying. The Exponentiated Weibull fading channel with pointing error effects is adopted for the atmospheric fluctuation of the FSO channel, and the RF link undergoes Nakagami-m fading. We derive the analytical expression for the cumulative distribution function (CDF) of the equivalent signal-to-noise ratio (SNR). Novel mathematical expressions for the outage probability and average bit-error rate (BER) are developed based on the Meijer G-function. The analytical results accurately match the Monte-Carlo simulation results. The outage and BER performance of the mixed decode-and-forward relay system is investigated under atmospheric turbulence and pointing error conditions. The effect of aperture averaging is evaluated in all atmospheric turbulence conditions as well.

  2. Path statistics, memory, and coarse-graining of continuous-time random walks on networks

    PubMed Central

    Kion-Crosby, Willow; Morozov, Alexandre V.

    2015-01-01

    Continuous-time random walks (CTRWs) on discrete state spaces, ranging from regular lattices to complex networks, are ubiquitous across physics, chemistry, and biology. Models with coarse-grained states (for example, those employed in studies of molecular kinetics) or spatial disorder can give rise to memory and non-exponential distributions of waiting times and first-passage statistics. However, existing methods for analyzing CTRWs on complex energy landscapes do not address these effects. Here we use statistical mechanics of the nonequilibrium path ensemble to characterize first-passage CTRWs on networks with arbitrary connectivity, energy landscape, and waiting time distributions. Our approach can be applied to calculating higher moments (beyond the mean) of path length, time, and action, as well as statistics of any conservative or non-conservative force along a path. For homogeneous networks, we derive exact relations between length and time moments, quantifying the validity of approximating a continuous-time process with its discrete-time projection. For more general models, we obtain recursion relations, reminiscent of transfer matrix and exact enumeration techniques, to efficiently calculate path statistics numerically. We have implemented our algorithm in PathMAN (Path Matrix Algorithm for Networks), a Python script that users can apply to their model of choice. We demonstrate the algorithm on a few representative examples which underscore the importance of non-exponential distributions, memory, and coarse-graining in CTRWs. PMID:26646868
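
    A hedged toy sketch of a first-passage CTRW on a small network (not the PathMAN code itself): uniform jumps over neighbours combined with heavy-tailed (Pareto) waiting times illustrate how non-exponential waiting-time distributions shape first-passage statistics.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Adjacency list of a small ring with one shortcut; node 4 is the absorbing target.
    neighbors = {0: [1, 5], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4], 4: [], 5: [0, 2]}

    def first_passage(start=0, target=4, tail_exponent=1.5):
        """One CTRW realization: uniform jump probabilities, Pareto waiting times."""
        node, elapsed, steps = start, 0.0, 0
        while node != target:
            elapsed += rng.pareto(tail_exponent) + 1.0     # waiting time >= 1
            node = int(rng.choice(neighbors[node]))
            steps += 1
        return elapsed, steps

    samples = np.array([first_passage() for _ in range(10000)])
    print("mean first-passage time  :", samples[:, 0].mean())
    print("mean first-passage length:", samples[:, 1].mean())
    ```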

  3. Understanding taxi travel patterns

    NASA Astrophysics Data System (ADS)

    Cai, Hua; Zhan, Xiaowei; Zhu, Ji; Jia, Xiaoping; Chiu, Anthony S. F.; Xu, Ming

    2016-09-01

    Taxis play important roles in modern urban transportation systems, especially in mega cities. While providing necessary amenities, taxis also significantly contribute to traffic congestion, urban energy consumption, and air pollution. Understanding the travel patterns of taxis is thus important for addressing many urban sustainability challenges. Previous research has primarily focused on examining the statistical properties of passenger trips, which include only taxi trips occupied with passengers. However, unoccupied trips are also important for urban sustainability issues because they represent potential opportunities to improve the efficiency of the transportation system. Therefore, we need to understand the travel patterns of taxis as an integrated system, instead of focusing only on the occupied trips. In this study we examine GPS trajectory data of 11,880 taxis in Beijing, China for a period of three weeks. Our results show that taxi travel patterns share similar traits with travel patterns of individuals but also exhibit differences. Trip displacement distribution of taxi travels is statistically greater than the exponential distribution and smaller than the truncated power-law distribution. The distribution of short trips (less than 30 miles) can be best fitted with power-law while long trips follow exponential decay. We use radius of gyration to characterize individual taxi's travel distance and find that it does not follow a truncated power-law as observed in previous studies. Spatial and temporal regularities exist in taxi travels. However, with increasing spatial coverage, taxi trips can exhibit dual high probability density centers.

  4. Hyperfine-induced spin relaxation of a diffusively moving carrier in low dimensions: Implications for spin transport in organic semiconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhitaryan, V. V.; Dobrovitski, V. V.

    2015-08-24

    The hyperfine coupling between the spin of a charge carrier and the nuclear spin bath is a predominant channel for the carrier spin relaxation in many organic semiconductors. We theoretically investigate the hyperfine-induced spin relaxation of a carrier performing a random walk on a d-dimensional regular lattice, in a transport regime typical for organic semiconductors. We show that in d=1 and 2, the time dependence of the space-integrated spin polarization P(t) is dominated by a superexponential decay, crossing over to a stretched-exponential tail at long times. The faster decay is attributed to multiple self-intersections (returns) of the random-walk trajectories, which occur more often in lower dimensions. We also show, analytically and numerically, that the returns lead to sensitivity of P(t) to external electric and magnetic fields, and this sensitivity strongly depends on dimensionality of the system (d=1 versus d=3). We investigate in detail the coordinate dependence of the time-integrated spin polarization σ(r), which can be probed in the spin-transport experiments with spin-polarized electrodes. We also demonstrate that, while σ(r) is essentially exponential, the effect of multiple self-intersections can be identified in transport measurements from the strong dependence of the spin-decay length on the external magnetic and electric fields.

  5. A kinetic approach to some quasi-linear laws of macroeconomics

    NASA Astrophysics Data System (ADS)

    Gligor, M.; Ignat, M.

    2002-11-01

    Some previous works have presented data on wealth and income distributions in developed countries and have found that the great majority of the population is described by an exponential distribution, which suggests that a kinetic approach could be adequate to describe this empirical evidence. The aim of our paper is to extend this framework by developing a systematic kinetic approach to socio-economic systems and to explain how linear laws, modelling correlations between macroeconomic variables, may arise in this context. First we construct the Boltzmann kinetic equation for an idealised system composed of many individuals (workers, officers, businessmen, etc.), each of them receiving a certain income and spending money on their needs. To each individual a certain time-varying amount of money is associated, which serves as his/her phase-space coordinate. In this way the exponential distribution of money in a closed economy is explicitly found. The extension of this result to states near equilibrium gives us the possibility of taking into account the regular increase of the total amount of money, in accordance with modern economic theories. The Kubo-Green-Onsager linear response theory then leads us to a set of linear equations between some macroeconomic variables. Finally, the validity of such laws is discussed in relation to time-reversal symmetry and is tested empirically using some macroeconomic time series.

  6. Macromolecular Rate Theory (MMRT) Provides a Thermodynamics Rationale to Underpin the Convergent Temperature Response in Plant Leaf Respiration

    NASA Astrophysics Data System (ADS)

    Liang, L. L.; Arcus, V. L.; Heskel, M.; O'Sullivan, O. S.; Weerasinghe, L. K.; Creek, D.; Egerton, J. J. G.; Tjoelker, M. G.; Atkin, O. K.; Schipper, L. A.

    2017-12-01

    Temperature is a crucial factor in determining the rates of ecosystem processes such as leaf respiration (R) - the flux of plant-respired carbon dioxide (CO2) from leaves to the atmosphere. Generally, respiration rate increases exponentially with temperature as modelled by the Arrhenius equation, but a recent study (Heskel et al., 2016) showed a universally convergent temperature response of R using an empirical exponential/polynomial model whereby the exponent in the Arrhenius model is replaced by a quadratic function of temperature. The exponential/polynomial model has been used elsewhere to describe shoot respiration and plant respiration. What are the principles that underlie these empirical observations? Here, we demonstrate that macromolecular rate theory (MMRT), based on transition state theory for chemical kinetics, is equivalent to the exponential/polynomial model. We re-analyse the data from Heskel et al. 2016 using MMRT to show this equivalence and thus provide an explanation, based on thermodynamics, for the convergent temperature response of R. Using statistical tools, we also show the equivalent explanatory power of MMRT when compared to the exponential/polynomial model and the superiority of both of these models over the Arrhenius function. Three meaningful parameters emerge from MMRT analysis: the temperature at which the rate of respiration is maximum (the so-called optimum temperature, Topt), the temperature at which the respiration rate is most sensitive to changes in temperature (the inflection temperature, Tinf) and the overall curvature of the log(rate) versus temperature plot (the so-called change in heat capacity for the system, ΔCP‡). The latter term originates from the change in heat capacity between an enzyme-substrate complex and an enzyme transition state complex in enzyme-catalysed metabolic reactions. From MMRT, we find the average Topt and Tinf of R are 67.0±1.2 °C and 41.4±0.7 °C across global sites. The average curvature (the average ΔCP‡, which is negative) is -1.2±0.1 kJ mol-1 K-1. MMRT extends the classic transition state theory to enzyme-catalysed reactions and scales up to more complex processes including micro-organism growth rates and ecosystem processes.
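
    For reference, a commonly quoted form of the MMRT rate expression is reproduced below (an assumption on our part that this is the exact parameterization used in the study); the activation heat capacity ΔCP‡ is the term that generates the curvature of ln(rate) versus temperature described above:

    ```latex
    \ln k(T) \;=\; \ln\frac{k_{\mathrm B}T}{h}
    \;-\;\frac{\Delta H^{\ddagger}_{T_0} + \Delta C_P^{\ddagger}\,(T - T_0)}{R\,T}
    \;+\;\frac{\Delta S^{\ddagger}_{T_0} + \Delta C_P^{\ddagger}\,\ln(T/T_0)}{R}
    ```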

  7. Modulation of lens cell adhesion molecules by particle beams

    NASA Technical Reports Server (NTRS)

    McNamara, M. P.; Bjornstad, K. A.; Chang, P. Y.; Chou, W.; Lockett, S. J.; Blakely, E. A.

    2001-01-01

    Cell adhesion molecules (CAMs) are proteins which anchor cells to each other and to the extracellular matrix (ECM), but whose functions also include signal transduction, differentiation, and apoptosis. We are testing a hypothesis that particle radiations modulate CAM expression and this contributes to radiation-induced lens opacification. We observed dose-dependent changes in the expression of beta 1-integrin and ICAM-1 in exponentially-growing and confluent cells of a differentiating human lens epithelial cell model after exposure to particle beams. Human lens epithelial (HLE) cells, less than 10 passages after their initial culture from fetal tissue, were grown on bovine corneal endothelial cell-derived ECM in medium containing 15% fetal bovine serum and supplemented with 5 ng/ml basic fibroblast growth factor (FGF-2). Multiple cell populations at three different stages of differentiation were prepared for experiment: cells in exponential growth, and cells at 5 and 10 days post-confluence. The differentiation status of cells was characterized morphologically by digital image analysis, and biochemically by Western blotting using lens epithelial and fiber cell-specific markers. Cultures were irradiated with single doses (4, 8 or 12 Gy) of 55 MeV protons and, along with unirradiated control samples, were fixed using -20 degrees C methanol at 6 hours after exposure. Replicate experiments and similar experiments with helium ions are in progress. The intracellular localization of beta 1-integrin and ICAM-1 was detected by immunofluorescence using monoclonal antibodies specific for each CAM. Cells known to express each CAM were also processed as positive controls. Both exponentially-growing and confluent, differentiating cells demonstrated a dramatic proton-dose-dependent modulation (upregulation for exponential cells, downregulation for confluent cells) and a change in the intracellular distribution of the beta 1-integrin, compared to unirradiated controls. In contrast, there was a dose-dependent increase in ICAM-1 immunofluorescence in confluent, but not exponentially-growing cells. These results suggest that proton irradiation downregulates beta 1-integrin and upregulates ICAM-1, potentially contributing to cell death or to aberrant differentiation via modulation of anchorage and/or signal transduction functions. Quantification of the expression levels of the CAMs by Western analysis is in progress.

  8. Development of a local size hierarchy causes regular spacing of trees in an even-aged Abies forest: analyses using spatial autocorrelation and the mark correlation function.

    PubMed

    Suzuki, Satoshi N; Kachi, Naoki; Suzuki, Jun-Ichirou

    2008-09-01

    During the development of an even-aged plant population, the spatial distribution of individuals often changes from a clumped pattern to a random or regular one. The development of local size hierarchies in an Abies forest was analysed for a period of 47 years following a large disturbance in 1959. In 1980 all trees in an 8 x 8 m plot were mapped and their height growth after the disturbance was estimated. Their mortality and growth were then recorded at 1- to 4-year intervals between 1980 and 2006. Spatial distribution patterns of trees were analysed by the pair correlation function. Spatial correlations between tree heights were analysed with a spatial autocorrelation function and the mark correlation function. The mark correlation function was able to detect a local size hierarchy that could not be detected by the spatial autocorrelation function alone. The small-scale spatial distribution pattern of trees changed from clumped to slightly regular during the 47 years. Mortality occurred in a density-dependent manner, which resulted in regular spacing between trees after 1980. The spatial autocorrelation and mark correlation functions revealed the existence of tree patches consisting of large trees at the initial stage. Development of a local size hierarchy was detected within the first decade after the disturbance, although the spatial autocorrelation was not negative. Local size hierarchies that developed persisted until 2006, and the spatial autocorrelation became negative at later stages (after about 40 years). This is the first study to detect local size hierarchies as a prelude to regular spacing using the mark correlation function. The results confirm that use of the mark correlation function together with the spatial autocorrelation function is an effective tool to analyse the development of a local size hierarchy of trees in a forest.

  9. Increasing Accuracy of Tissue Shear Modulus Reconstruction Using Ultrasonic Strain Tensor Measurement

    NASA Astrophysics Data System (ADS)

    Sumi, C.

    Previously, we developed three displacement vector measurement methods, i.e., the multidimensional cross-spectrum phase gradient method (MCSPGM), the multidimensional autocorrelation method (MAM), and the multidimensional Doppler method (MDM). To increase the accuracies and stabilities of lateral and elevational displacement measurements, we also developed spatially variant, displacement component-dependent regularization. In particular, the regularization of only the lateral/elevational displacements is advantageous for the lateral unmodulated case. The demonstrated measurements of the displacement vector distributions in experiments using an inhomogeneous shear modulus agar phantom confirm that displacement-component-dependent regularization enables more stable shear modulus reconstruction. In this report, we also review our developed lateral modulation methods that use Parabolic functions, Hanning windows, and Gaussian functions in the apodization function and the optimized apodization function that realizes the designed point spread function (PSF). The modulations significantly increase the accuracy of the strain tensor measurement and shear modulus reconstruction (demonstrated using an agar phantom).

  10. Recognizing Physisorption and Chemisorption in Carbon Nanotubes Gas Sensors by Double Exponential Fitting of the Response.

    PubMed

    Calvi, Andrea; Ferrari, Alberto; Sbuelz, Luca; Goldoni, Andrea; Modesti, Silvio

    2016-05-19

    Multi-walled carbon nanotubes (CNTs) have been grown in situ on a SiO2 substrate and used as gas sensors. For this purpose, the voltage response of the CNTs as a function of time has been used to detect H2 and CO2 at various concentrations by supplying a constant current to the system. The analysis of both adsorption and desorption curves has revealed two different exponential behaviours for each curve. The study of the characteristic times, obtained from fitting the data, has allowed us to identify separately chemisorption and physisorption processes on the CNTs.
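
    A hedged sketch of a double-exponential fit of a sensor response (illustrative synthetic data, not the authors' measurements): the two fitted time constants are what allow a fast and a slow adsorption process — interpreted in the paper as physisorption and chemisorption — to be separated.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def double_exponential(t, a1, tau1, a2, tau2, v_inf):
        """V(t) = v_inf + a1*exp(-t/tau1) + a2*exp(-t/tau2)."""
        return v_inf + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

    t = np.linspace(0.0, 300.0, 301)                           # s
    v = double_exponential(t, 0.8, 12.0, 0.4, 150.0, 1.0)      # synthetic response
    v += np.random.default_rng(1).normal(0.0, 0.005, t.size)   # measurement noise

    popt, _ = curve_fit(double_exponential, t, v, p0=(0.5, 10.0, 0.5, 100.0, 1.0))
    print("tau_fast = %.1f s, tau_slow = %.1f s" % (popt[1], popt[3]))
    ```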

  11. Exponential stabilization of magnetoelastic waves in a Mindlin-Timoshenko plate by localized internal damping

    NASA Astrophysics Data System (ADS)

    Grobbelaar-Van Dalsen, Marié

    2015-08-01

    This article is a continuation of our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) on the polynomial stabilization of a linear model for the magnetoelastic interactions in a two-dimensional electrically conducting Mindlin-Timoshenko plate. We introduce nonlinear damping that is effective only in a small portion of the interior of the plate. It turns out that the model is uniformly exponentially stable when the function that represents the locally distributed damping behaves linearly near the origin. However, the use of Mindlin-Timoshenko plate theory in the model enforces a restriction on the region occupied by the plate.

  12. Using neural networks to represent potential surfaces as sums of products.

    PubMed

    Manzhos, Sergei; Carrington, Tucker

    2006-11-21

    By using exponential activation functions with a neural network (NN) method we show that it is possible to fit potentials to a sum-of-products form. The sum-of-products form is desirable because it reduces the cost of doing the quadratures required for quantum dynamics calculations. It also greatly facilitates the use of the multiconfiguration time-dependent Hartree method. Unlike the potfit product representation algorithm, the new NN approach does not require using a grid of points. It also produces sum-of-products potentials with fewer terms. As the number of dimensions is increased, we expect the advantages of the exponential NN idea to become more significant.
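
    The mechanism can be stated compactly (our paraphrase, not a formula quoted from the paper): with exponential activations each hidden neuron factorizes into one-dimensional terms, so a one-hidden-layer network output is automatically in sum-of-products form:

    ```latex
    V(x_1,\dots,x_D) \;=\; \sum_{n=1}^{N} c_n\,
    \exp\!\Bigl(b_n + \sum_{i=1}^{D} w_{ni}\,x_i\Bigr)
    \;=\; \sum_{n=1}^{N} c_n\,e^{b_n} \prod_{i=1}^{D} e^{\,w_{ni} x_i}.
    ```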

  13. A new approach to the extraction of single exponential diode model parameters

    NASA Astrophysics Data System (ADS)

    Ortiz-Conde, Adelmo; García-Sánchez, Francisco J.

    2018-06-01

    A new integration method is presented for the extraction of the parameters of a single exponential diode model with series resistance from the measured forward I-V characteristics. The extraction is performed using auxiliary functions based on the integration of the data, which allow one to isolate the effects of each of the model parameters. A differentiation method is also presented for data with a low level of experimental noise. Measured and simulated data are used to verify the applicability of both proposed methods. Physical insight into the validity of the model is also obtained by using the proposed graphical determinations of the parameters.

  14. Quantum discord length is enhanced while entanglement length is not by introducing disorder in a spin chain.

    PubMed

    Sadhukhan, Debasis; Roy, Sudipto Singha; Rakshit, Debraj; Prabhu, R; Sen De, Aditi; Sen, Ujjwal

    2016-01-01

    Classical correlation functions of ground states typically decay exponentially and polynomially, respectively, for gapped and gapless short-range quantum spin systems. In such systems, entanglement decays exponentially even at the quantum critical points. However, quantum discord, an information-theoretic quantum correlation measure, survives long lattice distances. We investigate the effects of quenched disorder on quantum correlation lengths of quenched averaged entanglement and quantum discord, in the anisotropic XY and XYZ spin glass and random field chains. We find that there is virtually neither reduction nor enhancement in entanglement length while quantum discord length increases significantly with the introduction of the quenched disorder.

  15. Exponential Stability of Almost Periodic Solutions for Memristor-Based Neural Networks with Distributed Leakage Delays.

    PubMed

    Xu, Changjin; Li, Peiluan; Pang, Yicheng

    2016-12-01

    In this letter, we deal with a class of memristor-based neural networks with distributed leakage delays. By applying a new Lyapunov function method, we obtain some sufficient conditions that ensure the existence, uniqueness, and global exponential stability of almost periodic solutions of the neural networks. We apply these results to prove the existence and stability of periodic solutions for this delayed neural network with periodic coefficients. We then provide an example to illustrate the effectiveness of the theoretical results. Our results are completely new and complement the previous studies of Chen, Zeng, and Jiang (2014) and Jiang, Zeng, and Chen (2015).

  16. Crime prediction modeling

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.
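
    As a hedged illustration of the extrapolative approach (a generic simple exponential-smoothing recursion with placeholder counts, not the multiple-regression or multiple-exponential-smoothing formulations evaluated in the study):

    ```python
    def simple_exponential_smoothing(series, alpha=0.3):
        """One-step-ahead forecasts from s_t = alpha*y_t + (1 - alpha)*s_{t-1}."""
        forecasts, level = [], series[0]
        for y in series:
            forecasts.append(level)           # forecast issued before observing y
            level = alpha * y + (1.0 - alpha) * level
        return forecasts, level               # `level` is the forecast for the next period

    monthly_counts = [120, 132, 128, 141, 150, 149, 158]   # placeholder crime counts
    fitted, next_forecast = simple_exponential_smoothing(monthly_counts)
    print(next_forecast)
    ```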

  17. Solution of some types of differential equations: operational calculus and inverse differential operators.

    PubMed

    Zhukovsky, K

    2014-01-01

    We present a general method of operational nature to analyze and obtain solutions for a variety of equations of mathematical physics and related mathematical problems. We construct inverse differential operators and produce operational identities, involving inverse derivatives and families of generalised orthogonal polynomials, such as Hermite and Laguerre polynomial families. We develop the methodology of inverse and exponential operators, employing them for the study of partial differential equations. Advantages of the operational technique, combined with the use of integral transforms, generating functions with exponentials and their integrals, for solving a wide class of partial derivative equations, related to heat, wave, and transport problems, are demonstrated.

  18. An allometric scaling relation based on logistic growth of cities

    NASA Astrophysics Data System (ADS)

    Chen, Yanguang

    2014-08-01

    The relationships between urban area and population size have been empirically demonstrated to follow the scaling law of allometric growth. This allometric scaling is based on exponential growth of city size and can be termed "exponential allometry", which is associated with the concepts of fractals. However, both city population and urban area comply with the course of logistic growth rather than exponential growth. In this paper, I will present a new allometric scaling based on logistic growth to solve the abovementioned problem. The logistic growth is a process of replacement dynamics. Defining a pair of replacement quotients as new measurements, which are functions of urban area and population, we can derive an allometric scaling relation from the logistic processes of urban growth, which can be termed "logistic allometry". The exponential allometric relation between urban area and population is the approximate expression of the logistic allometric equation when the city size is not large enough. The proper range of the allometric scaling exponent value is reconsidered through the logistic process. Then, a medium-sized city of Henan Province, China, is employed as an example to validate the new allometric relation. The logistic allometry is helpful for further understanding the fractal property and self-organized process of urban evolution in the right perspective.

  19. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However in shales, substantial hydrogen content is associated with solid and fluid signals and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in material, medical and food sciences.
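
    A hedged single-component sketch of the underlying idea (not the SGE inversion itself, which recovers full relaxation-time distributions): jointly fitting one Gaussian and one exponential decay to a synthetic relaxation signal separates solid-like and fluid-like hydrogen contributions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss_plus_exp(t, a_g, t2_g, a_e, t2_e):
        """Solid-like Gaussian decay plus fluid-like exponential decay."""
        return a_g * np.exp(-(t / t2_g) ** 2) + a_e * np.exp(-t / t2_e)

    t = np.linspace(0.0, 50.0, 500)                                  # ms
    signal = gauss_plus_exp(t, 0.6, 0.8, 0.4, 12.0)                  # synthetic data
    signal += np.random.default_rng(2).normal(0.0, 0.004, t.size)

    popt, _ = curve_fit(gauss_plus_exp, t, signal, p0=(0.5, 1.0, 0.5, 10.0))
    print("Gaussian T2 = %.2f ms, exponential T2 = %.1f ms" % (popt[1], popt[3]))
    ```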

  20. The Link between Nutrition and Physical Activity in Increasing Academic Achievement

    ERIC Educational Resources Information Center

    Asigbee, Fiona M.; Whitney, Stephen D.; Peterson, Catherine E.

    2018-01-01

    Background: Research demonstrates a link between decreased cognitive function in overweight school-aged children and improved cognitive function among students with high fitness levels and children engaging in regular physical activity (PA). The purpose of this study was to examine whether regular PA and proper nutrition together had a significant…

  1. An interior-point method for total variation regularized positron emission tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Bai, Bing

    2012-03-01

    There has been a great deal of recent work on total variation (TV) regularized tomographic image reconstruction. Much of it uses gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. We then use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use a preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges quickly and that the convergence is insensitive to the values of the regularization and reconstruction parameters.
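    A schematic log-barrier formulation of this kind of problem (the notation and the way the non-smooth TV term is handled here are generic illustrations, not necessarily the paper's exact construction):

```latex
% A: system matrix, y: measured counts, \beta: TV weight; auxiliary variables u_j
% bound the gradient magnitudes so the exact (non-smooth) TV term can be used.
\min_{x \ge 0,\; u}\;
  \sum_i \Big[ (Ax)_i - y_i \log (Ax)_i \Big]
  + \beta \sum_j u_j
  - \mu \sum_j \log\!\Big( u_j^2 - \big|(\nabla x)_j\big|^2 \Big) ,
\qquad \mu \downarrow 0 .
```

    Each subproblem is solved approximately (e.g., with PCG), and the barrier weight mu is decreased (equivalently, the parameter 1/mu is increased) between subproblems.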

  2. Universal patterns of inequality

    NASA Astrophysics Data System (ADS)

    Banerjee, Anand; Yakovenko, Victor M.

    2010-07-01

    Probability distributions of money, income and energy consumption per capita are studied for ensembles of economic agents. The principle of entropy maximization for partitioning of a limited resource gives exponential distributions for the investigated variables. A non-equilibrium difference of money temperatures between different systems generates net fluxes of money and population. To describe income distribution, a stochastic process with additive and multiplicative components is introduced. The resultant distribution interpolates between exponential at the low end and power law at the high end, in agreement with the empirical data for the USA. We show that the increase in income inequality in the USA originates primarily from the increase in the income fraction going to the upper tail, which now exceeds 20% of the total income. Analyzing the data from the World Resources Institute, we find that the distribution of energy consumption per capita around the world can be approximately described by the exponential function. Comparing the data for 1990, 2000 and 2005, we discuss the effect of globalization on the inequality of energy consumption.

  3. Magnetic pattern at supergranulation scale: the void size distribution

    NASA Astrophysics Data System (ADS)

    Berrilli, F.; Scardigli, S.; Del Moro, D.

    2014-08-01

    The large-scale magnetic pattern observed in the photosphere of the quiet Sun is dominated by the magnetic network. This network, created by photospheric magnetic fields swept into convective downflows, delineates the boundaries of large-scale cells of overturning plasma and exhibits "voids" in magnetic organization. These voids include internetwork fields, which are mixed-polarity sparse magnetic fields that populate the inner part of network cells. To single out voids and to quantify their intrinsic pattern we applied a fast circle-packing-based algorithm to 511 SOHO/MDI high-resolution magnetograms acquired during the unusually long solar activity minimum between cycles 23 and 24. The computed void distribution function shows a quasi-exponential decay behavior in the range 10-60 Mm. The lack of distinct flow scales in this range corroborates the hypothesis of multi-scale motion flows at the solar surface. In addition to the quasi-exponential decay, we have found that the voids depart from a simple exponential decay at about 35 Mm.

  4. Humans Can Adopt Optimal Discounting Strategy under Real-Time Constraints

    PubMed Central

    Schweighofer, N; Shishida, K; Han, C. E; Okamoto, Y; Tanaka, S. C; Yamawaki, S; Doya, K

    2006-01-01

    Critical to our many daily choices between larger delayed rewards and smaller, more immediate rewards are the shape and the steepness of the function that discounts rewards with time. Although research in artificial intelligence favors exponential discounting in uncertain environments, studies with humans and animals have consistently shown hyperbolic discounting. We investigated how humans perform in a reward decision task with temporal constraints, in which each choice affects the time remaining for later trials, and in which the delays vary at each trial. We demonstrated that most of our subjects adopted exponential discounting in this experiment. Further, we confirmed analytically that exponential discounting, with a decay rate comparable to that used by our subjects, maximized the total reward gain in our task. Our results suggest that the particular shape and steepness of temporal discounting are determined by the task that the subject is facing, and question the notion of hyperbolic reward discounting as a universal principle. PMID:17096592
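    For reference, the two discount functions being contrasted, for a reward of amount A delivered after delay D with discount rate k:

```latex
V_{\mathrm{exp}}(D) = A\, e^{-k D} ,
\qquad
V_{\mathrm{hyp}}(D) = \frac{A}{1 + k D} .
```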

  5. Exponential evolution: implications for intelligent extraterrestrial life.

    PubMed

    Russell, D A

    1983-01-01

    Some measures of biologic complexity, including maximal levels of brain development, are exponential functions of time through intervals of 10^6 to 10^9 yrs. Biological interactions apparently stimulate evolution but physical conditions determine the time required to achieve a given level of complexity. Trends in brain evolution suggest that other organisms could attain human levels within approximately 10^7 yrs. The number (N) and longevity (L) terms in appropriate modifications of the Drake Equation, together with trends in the evolution of biological complexity on Earth, could provide rough estimates of the prevalence of life forms at specified levels of complexity within the Galaxy. If life occurs throughout the cosmos, exponential evolutionary processes imply that higher intelligence will soon (10^9 yrs) become more prevalent than it now is. Changes in the physical universe become less rapid as time increases from the Big Bang. Changes in biological complexity may be most rapid at such later times. This lends a unique and symmetrical importance to early and late universal times.

  6. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
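    The survival function of an m-component exponential mixture of the kind discussed above has the standard form (weights and rates are generic symbols, not the paper's fitted values):

```latex
S(t) = \sum_{i=1}^{m} w_i\, e^{-\lambda_i t} ,
\qquad w_i \ge 0 , \quad \sum_{i=1}^{m} w_i = 1 , \quad \lambda_i > 0 .
```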

  7. Statistics of natural reverberation enable perceptual separation of sound and space

    PubMed Central

    Traer, James; McDermott, Josh H.

    2016-01-01

    In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us. PMID:27834730

  8. Statistics of natural reverberation enable perceptual separation of sound and space.

    PubMed

    Traer, James; McDermott, Josh H

    2016-11-29

    In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.

  9. Real-time modeling of primitive environments through wavelet sensors and Hebbian learning

    NASA Astrophysics Data System (ADS)

    Vaccaro, James M.; Yaworsky, Paul S.

    1999-06-01

    Modeling the world through sensory input necessarily provides a unique perspective for the observer. Given a limited perspective, objects and events cannot always be encoded precisely but must involve crude, quick approximations to deal with sensory information in a real-time manner. As an example, when avoiding an oncoming car, a pedestrian needs to identify the fact that a car is approaching before ascertaining the model or color of the vehicle. In our methodology, we use wavelet-based sensors with self-organized learning to encode basic sensory information in real-time. The wavelet-based sensors provide necessary transformations while a rank-based Hebbian learning scheme encodes a self-organized environment through translation, scale and orientation invariant sensors. Such a self-organized environment is made possible by combining wavelet sets which are orthonormal, log-scale with linear orientation and have automatically generated membership functions. In earlier work we used Gabor wavelet filters, rank-based Hebbian learning and an exponential modulation function to encode textural information from images. Many different types of modulation are possible, but based on biological findings the exponential modulation function provided a good approximation of first spike coding of 'integrate and fire' neurons. These types of Hebbian encoding schemes (e.g., exponential modulation, etc.) are useful for quick response and learning, provide several advantages over contemporary neural network learning approaches, and have been found to quantize data nonlinearly. By combining wavelets with Hebbian learning we can provide a real-time front-end for modeling an intelligent process, such as the autonomous control of agents in a simulated environment.

  10. A two-component Matched Interface and Boundary (MIB) regularization for charge singularity in implicit solvation

    NASA Astrophysics Data System (ADS)

    Geng, Weihua; Zhao, Shan

    2017-12-01

    We present a new Matched Interface and Boundary (MIB) regularization method for treating charge singularity in solvated biomolecules whose electrostatics are described by the Poisson-Boltzmann (PB) equation. In a regularization method, by decomposing the potential function into two or three components, the singular component can be analytically represented by the Green's function, while the other components possess a higher regularity. Our new regularization combines the efficiency of two-component schemes with the accuracy of three-component schemes. Based on this regularization, a new MIB finite difference algorithm is developed for solving both linear and nonlinear PB equations, where the nonlinearity is handled by using the inexact Newton method. Compared with the existing MIB PB solver based on a three-component regularization, the present algorithm is simpler to implement by circumventing the work to solve a boundary value Poisson equation inside the molecular interface and to compute related interface jump conditions numerically. Moreover, the new MIB algorithm becomes computationally less expensive, while maintaining the same second-order accuracy. This is numerically verified by calculating the electrostatic potential and solvation energy on the Kirkwood sphere, for which analytical solutions are available, and on a series of proteins of various sizes.
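    A generic two-component splitting of the kind used in PB regularization schemes (notation assumed; the paper's scheme additionally draws on ideas from three-component splittings): the singular Coulomb part generated by point charges q_k at positions r_k in the solute dielectric is written analytically via the Green's function, and only the smoother remainder is solved for numerically.

```latex
\phi(\mathbf{r}) = \phi_{\mathrm{reg}}(\mathbf{r}) + \phi_{\mathrm{sing}}(\mathbf{r}) ,
\qquad
\phi_{\mathrm{sing}}(\mathbf{r}) = \sum_{k}
  \frac{q_k}{4 \pi \epsilon_m \, \lvert \mathbf{r} - \mathbf{r}_k \rvert } .
```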

  11. The Adler D-function for N = 1 SQCD regularized by higher covariant derivatives in the three-loop approximation

    NASA Astrophysics Data System (ADS)

    Kataev, A. L.; Kazantsev, A. E.; Stepanyantz, K. V.

    2018-01-01

    We calculate the Adler D-function for N = 1 SQCD in the three-loop approximation using the higher covariant derivative regularization and the NSVZ-like subtraction scheme. The recently formulated all-order relation between the Adler function and the anomalous dimension of the matter superfields defined in terms of the bare coupling constant is first considered and generalized to the case of an arbitrary representation for the chiral matter superfields. The correctness of this all-order relation is explicitly verified at the three-loop level. The special renormalization scheme in which this all-order relation remains valid for the D-function and the anomalous dimension defined in terms of the renormalized coupling constant is constructed in the case of using the higher derivative regularization. The analytic expression for the Adler function for N = 1 SQCD is found in this scheme to order O(α_s^2). The problem of scheme-dependence of the D-function and the NSVZ-like equation is briefly discussed.

  12. Doubling Time for Nonexponential Families of Functions

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2010-01-01

    One special characteristic of any exponential growth or decay function f(t) = Ab[superscript t] is its unique doubling time or half-life, each of which depends only on the base "b". The half-life is used to characterize the rate of decay of any radioactive substance or the rate at which the level of a medication in the bloodstream decays as it is…
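    The point being contrasted is the constant doubling time of exponentials: for f(t) = A b^t with b > 1,

```latex
f(t + T) = 2 f(t) \;\Longleftrightarrow\; b^{T} = 2
\;\Longleftrightarrow\; T = \frac{\ln 2}{\ln b} ,
% independent of t; for 0 < b < 1 the analogous quantity \ln 2 / \lvert \ln b \rvert
% is the half-life.  Non-exponential families have no such constant doubling time.
```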

  13. Concentration of the L_1-norm of trigonometric polynomials and entire functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malykhin, Yu V; Ryutin, K S

    2014-11-30

    For any sufficiently large n, the minimal measure of a subset of [−π,π] on which some nonzero trigonometric polynomial of order ≤ n gains half of the L_1-norm is shown to be π/(n+1). A similar result for entire functions of exponential type is established. Bibliography: 13 titles.

  14. Capitalizing on the Dynamic Features of Excel to Consider Growth Rates and Limits

    ERIC Educational Resources Information Center

    Taylor, Daniel; Moore-Russo, Deborah

    2012-01-01

    It is common for both algebra and calculus instructors to use power functions of various degrees as well as exponential functions to examine and compare rates of growth. This can be done on a chalkboard, with a graphing calculator, or with a spreadsheet. Instructors often are careful to connect the symbolic and graphical (and occasionally the…

  15. Effects of Economy Type and Nicotine on the Essential Value of Food in Rats

    ERIC Educational Resources Information Center

    Cassidy, Rachel N.; Dallery, Jesse

    2012-01-01

    The exponential demand equation proposed by Hursh and Silberberg (2008) provides an estimate of the essential value of a good as a function of price. The model predicts that essential value should remain constant across changes in the magnitude of a reinforcer, but may change as a function of motivational operations. In Experiment 1, rats' demand…
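    The commonly cited form of the Hursh and Silberberg (2008) exponential demand equation, shown here for reference (the study's fitted parameter values are not reproduced):

```latex
% Q: consumption at price C;  Q_0: consumption at zero price;
% k: range of consumption in log units;  \alpha: rate constant indexing essential value.
\log Q = \log Q_0 + k \left( e^{-\alpha Q_0 C} - 1 \right) .
```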

  16. Application of a linked stress release model in Corinth Gulf and Central Ionian Islands (Greece)

    NASA Astrophysics Data System (ADS)

    Mangira, Ourania; Vasiliadis, Georgios; Papadimitriou, Eleftheria

    2017-06-01

    Spatio-temporal stress changes and interactions between adjacent fault segments are among the most important components in seismic hazard assessment, as they can alter the occurrence probability of strong earthquakes on these segments. The interactions between adjacent areas are investigated by means of the linked stress release model for moderate earthquakes (M ≥ 5.2) in the Corinth Gulf and the Central Ionian Islands (Greece). The study areas were divided into two subareas, based on seismotectonic criteria. The seismicity of each subarea is investigated by means of a stochastic point process, and its behavior is determined by the conditional intensity function, which usually takes an exponential form. A conditional intensity function of Weibull form is used to identify the most appropriate among the models (simple, independent and linked stress release model) for interpreting the earthquake generation process. The appropriateness of the models was decided after evaluation via the Akaike information criterion. Although the curves of the conditional intensity functions exhibit similar behavior, the exponential-type conditional intensity function seems to fit the data better.

  17. DOUBLE-EXPONENTIAL FITTING FUNCTION FOR EVALUATION OF COSMIC-RAY-INDUCED NEUTRON FLUENCE RATE IN ARBITRARY LOCATIONS.

    PubMed

    Li, Huailiang; Yang, Yigang; Wang, Qibiao; Tuo, Xianguo; Julian Henderson, Mark; Courtois, Jérémie

    2017-12-01

    The fluence rate of cosmic-ray-induced neutrons (CRINs) varies with many environmental factors. While many simulation and experimental studies have focused mainly on the variation with altitude, the specific way in which the CRIN fluence rate varies with geomagnetic cutoff rigidity (which is related to latitude and longitude) has not been well characterized. In this article, a double-exponential fitting function, F = (A1·e^(−A2·CR) + A3)·e^(B1·Al), is proposed to evaluate the CRIN fluence rate as it varies with geomagnetic cutoff rigidity (CR) and altitude (Al). The fit achieves R^2 values of up to 0.9954, and the CRIN fluence rate at an arbitrary location (latitude, longitude and altitude) can be easily evaluated with the proposed function. Field measurements of the CRIN fluence rate and H*(10) rate on Mt. Emei and Mt. Bowa were carried out using FHT-762 and LB 6411 neutron probes, respectively, and the evaluation shows that the fitting function agrees well with the measurement results. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
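    A minimal curve-fitting sketch of the proposed functional form (the synthetic data and parameter values are assumptions for illustration; they are not the article's fitted coefficients):

```python
import numpy as np
from scipy.optimize import curve_fit

def crin_rate(X, A1, A2, A3, B1):
    """F(CR, Al) = (A1*exp(-A2*CR) + A3) * exp(B1*Al)."""
    CR, Al = X                                    # cutoff rigidity (GV), altitude (km)
    return (A1 * np.exp(-A2 * CR) + A3) * np.exp(B1 * Al)

rng = np.random.default_rng(4)
CR = rng.uniform(1.0, 15.0, 200)                  # synthetic sampling of cutoff rigidity
Al = rng.uniform(0.0, 4.0, 200)                   # synthetic sampling of altitude
F = crin_rate((CR, Al), 0.012, 0.15, 0.004, 1.0) * rng.normal(1.0, 0.03, 200)

popt, _ = curve_fit(crin_rate, (CR, Al), F, p0=(0.01, 0.1, 0.005, 0.8))
print("A1, A2, A3, B1 =", np.round(popt, 4))
```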

  18. An exponential decay model for mediation.

    PubMed

    Fritz, Matthew S

    2014-10-01

    Mediation analysis is often used to investigate mechanisms of change in prevention research. Results finding mediation are strengthened when longitudinal data are used because of the need for temporal precedence. Current longitudinal mediation models have focused mainly on linear change, but many variables in prevention change nonlinearly across time. The most common solution to nonlinearity is to add a quadratic term to the linear model, but this can lead to the use of the quadratic function to explain all nonlinearity, regardless of theory and the characteristics of the variables in the model. The current study describes the problems that arise when quadratic functions are used to describe all nonlinearity and how the use of nonlinear functions, such as exponential decay, addresses many of these problems. In addition, nonlinear models provide several advantages over polynomial models including usefulness of parameters, parsimony, and generalizability. The effects of using nonlinear functions for mediation analysis are then discussed and a nonlinear growth curve model for mediation is presented. An empirical example using data from a randomized intervention study is then provided to illustrate the estimation and interpretation of the model. Implications, limitations, and future directions are also discussed.

  19. An Exponential Decay Model for Mediation

    PubMed Central

    Fritz, Matthew S.

    2013-01-01

    Mediation analysis is often used to investigate mechanisms of change in prevention research. Results finding mediation are strengthened when longitudinal data are used because of the need for temporal precedence. Current longitudinal mediation models have focused mainly on linear change, but many variables in prevention change nonlinearly across time. The most common solution to nonlinearity is to add a quadratic term to the linear model, but this can lead to the use of the quadratic function to explain all nonlinearity, regardless of theory and the characteristics of the variables in the model. The current study describes the problems that arise when quadratic functions are used to describe all nonlinearity and how the use of nonlinear functions, such as exponential decay, addresses many of these problems. In addition, nonlinear models provide several advantages over polynomial models including usefulness of parameters, parsimony, and generalizability. The effects of using nonlinear functions for mediation analysis are then discussed and a nonlinear growth curve model for mediation is presented. An empirical example using data from a randomized intervention study is then provided to illustrate the estimation and interpretation of the model. Implications, limitations, and future directions are also discussed. PMID:23625557
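    A generic exponential-decay trajectory of the kind used in such nonlinear growth curve models (notation illustrative, not the article's exact parameterization):

```latex
y(t) = a + b\, e^{-c t} , \qquad c > 0 ,
% so y starts at a + b, approaches the asymptote a as t grows, b is the total
% amount of change, and \ln 2 / c is the half-life of that change.
```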

  20. Function algorithms for MPP scientific subroutines, volume 1

    NASA Technical Reports Server (NTRS)

    Gouch, J. G.

    1984-01-01

    Design documentation and user documentation for function algorithms for the Massively Parallel Processor (MPP) are presented. The contract specifies development of MPP assembler instructions to perform the following functions: natural logarithm; exponential (e to the x power); square root; sine; cosine; and arctangent. To fulfill the requirements of the contract, parallel array and scalar implementations for these functions were developed on the PDP11/34 Program Development and Management Unit (PDMU) that is resident at the MPP testbed installation located at the NASA Goddard facility.

  1. Investigation of the spinfoam path integral with quantum cuboid intertwiners

    NASA Astrophysics Data System (ADS)

    Bahr, Benjamin; Steinhaus, Sebastian

    2016-05-01

    In this work, we investigate the 4d path integral for Euclidean quantum gravity on a hypercubic lattice, as given by the spinfoam model by Engle, Pereira, Rovelli, Livine, Freidel and Krasnov. To tackle the problem, we restrict to a set of quantum geometries that reflects the large amount of lattice symmetries. In particular, the sum over intertwiners is restricted to quantum cuboids, i.e. coherent intertwiners which describe a cuboidal geometry in the large-j limit. Using asymptotic expressions for the vertex amplitude, we find several interesting properties of the state sum. First of all, the value of coupling constants in the amplitude functions determines whether geometric or nongeometric configurations dominate the path integral. Secondly, there is a critical value of the coupling constant α , which separates two phases. In both phases, the diffeomorphism symmetry appears to be broken. In one, the dominant contribution comes from highly irregular, in the other from highly regular configurations, both describing flat Euclidean space with small quantum fluctuations around them, viewed in different coordinate systems. On the critical point diffeomorphism symmetry is nearly restored, however. Thirdly, we use the state sum to compute the physical norm of kinematical states, i.e. their norm in the physical Hilbert space. We find that states which describe boundary geometry with high torsion have an exponentially suppressed physical norm. We argue that this allows one to exclude them from the state sum in calculations.

  2. MRI reconstruction with joint global regularization and transform learning.

    PubMed

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for the Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms to the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to the algorithms which use either of the patchwise transform learning or global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Year-round measurements of CH4 exchange in a forested drained peatland using automated chambers

    NASA Astrophysics Data System (ADS)

    Korkiakoski, Mika; Koskinen, Markku; Penttilä, Timo; Arffman, Pentti; Ojanen, Paavo; Minkkinen, Kari; Laurila, Tuomas; Lohila, Annalea

    2016-04-01

    Pristine peatlands are usually carbon-accumulating ecosystems and sources of methane (CH4). Draining peatlands for forestry increases the thickness of the oxic layer, thus enhancing CH4 oxidation, which leads to decreased CH4 emissions. Closed chambers are commonly used to estimate the greenhouse gas exchange between the soil and the atmosphere. However, the closed chamber technique alters the gas concentration gradient, making the concentration development over time non-linear. Selecting the correct fitting method is important, as it can be the largest source of uncertainty in flux calculation. We measured CH4 exchange rates and their diurnal and seasonal variations in a nutrient-rich drained peatland located in southern Finland. The original fen was drained for forestry in the 1970s, and the tree stand is now a mixture of Scots pine, Norway spruce and Downy birch. Our system consisted of six transparent polycarbonate chambers and stainless steel frames, positioned on different types of field and moss layer. During winter, the frame was raised above the snowpack with extension collars and the height of the snowpack inside the chamber was measured regularly. The chambers were closed hourly and the sample gas was drawn into a cavity ring-down spectrometer and analysed for CH4, CO2 and H2O concentrations with 5-second time resolution. The concentration change with time at the beginning of a closure was determined with linear and exponential fits. The results show that linear regression systematically underestimated the CH4 flux by 20-50% compared to exponential regression. On the other hand, the exponential regression did not work reliably for small fluxes (< 3.5 μg CH4 m-2 h-1): using exponential regression in such cases typically resulted in anomalously large fluxes and high deviation. We therefore recommend first calculating the flux with linear regression and, if the flux is high enough, recalculating it with exponential regression and using this value in later analyses. The forest floor at the site (including the ground vegetation) acted as a CH4 sink most of the time. CH4 emission peaks were occasionally observed, particularly in spring during snow melt and during rainfall events in summer. Diurnal variation was observed mainly in summer. The net CH4 exchange over the two-year measurement period in the six chambers varied from -31 to -155 mg CH4 m-2 yr-1, the average being -67 mg CH4 m-2 yr-1. However, these values do not include the ditches, which typically act as a significant source of CH4.
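    A minimal sketch of the two fitting approaches and the recommended decision rule (the exponential closure model, units, chamber geometry factor and threshold value are illustrative assumptions, not the authors' processing code):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def exp_model(t, C_inf, C0, k):
    """Assumed closure model: concentration relaxes from C0 towards C_inf at rate k."""
    return C_inf + (C0 - C_inf) * np.exp(-k * t)

def chamber_flux(t, C, vol_per_area, flux_threshold):
    """Linear-first rule: keep the linear estimate for small fluxes, otherwise refit
    exponentially and use the initial slope.  Units here are illustrative only
    (the study's small-flux criterion was 3.5 ug CH4 m-2 h-1)."""
    flux_lin = linregress(t, C).slope * vol_per_area
    if abs(flux_lin) < flux_threshold:
        return flux_lin
    p0 = (C[-1], C[0], 1e-3)
    (C_inf, C0, k), _ = curve_fit(exp_model, t, C, p0=p0, maxfev=10000)
    return k * (C_inf - C0) * vol_per_area        # dC/dt at t = 0, scaled to a flux

t = np.linspace(0.0, 300.0, 61)                   # 5-min closure sampled every 5 s
C = exp_model(t, 2.1, 1.9, 4e-3)                  # synthetic CH4 concentration (ppm)
C = C + np.random.default_rng(1).normal(0.0, 0.002, t.size)
print(chamber_flux(t, C, vol_per_area=0.2, flux_threshold=5e-5))
```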

  4. Voluntary leadership roles in religious groups and rates of change in functional status during older adulthood

    PubMed Central

    Krause, Neal

    2013-01-01

    Linear growth curve modeling was used to compare rates of change in functional status between three groups of older adults: Individuals holding voluntary lay leadership positions in a church, regular church attenders who were not leaders, and those not regularly attending church. Functional status was tracked longitudinally over a 4-year period in a national sample of 1,152 Black and White older adults whose religious backgrounds were either Christian or unaffiliated. Leaders had significantly slower trajectories of increase in both the number of physical impairments and the severity of those impairments. Although regular church attenders who were not leaders had lower mean levels of impairment on both measures, compared with those not regularly attending church, the two groups of non-leaders did not differ from one another in their rates of impairment increase. Leadership roles may contribute to longer maintenance of physical ability in late life, and opportunities for voluntary leadership may help account for some of the health benefits of religious participation. PMID:23606309

  5. An algorithm for variational data assimilation of contact concentration measurements for atmospheric chemistry models

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir

    2014-05-01

    Contact concentration measurement data assimilation is considered for convection-diffusion-reaction models originating from atmospheric chemistry studies. The high dimensionality of the models imposes strict requirements on the computational efficiency of the algorithms. Data assimilation is carried out within the variational approach on a single time step of the approximated model. A control function is introduced into the source term of the model to provide flexibility for data assimilation. This function is evaluated as the minimum of a target functional that connects its norm to the misfit between measured and model-simulated data. In this case the mathematical model acts as a natural Tikhonov regularizer for the ill-posed measurement data inversion problem. This provides a flow-dependent and physically plausible structure for the resulting analysis and reduces the need to calculate the model error covariance matrices that are required in the conventional approach to data assimilation. The advantage comes at the cost of the adjoint problem solution. This issue is addressed within the framework of a splitting-based realization of the basic convection-diffusion-reaction model. The model is split with respect to physical processes and spatial variables. Contact measurement data are assimilated on each one-dimensional convection-diffusion splitting stage. In this case a computationally efficient direct scheme for both the direct and adjoint problem solutions can be constructed based on the matrix sweep method. The data assimilation (or regularization) parameter that regulates the ratio between model and data in the resulting analysis is obtained with the Morozov discrepancy principle. Proper performance of the algorithm requires an estimate of the measurement noise. In the case of Gaussian errors, the probability that the chi-squared-based estimate is an upper bound acts as the assimilation parameter. The solution obtained can be used as the initial guess for data assimilation algorithms that assimilate outside the splitting stages and involve iterations. The splitting stage responsible for chemical transformation processes is realized with an explicit discrete-analytical scheme with respect to time. The scheme is based on analytical extraction of the exponential terms from the solution. This guarantees a positive sign for the evaluated concentrations. The splitting-based structure of the algorithm allows efficient parallel realization. The work is partially supported by Program No. 4 of the Presidium of RAS and Program No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187, and by Integration Projects of SD RAS No. 8 and 35. Our studies are in line with the goals of COST Action ES1004.
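    A schematic form of the single-step variational functional described above (notation assumed):

```latex
% u: control function in the source term;  \varphi(u): model state it produces;
% H: measurement operator;  y: contact measurements;  \alpha: regularization
% (assimilation) parameter chosen by the Morozov discrepancy principle
% \lVert H\varphi(u_\alpha) - y \rVert \approx \delta, with \delta the noise level.
J(u) = \alpha\, \lVert u \rVert^{2} + \lVert H \varphi(u) - y \rVert^{2}
\;\longrightarrow\; \min_{u} .
```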

  6. Discrete sudden perturbation theory for inelastic scattering. I. Quantum and semiclassical treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cross, R.J.

    1985-12-01

    A double perturbation theory is constructed to treat rotationally and vibrationally inelastic scattering. It uses both the elastic scattering from the spherically averaged potential and the infinite-order sudden (IOS) approximation as the unperturbed solutions. First, a standard perturbation expansion is done to express the radial wave functions in terms of the elastic wave functions. The resulting coupled equations are transformed to the discrete-variable representation where the IOS equations are diagonal. Then, the IOS solutions are removed from the equations, which are solved by an exponential perturbation approximation. The results for Ar+N₂ are very much more accurate than the IOS and somewhat more accurate than a straight first-order exponential perturbation theory. The theory is then converted into a semiclassical, time-dependent form by using the WKB approximation. The result is an integral of the potential times a slowly oscillating factor over the classical trajectory. A method of interpolating the result is given so that the calculation is done at the average velocity for a given transition. With this procedure, the semiclassical version of the theory is more accurate than the quantum version and very much faster. Calculations on Ar+N₂ show the theory to be much more accurate than the infinite-order sudden (IOS) approximation and the exponential time-dependent perturbation theory.

  7. Stellar Surface Brightness Profiles of Dwarf Galaxies

    NASA Astrophysics Data System (ADS)

    Herrmann, Kimberly A.; LITTLE THINGS Team

    2012-01-01

    Radial stellar surface brightness profiles of spiral galaxies can be classified into three types: (I) single exponential, (II) truncated: the light falls off with one exponential out to a break radius and then falls off more steeply, and (III) anti-truncated: the light falls off with one exponential out to a break radius and then falls off less steeply. Stellar surface brightness profile breaks are also found in dwarf disk galaxies, but with an additional category: (FI) flat-inside: the light is roughly constant or increasing and then falls off beyond a break. We have been re-examining the multi-wavelength stellar disk profiles of 141 dwarf galaxies, primarily from Hunter & Elmegreen (2006, 2004). Each dwarf has data in up to 11 wavelength bands: FUV and NUV from GALEX, UBVJHK and H-alpha from ground-based observations, and 3.6 and 4.5 microns from Spitzer. In this talk, I will highlight results from a semi-automatic fitting of this data set, including: (1) statistics of break locations and other properties as a function of wavelength and profile type, (2) color trends and radial mass distribution as a function of profile type, and (3) the relationship of the break radius to the kinematics and density profiles of atomic hydrogen gas in the 41 dwarfs of the LITTLE THINGS subsample. We gratefully acknowledge funding for this research from the National Science Foundation (AST-0707563).

  8. Fundamental Flux Equations for Fracture-Matrix Interactions with Linear Diffusion

    NASA Astrophysics Data System (ADS)

    Oldenburg, C. M.; Zhou, Q.; Rutqvist, J.; Birkholzer, J. T.

    2017-12-01

    The conventional dual-continuum models are only applicable for late-time behavior of pressure propagation in fractured rock, while discrete-fracture-network models may explicitly deal with matrix blocks at high computational expense. To address these issues, we developed a unified-form diffusive flux equation for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular matrix blocks (squares, cubes, rectangles, and rectangular parallelepipeds) by partitioning the entire dimensionless-time domain (Zhou et al., 2017a, b). For each matrix block, this flux equation consists of the early-time solution up until a switch-over time after which the late-time solution is applied to create continuity from early to late time. The early-time solutions are based on three-term polynomial functions in terms of square root of dimensionless time, with the coefficients dependent on dimensionless area-to-volume ratio and aspect ratios for rectangular blocks. For the late-time solutions, one exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic blocks. The time-partitioning method was also used for calculating pressure/concentration/temperature distribution within a matrix block. The approximate solution contains an error-function solution for early times and an exponential solution for late times, with relative errors less than 0.003. These solutions form the kernel of multirate and multidimensional hydraulic, solute and thermal diffusion in fractured reservoirs.
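    One plausible schematic reading of the time-partitioned flux described above (the specific powers, coefficients and switch-over time are assumptions and are not reproduced from the paper):

```latex
q_D(t_D) \;\approx\;
\begin{cases}
\displaystyle \sum_{n=1}^{3} a_n\, t_D^{\,n/2} , & t_D \le t_s
  \quad \text{(three-term polynomial in } \sqrt{t_D}\text{)} ,\\[8pt]
\displaystyle \sum_{j} c_j\, e^{-\lambda_j t_D} , & t_D > t_s
  \quad \text{(one term for isotropic blocks, more for anisotropic ones)} .
\end{cases}
```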

  9. Exponential growth and Gaussian-like fluctuations of solutions of stochastic differential equations with maximum functionals

    NASA Astrophysics Data System (ADS)

    Appleby, J. A. D.; Wu, H.

    2008-11-01

    In this paper we consider functional differential equations subjected either to instantaneous state-dependent noise or to a white noise perturbation. The drift of the equations depends linearly on the current value and on the maximum of the solution. The functional term always provides positive feedback, while the instantaneous term can be mean-reverting or can exhibit positive feedback. We show in the white noise case that if the instantaneous term is mean-reverting and dominates the history term, then solutions are recurrent, and upper bounds on the a.s. growth rate of the partial maxima of the solution can be found. When the instantaneous term is weaker, or is of positive feedback type, we determine necessary and sufficient conditions on the diffusion coefficient which ensure the exact exponential growth of solutions. An application of these results to an inefficient financial market populated by reference traders and speculators is given, in which the difference between the current instantaneous returns and the maximum of the returns over the last few time units is used to determine trading strategies.

  10. Global exponential stability of octonion-valued neural networks with leakage delay and mixed delays.

    PubMed

    Popa, Călin-Adrian

    2018-06-08

    This paper discusses octonion-valued neural networks (OVNNs) with leakage delay, time-varying delays, and distributed delays, for which the states, weights, and activation functions belong to the normed division algebra of octonions. The octonion algebra is a nonassociative and noncommutative generalization of the complex and quaternion algebras, but does not belong to the category of Clifford algebras, which are associative. In order to avoid the nonassociativity of the octonion algebra and also the noncommutativity of the quaternion algebra, the Cayley-Dickson construction is used to decompose the OVNNs into 4 complex-valued systems. By using appropriate Lyapunov-Krasovskii functionals, with double and triple integral terms, the free weighting matrix method, and simple and double integral Jensen inequalities, delay-dependent criteria are established for the exponential stability of the considered OVNNs. The criteria are given in terms of complex-valued linear matrix inequalities, for two types of Lipschitz conditions which are assumed to be satisfied by the octonion-valued activation functions. Finally, two numerical examples illustrate the feasibility, effectiveness, and correctness of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Novel use of UV broad-band excitation and stretched exponential function in the analysis of fluorescent dissolved organic matter: study of interaction between protein and humic-like components

    NASA Astrophysics Data System (ADS)

    Panigrahi, Suraj Kumar; Mishra, Ashok Kumar

    2017-09-01

    A combination of broad-band UV radiation (UV A and UV B; 250-400 nm) and a stretched exponential function (StrEF) has been utilised towards convenient and sensitive detection of fluorescent dissolved organic matter (FDOM). This approach enables accessing the gross fluorescence spectral signature of both protein-like and humic-like components in a single measurement. Commercial FDOM components are excited with the broad-band UV excitation, and the variation of the spectral profile as a function of varying component ratio is analysed. The underlying fluorescence dynamics and non-linear quenching of amino acid moieties are studied with the StrEF, exp(-V[Q]^β). The complex quenching pattern reflects the inner filter effect (IFE) as well as inter-component interactions. The inter-component interactions are essentially captured through the 'sphere of action' and 'dark complex' models. The broad-band UV excitation provides increased excitation energy, resulting in an increased excited-state population density and thereby in enhanced sensitivity.
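    A minimal sketch of fitting the stretched-exponential quenching model to relative fluorescence intensities (synthetic data; not the authors' analysis pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(Q, V, beta):
    """Relative fluorescence F/F0 under stretched-exponential quenching."""
    return np.exp(-V * Q**beta)

Q = np.linspace(0.05, 2.0, 21)                    # quencher concentration (arbitrary units)
rel_F = stretched_exp(Q, V=1.3, beta=0.7)         # synthetic F/F0 values
rel_F = rel_F + np.random.default_rng(2).normal(0.0, 0.01, Q.size)

(V_hat, beta_hat), _ = curve_fit(stretched_exp, Q, rel_F,
                                 p0=(1.0, 1.0), bounds=(0.0, [np.inf, 2.0]))
print(f"V = {V_hat:.3f}, beta = {beta_hat:.3f}")
```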

  12. Parameterized approximation of lacunarity functions derived from airborne laser scanning point clouds of forested areas

    NASA Astrophysics Data System (ADS)

    Székely, Balázs; Kania, Adam; Varga, Katalin; Heilmeier, Hermann

    2017-04-01

    Lacunarity, a measure of the spatial distribution of empty space, is found to be a useful descriptive quantity of forest structure. Its calculation, based on laser-scanned point clouds, results in a four-dimensional data set. The evaluation of the results needs sophisticated tools and visualization techniques. To simplify the evaluation, it is straightforward to use approximation functions fitted to the results. The lacunarity function L(r), being a measure of scale-independent structural properties, has a power-law character. Previous studies showed that the log(log(L(r))) transformation is suitable for the analysis of spatial patterns. Accordingly, transformed lacunarity functions can be approximated by appropriate functions either in the original or in the transformed domain. As input data we have used a number of laser-scanned point clouds of various forests. The lacunarity distribution has been calculated along a regular horizontal grid at various (relative) elevations. The lacunarity data cube has then been logarithm-transformed, and the resulting values became the input of parameter estimation at each point (point of interest, POI). In this way, a parameter set suitable for spatial analysis is generated at each POI. The expectation is that the horizontal variation and vertical layering of the vegetation can be characterized by this procedure. The results show that the transformed L(r) functions can typically be approximated by exponentials individually, and the residual values remain low in most cases. However, (1) the residuals may vary considerably from case to case, and (2) neighbouring POIs often give rather differing estimates both in horizontal and in vertical directions, of which the vertical variation seems to be more characteristic. In the vertical sense, the distribution of estimates shows abrupt changes in places, presumably related to the vertical structure of the forest. In low-relief areas horizontal similarity is more typical; in higher-relief areas horizontal similarity fades out over short distances. Some of the input data have been acquired in the framework of the ChangeHabitats2 project financed by the European Union. BS contributed as an Alexander von Humboldt Research Fellow.
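    A minimal per-POI fitting sketch consistent with the description above (the decaying-exponential form, box-size range and synthetic curve are assumptions, not the study's data or code):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(r, a, b, c):
    """Decaying-exponential approximation of the double-log-transformed lacunarity."""
    return a * np.exp(-b * r) + c

r = np.linspace(1.0, 50.0, 50)                    # gliding-box sizes (m), assumed
L = 1.0 + 4.0 * r**-0.8                           # synthetic lacunarity curve, L(r) > 1
y = np.log(np.log(L))                             # the log(log(L(r))) transform

(a, b, c), _ = curve_fit(model, r, y, p0=(2.0, 0.1, -2.0), maxfev=10000)
print(f"a = {a:.3f}, b = {b:.3f}, c = {c:.3f}")   # per-POI parameter set
```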

  13. A unified phase-field theory for the mechanics of damage and quasi-brittle failure

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Ying

    2017-06-01

    Although the phase-field method is one of the most promising candidates for the modeling of localized failure in solids, so far it has been applied almost exclusively to brittle fracture. In this work, a unified phase-field theory for the mechanics of damage and quasi-brittle failure is proposed within the framework of thermodynamics. Specifically, the crack phase-field and its gradient are introduced to regularize the sharp crack topology in a purely geometric context. The energy dissipation functional due to crack evolution and the stored energy functional of the bulk are characterized by a crack geometric function of polynomial type and an energetic degradation function of rational type, respectively. Standard arguments of thermodynamics then yield the macroscopic balance equation coupled with an extra evolution law of gradient type for the crack phase-field, governed by the aforesaid constitutive functions. The classical phase-field models for brittle fracture are recovered as particular examples. More importantly, the constitutive functions optimal for quasi-brittle failure are determined such that the proposed phase-field theory converges to a cohesive zone model for a vanishing length scale. The general softening laws frequently adopted for quasi-brittle failure, e.g., the linear, exponential, hyperbolic and Cornelissen et al. (1986) laws, can be reproduced or fitted with high precision. Except for the internal length scale, all the other model parameters can be determined from standard material properties (i.e., Young's modulus, failure strength, fracture energy and the target softening law). Some representative numerical examples are presented for validation. It is found that both the internal length scale and the mesh size have little influence on the overall global responses, so long as the former is well resolved by a sufficiently fine mesh. In particular, for the benchmark tests on concrete, the numerical load-displacement curves and crack paths both agree well with the experimental data, showing the validity of the proposed phase-field theory for the modeling of damage and quasi-brittle failure in solids.

  14. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Compression deformation behavior of Ti-6Al-4V alloy with cellular structures fabricated by electron beam melting.

    PubMed

    Cheng, X Y; Li, S J; Murr, L E; Zhang, Z B; Hao, Y L; Yang, R; Medina, F; Wicker, R B

    2012-12-01

    Ti-6Al-4V alloy with two kinds of open cellular structure, stochastic foam and reticulated mesh, was fabricated by additive manufacturing (AM) using electron beam melting (EBM), and the microstructure and mechanical properties of these samples, with high porosity in the range of 62%∼92%, were investigated. Optical observations showed that the cell struts and ligaments consist of primary α' martensite. These cellular structures have compressive strength (4∼113 MPa) and elastic modulus (0.2∼6.3 GPa) comparable to those of trabecular and cortical bone. The regular mesh structures exhibit higher specific strength than other reported metallic foams at identical specific stiffness. During compression, the EBM samples show a brittle response and undergo catastrophic failure after forming a crush band at peak loading. For the regular reticulated meshes these bands form at an angle of ∼45° to the compression axis, and this failure behavior was explained by considering the cell structure. Relative strength and relative density follow the relation described by the well-known Gibson-Ashby model, but the exponent is ∼2.2, which is higher than the ideal value of 1.5 derived from the model. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Representing and computing regular languages on massively parallel networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, M.I.; O'Sullivan, J.A.; Boysam, B.

    1991-01-01

    This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum-entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor, consisting of 1024 mesh-connected bit-serial processing elements, to perform automated segmentation of electron-micrograph images.

  17. Subcritical Multiplicative Chaos for Regularized Counting Statistics from Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Lambert, Gaultier; Ostrovsky, Dmitry; Simm, Nick

    2018-05-01

    For an N × N Haar-distributed random unitary matrix U_N, we consider the random field defined by counting the number of eigenvalues of U_N in a mesoscopic arc centered at the point u on the unit circle. We prove that after regularizing at a small scale ε_N > 0, the renormalized exponential of this field converges as N → ∞ to a Gaussian multiplicative chaos measure in the whole subcritical phase. We discuss implications of this result for obtaining a lower bound on the maximum of the field. We also show that the moments of the total mass converge to a Selberg-like integral and, by taking a further limit as the size of the arc diverges, we establish part of the conjectures in Ostrovsky (Nonlinearity 29(2):426-464, 2016). By an analogous construction, we prove that the multiplicative chaos measure coming from the sine process has the same distribution, which strongly suggests that this limiting object should be universal. Our approach to the L^1-phase is based on a generalization of the construction in Berestycki (Electron Commun Probab 22(27):12, 2017) to random fields which are only asymptotically Gaussian. In particular, our method could have applications to other random fields coming from either random matrix theory or a different context.

  18. 5 CFR 847.607 - Methodology for determining the present value of annuity without service credit-credit needed for...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... RETIREMENT COVERAGE BY CURRENT AND FORMER EMPLOYEES OF NONAPPROPRIATED FUND INSTRUMENTALITIES Additional... factor equal to the value of exponential function in which— (i) The base is one plus the assumed interest...

  19. 5 CFR 847.607 - Methodology for determining the present value of annuity without service credit-credit needed for...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... RETIREMENT COVERAGE BY CURRENT AND FORMER EMPLOYEES OF NONAPPROPRIATED FUND INSTRUMENTALITIES Additional... factor equal to the value of exponential function in which— (i) The base is one plus the assumed interest...

  20. Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters

    PubMed Central

    Landowne, David; Yuan, Bin; Magleby, Karl L.

    2013-01-01

    Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
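    A minimal sketch of the strategy described above (fixed log-spaced time constants, maximum-likelihood estimation of the areas only, then pruning; the merging of near-duplicate components and the iterative refitting are omitted for brevity, and this is not the authors' implementation):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
# Synthetic dwell times from a two-component exponential mixture (tau = 2 ms and 50 ms)
dwells = np.concatenate([rng.exponential(0.002, 3000), rng.exponential(0.050, 1000)])

taus = np.logspace(-4, 0, 25)                     # fixed, log-spaced time constants (s)

def neg_log_likelihood(z):
    w = np.exp(z - z.max())                       # softmax: areas are >= 0 ...
    w = w / w.sum()                               # ... and sum to 1
    pdf = (w / taus) * np.exp(-dwells[:, None] / taus)
    return -np.sum(np.log(pdf.sum(axis=1)))

res = minimize(neg_log_likelihood, np.zeros(taus.size), method="L-BFGS-B")
w = np.exp(res.x - res.x.max())
w = w / w.sum()

keep = w > 1e-3                                   # drop components with negligible area
print("significant components (tau [s], area):")
for tau_k, w_k in zip(taus[keep], w[keep]):
    print(f"  {tau_k:.4g}  {w_k:.3f}")
```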
