The matrix exponential in transient structural analysis
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
1987-01-01
The primary usefulness of the presented theory lies in its ability to represent the effects of high-frequency linear response accurately without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high-frequency response, the approximation in the exponential matrix solution is in the time domain. Truncating the series expansion of the matrix exponential makes the solution inaccurate after a certain time; up to that time, however, the solution is extremely accurate and includes all high-frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. Use of the exponential matrix in structural dynamics is demonstrated by simulating the free-vibration response of multi-degree-of-freedom models of cantilever beams.
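The finite-time-increment idea above can be sketched in a few lines. This is a minimal illustration, not the report's beam model: the 2-DOF mass and stiffness matrices and the increment size are our own choices. The state matrix is exponentiated once per increment size, and the resulting propagator advances the free-vibration response with no truncation of high-frequency modes.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical undamped 2-DOF system M x'' + K x = 0, rewritten in
# first-order state-space form z' = A z with z = [x, x'].
M = np.diag([1.0, 1.0])                      # mass matrix (illustrative)
K = np.array([[2.0, -1.0], [-1.0, 1.0]])     # stiffness matrix (illustrative)
n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, K), np.zeros((n, n))]])

dt = 0.01                                    # finite time increment
Phi = expm(A * dt)                           # exact one-increment propagator

z = np.array([1.0, 0.0, 0.0, 0.0])           # initial displacement on DOF 1
for _ in range(1000):                        # march in increments of dt
    z = Phi @ z
```

Because the propagator is the exact exponential of `A * dt`, repeated application reproduces the response at `t = 10` to machine precision, including all frequencies of the model.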
Universality in stochastic exponential growth.
Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R
2014-07-11
Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
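The deterministic skeleton of such an autocatalytic cycle already produces exponential growth at a rate set by the geometric mean of the rate constants. A small sketch under stated assumptions (two species, illustrative rate constants, simple Euler integration; the stochastic master-equation analysis of the paper is not reproduced here):

```python
import math

# Two-species Hinshelwood cycle: each species catalyzes production of
# the next,
#   dx1/dt = k1 * x2,   dx2/dt = k2 * x1,
# whose long-time growth rate is sqrt(k1 * k2).
k1, k2 = 0.3, 1.2                    # illustrative rate constants
x1, x2 = 1.0, 1.0
dt, steps = 1e-4, 200_000
log_sizes = []
for _ in range(steps):
    x1, x2 = x1 + dt * k1 * x2, x2 + dt * k2 * x1
    log_sizes.append(math.log(x1))

# Slope of log-size over the final stretch estimates the exponential rate
rate = (log_sizes[-1] - log_sizes[-10_000]) / (9_999 * dt)
```

Here `rate` converges to `sqrt(k1 * k2) = 0.6` once the decaying eigenmode has died out.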
Exponential propagators for the Schrödinger equation with a time-dependent potential.
Bader, Philipp; Blanes, Sergio; Kopylov, Nikita
2018-06-28
We consider the numerical integration of the Schrödinger equation with a time-dependent Hamiltonian given as the sum of the kinetic energy and a time-dependent potential. Commutator-free (CF) propagators are exponential propagators that have been shown to be highly efficient for general time-dependent Hamiltonians. We propose new CF propagators tailored to Hamiltonians of the said structure, showing considerably improved performance. We obtain new fourth- and sixth-order CF propagators, as well as a novel sixth-order propagator that incorporates a double commutator depending only on coordinates, so this term can be considered cost-free. The algorithms require computing the action of exponentials on a vector, as in the well-known exponential midpoint propagator, and this is carried out using the Lanczos method. We illustrate the performance of the new methods on several numerical examples.
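For reference, the baseline exponential midpoint propagator mentioned in the abstract can be sketched as follows. The grid, units, and the breathing-trap potential are illustrative assumptions of ours, and SciPy's `expm_multiply` stands in for a Lanczos-type computation of the action of the exponential:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# Exponential midpoint step for i psi' = H(t) psi (hbar = 1):
#   psi(t + dt) ~ exp(-i dt H(t + dt/2)) psi(t),
# with H = kinetic energy (finite differences) + time-dependent potential.
N, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
T_kin = diags([1, -2, 1], [-1, 0, 1], shape=(N, N)) * (-0.5 / dx**2)

def H(t):
    V = 0.5 * x**2 * (1 + 0.1 * np.cos(t))   # illustrative breathing trap
    return T_kin + diags(V)

psi = np.exp(-x**2 / 2).astype(complex)      # Gaussian initial state
psi /= np.linalg.norm(psi)
dt = 0.01
for k in range(200):
    t_mid = (k + 0.5) * dt                   # midpoint evaluation of H
    psi = expm_multiply(-1j * dt * H(t_mid), psi)
```

Since each step is the exponential of an anti-Hermitian matrix, the propagation is unitary and the norm of `psi` is preserved.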
Effect of local minima on adiabatic quantum optimization.
Amin, M H S
2008-04-04
We present a perturbative method to estimate the spectral gap for adiabatic quantum optimization, based on the structure of the energy levels in the problem Hamiltonian. We show that, for problems that have an exponentially large number of local minima close to the global minimum, the gap becomes exponentially small making the computation time exponentially long. The quantum advantage of adiabatic quantum computation may then be accessed only via the local adiabatic evolution, which requires phase coherence throughout the evolution and knowledge of the spectrum. Such problems, therefore, are not suitable for adiabatic quantum computation.
On exponential stability of linear Levin-Nohel integro-differential equations
NASA Astrophysics Data System (ADS)
Tien Dung, Nguyen
2015-02-01
The aim of this paper is to investigate the exponential stability for linear Levin-Nohel integro-differential equations with time-varying delays. To the best of our knowledge, the exponential stability for such equations has not yet been discussed. In addition, since we do not require that the kernel and delay are continuous, our results improve those obtained in Becker and Burton [Proc. R. Soc. Edinburgh, Sect. A: Math. 136, 245-275 (2006)]; Dung [J. Math. Phys. 54, 082705 (2013)]; and Jin and Luo [Comput. Math. Appl. 57(7), 1080-1088 (2009)].
Enhanced Response Time of Electrowetting Lenses with Shaped Input Voltage Functions.
Supekar, Omkar D; Zohrabi, Mo; Gopinath, Juliet T; Bright, Victor M
2017-05-16
Adaptive optical lenses based on the electrowetting principle are being rapidly implemented in many applications, such as microscopy, remote sensing, displays, and optical communication. To characterize the response of these electrowetting lenses, their dependence on direct-current (DC) driving voltage functions was investigated in a low-viscosity liquid system. Cylindrical lenses with inner diameters of 2.45 and 3.95 mm were used to characterize the dynamic behavior of the liquids under DC-voltage electrowetting actuation. As the rise time of the exponential input driving voltage increases, the originally underdamped system response can be damped, enabling a smooth response from the lens. We experimentally determined the optimal rise times for the fastest response from the lenses. We also performed numerical simulations of the lens actuation with exponential input driving voltages to understand how the dynamics of the liquid-liquid interface vary with input rise time. We further enhanced the response time of the devices by shaping the input voltage function with multiple exponential rise times. For the 3.95 mm inner-diameter lens, we achieved a response-time improvement of 29% compared with the fastest response obtained using a single-exponential driving voltage. The technique shows great promise for applications that require fast response times.
Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed
NASA Astrophysics Data System (ADS)
Walsh, Alex J.; Beier, Hope T.
2016-03-01
Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging on laser scanning microscopes. However, TCSPC is inherently slow, making it ill-suited to capturing rapid events: at most one photon is recorded per laser pulse, which imposes long acquisition times, and the photon detection rate must be kept low to avoid biasing measurements toward short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument-response deconvolution and fluorescence lifetime exponential decay estimation. Instrument-response deconvolution and fluorescence exponential decay estimation can be performed in several ways, including iterative least-squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques for estimating double-exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low-photon-count data. Such a technique reduces the required number of photons for accurate component estimation when lifetime values are known, as for commercial fluorescent dyes and FRET experiments, and improves imaging speed 10-fold.
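A simplified sketch of the kind of simulation compared in the paper: photon arrival times drawn from a double-exponential decay, binned temporally into a modest number of bins, then fit by least squares. The lifetimes, weights, and photon count are illustrative, and no instrument response is convolved in:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Simulate photon arrival times from a double-exponential decay
# (illustrative lifetimes; no instrument-response convolution).
true_tau1, true_tau2, w1 = 0.5, 3.0, 0.6
n_photons = 20_000
fast = rng.random(n_photons) < w1
t = np.where(fast, rng.exponential(true_tau1, n_photons),
                   rng.exponential(true_tau2, n_photons))

# Bin temporally into a modest number of bins (cf. the 36-42 found optimal)
counts, edges = np.histogram(t, bins=40, range=(0.0, 12.0))
centers = 0.5 * (edges[:-1] + edges[1:])

def model(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

popt, _ = curve_fit(model, centers, counts, p0=(1000.0, 1.0, 100.0, 5.0))
tau_fast, tau_slow = sorted([popt[1], popt[3]])
```

Binning does not bias the recovered lifetimes here, because binning a single exponential only rescales its amplitude; the statistical error is set by the photon count.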
Non-extensive quantum statistics with particle-hole symmetry
NASA Astrophysics Data System (ADS)
Biró, T. S.; Shen, K. M.; Zhang, B. W.
2015-06-01
Based on the Tsallis entropy (1988) and the corresponding deformed exponential function, generalized distribution functions for bosons and fermions have been in use for some time [Teweldeberhan et al. (2003); Silva et al. (2010)]. However, a non-extensive quantum statistics must meet further requirements arising from the symmetric handling of particles and holes (excitations above and below the Fermi level). Naive replacements of the exponential function, or "cut and paste" solutions, fail to satisfy this symmetry and to be smooth at the Fermi level at the same time. We solve this problem by a general ansatz that divides the deformed exponential into odd and even parts, and we demonstrate how earlier suggestions, such as the κ- and q-exponentials, behave in this respect.
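As a reference point, the deformed (q-)exponential the abstract refers to can be written down directly. This is only the standard Tsallis form, not the paper's particle-hole-symmetric ansatz; the cutoff at non-positive base and the sample q values are conventional choices:

```python
import numpy as np

# Tsallis deformed exponential:
#   exp_q(x) = [1 + (1 - q) x]_+ ** (1 / (1 - q)),
# recovering the ordinary exp(x) in the limit q -> 1.
def exp_q(x, q):
    x = np.asarray(x, dtype=float)
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)   # cutoff convention
    with np.errstate(divide="ignore"):
        return np.where(base > 0.0, base ** (1.0 / (1.0 - q)), 0.0)

x = np.linspace(-2, 2, 5)
```

For q close to 1 the deformed exponential approaches the ordinary one, and `exp_q(0, q) = 1` for any q.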
On the Time Scale of Nocturnal Boundary Layer Cooling in Valleys and Basins and over Plains
NASA Astrophysics Data System (ADS)
de Wekker, Stephan F. J.; Whiteman, C. David
2006-06-01
Sequences of vertical temperature soundings over flat plains and in a variety of valleys and basins of different sizes and shapes were used to determine cooling-time-scale characteristics in the nocturnal stable boundary layer under clear, undisturbed weather conditions. An exponential function predicts the cumulative boundary layer cooling well. The fitting parameter or time constant in the exponential function characterizes the cooling of the valley atmosphere and is equal to the time required for the cumulative cooling to attain 63.2% of its total nighttime value. The exponential fit finds time constants varying between 3 and 8 h. Calculated time constants are smallest in basins, are largest over plains, and are intermediate in valleys. Time constants were also calculated from air temperature measurements made at various heights on the sidewalls of a small basin. The variation with height of the time constant exhibited a characteristic parabolic shape in which the smallest time constants occurred near the basin floor and on the upper sidewalls of the basin where cooling was governed by cold-air drainage and radiative heat loss, respectively.
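The exponential fit described above is straightforward to reproduce on synthetic data. A minimal sketch, with illustrative values for the total cooling and time constant (hourly soundings over a 12-h night are assumed):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit cumulative nocturnal cooling with C(t) = C_tot * (1 - exp(-t / tau)).
# At t = tau the curve reaches 1 - 1/e, i.e. 63.2% of its total value.
hours = np.arange(0, 13)                  # hourly soundings over a 12-h night
tau_true, c_tot_true = 5.0, 12.0          # illustrative values (h, K)
cooling = c_tot_true * (1 - np.exp(-hours / tau_true))

def model(t, c_tot, tau):
    return c_tot * (1 - np.exp(-t / tau))

(popt_c, popt_tau), _ = curve_fit(model, hours, cooling, p0=(10.0, 4.0))
frac_at_tau = model(popt_tau, popt_c, popt_tau) / popt_c   # 1 - 1/e
```

On noise-free data the fitted time constant recovers the true value, and evaluating the fit at `t = tau` gives exactly the 63.2% fraction quoted in the abstract.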
Exponential evolution: implications for intelligent extraterrestrial life.
Russell, D A
1983-01-01
Some measures of biologic complexity, including maximal levels of brain development, are exponential functions of time through intervals of 10^6 to 10^9 yr. Biological interactions apparently stimulate evolution, but physical conditions determine the time required to achieve a given level of complexity. Trends in brain evolution suggest that other organisms could attain human levels within approximately 10^7 yr. The number (N) and longevity (L) terms in appropriate modifications of the Drake Equation, together with trends in the evolution of biological complexity on Earth, could provide rough estimates of the prevalence of life forms at specified levels of complexity within the Galaxy. If life occurs throughout the cosmos, exponential evolutionary processes imply that higher intelligence will soon (10^9 yr) become more prevalent than it now is. Changes in the physical universe become less rapid as time increases from the Big Bang. Changes in biological complexity may be most rapid at such later times. This lends a unique and symmetrical importance to early and late universal times.
NASA Astrophysics Data System (ADS)
Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego
2017-04-01
In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. 
Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.
On the parallel solution of parabolic equations
NASA Technical Reports Server (NTRS)
Gallopoulos, E.; Saad, Youcef
1989-01-01
Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
Abrahamyan, Lusine; Li, Chuan Silvia; Beyene, Joseph; Willan, Andrew R; Feldman, Brian M
2011-03-01
The study evaluated the power of the randomized placebo-phase design (RPPD), a new design for randomized clinical trials (RCTs), compared with the traditional parallel-groups design, assuming various response-time distributions. In the RPPD, at some point all subjects receive the experimental therapy, and exposure to placebo lasts only a short fixed period of time. For the study, an object-oriented simulation program was written in R. The power of the simulated trials was evaluated using six scenarios in which the treatment response times followed the exponential, Weibull, or lognormal distributions. The median response time was assumed to be 355 days for the placebo and 42 days for the experimental drug. Based on the simulation results, the sample size required to achieve the same level of power differed across response-time distributions. The scenario in which the response times followed the exponential distribution had the highest sample size requirement. In most scenarios, the parallel-groups RCT had higher power than the RPPD. The sample size requirement thus varies with the underlying hazard distribution, and the RPPD requires more subjects than the parallel-groups design to achieve similar power.
NASA Astrophysics Data System (ADS)
Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen
2017-05-01
Actual field pumping tests often involve variable pumping rates, which cannot be handled by the classical constant-rate or constant-head test models and often require a convolution process to interpret the test data. In this study, we propose a semi-analytical model of an exponentially decreasing pumping rate that starts at a certain (higher) rate and eventually stabilizes at a certain (lower) rate, for cases with or without wellbore storage. A striking new feature of a pumping test with an exponentially decaying rate is that the drawdowns decrease over a certain period during the intermediate pumping stage, which has never been seen in constant-rate or constant-head pumping tests. The drawdown-time curve associated with an exponentially decaying pumping rate is bounded by the two asymptotic curves of constant-rate tests with rates equal to the starting and stabilizing rates, respectively. The wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on these characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters using a genetic algorithm.
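The rate function described above has a simple closed form. A sketch with hypothetical parameter names and values of our own (the starting rate Q0, stabilized rate Qf, and decay constant a are not the paper's):

```python
import numpy as np

# Exponentially decaying pumping rate: starts at Q0, decays exponentially,
# and stabilizes at Qf. Symbol names and values are illustrative.
def pumping_rate(t, Q0=100.0, Qf=40.0, a=0.5):
    return Qf + (Q0 - Qf) * np.exp(-a * t)

t = np.linspace(0.0, 20.0, 201)
Q = pumping_rate(t)
```

The rate decreases monotonically from `Q0` and approaches `Qf`, mirroring the bounding behavior noted in the abstract: the drawdown curve lies between the constant-rate solutions for `Q0` and `Qf`.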
Umari, A.M.; Gorelick, S.M.
1986-01-01
It is possible to obtain analytic solutions to the groundwater flow and solute transport equations if the space variables are discretized but time is left continuous. From these solutions, hydraulic head and concentration fields for any future time can be obtained without "marching" through intermediate time steps. This analytical approach involves matrix exponentiation and is referred to as the Matrix Exponential Time Advancement (META) method. Two algorithms are presented for the META method, one for symmetric and the other for non-symmetric exponent matrices. A numerical accuracy indicator, referred to as the matrix condition number, is defined and used to determine the maximum number of significant figures that may be lost in the META method computations. The relative computational and storage requirements of the META method with respect to the time-marching method increase with the number of nodes in the discretized problem. The potentially greater accuracy of the META method, and the associated greater reliability afforded by the matrix condition number, must be weighed against these increased computational and storage requirements as the number of nodes becomes large. For a particular number of nodes, the META method may be computationally more efficient than the time-marching method, depending on the size of the time steps used in the latter. A numerical example illustrates application of the META method to a sample groundwater flow problem.
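The META idea in miniature: discretize space only, then jump the resulting linear system directly to any target time with a matrix exponential. The 1-D diffusion stencil below is an illustrative stand-in for the paper's aquifer model, and explicit Euler marching is included only as the time-marching reference:

```python
import numpy as np
from scipy.linalg import expm

# Semi-discrete linear system dh/dt = A h (A: 1-D diffusion stencil,
# illustrative; not the paper's groundwater model).
n = 50
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * 25.0
h0 = np.zeros(n)
h0[n // 2] = 1.0                          # initial pulse at the center

t_target = 0.1
h_meta = expm(A * t_target) @ h0          # one jump, no intermediate steps

# Time-marching reference: explicit Euler with a small step
steps = 10_000
dt = t_target / steps
h = h0.copy()
for _ in range(steps):
    h = h + dt * (A @ h)
```

The single matrix-exponential jump agrees with 10,000 Euler steps, which is exactly the trade-off the abstract weighs: one expensive exponentiation versus many cheap steps.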
Using neural networks to represent potential surfaces as sums of products.
Manzhos, Sergei; Carrington, Tucker
2006-11-21
By using exponential activation functions with a neural network (NN) method, we show that it is possible to fit potentials to a sum-of-products form. The sum-of-products form is desirable because it reduces the cost of the quadratures required for quantum dynamics calculations. It also greatly facilitates the use of the multiconfiguration time-dependent Hartree method. Unlike the potfit product-representation algorithm, the new NN approach does not require using a grid of points. It also produces sum-of-products potentials with fewer terms. As the number of dimensions is increased, we expect the advantages of the exponential NN idea to become more significant.
Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters
Landowne, David; Yuan, Bin; Magleby, Karl L.
2013-01-01
Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
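The "start with many components and prune" idea can be illustrated with a simplified stand-in for the paper's maximum-likelihood procedure: fix a dense grid of log-spaced time constants, solve for nonnegative areas on a binned distribution, and keep only components with non-negligible weight. The lifetimes, grid, and thresholds below are our illustrative choices:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Simulated dwell times from a two-component exponential mixture
tau_a, tau_b = 1.0, 10.0
t = np.concatenate([rng.exponential(tau_a, 30_000),
                    rng.exponential(tau_b, 30_000)])
counts, edges = np.histogram(t, bins=200, range=(0.0, 60.0))
centers = 0.5 * (edges[:-1] + edges[1:])

# Dense grid of log-spaced time constants, so that none are missed
taus = np.logspace(-1, 2, 40)
basis = np.exp(-centers[:, None] / taus[None, :])

# Nonnegative least squares for the areas, time constants held fixed
areas, _ = nnls(basis, counts.astype(float))

# Prune components with negligible area
significant = taus[areas > 0.02 * areas.max()]
```

The surviving components cluster around the two true time constants; the paper's method additionally merges closely spaced neighbors and refits by maximum likelihood at each iteration.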
Compact continuous-variable entanglement distillation.
Datta, Animesh; Zhang, Lijian; Nunn, Joshua; Langford, Nathan K; Feito, Alvaro; Plenio, Martin B; Walmsley, Ian A
2012-02-10
We introduce a new scheme for continuous-variable entanglement distillation that requires only linear temporal and constant physical or spatial resources. Distillation is the process by which high-quality entanglement may be distributed between distant nodes of a network in the unavoidable presence of decoherence. The known versions of this protocol scale exponentially in space and doubly exponentially in time. Our optimal scheme therefore provides exponential improvements over existing protocols. It uses a fixed-resource module-an entanglement distillery-comprising only four quantum memories of at most 50% storage efficiency and allowing a feasible experimental implementation. Tangible quantum advantages are obtainable by using existing off-resonant Raman quantum memories outside their conventional role of storage.
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1986-01-01
The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODEs) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too-frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated step-size-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
Analysis of non-destructive current simulators of flux compression generators.
O'Connor, K A; Curry, R D
2014-06-01
Development and evaluation of power conditioning systems and high-power microwave components often used with flux compression generators (FCGs) require repeated testing and characterization. In an effort to minimize the cost and time required for testing with explosive generators, non-destructive simulators of an FCG's output current have been developed. Flux compression generators and their simulators are unique pulsed power sources in that the current waveform rises at a quasi-exponentially increasing rate. Accurately reproducing the quasi-exponential current waveform of an FCG can be important in designing electroexplosive opening switches and other power conditioning components that depend on the integral of current action and the rate of energy dissipation. Three versions of FCG simulators have been developed, each including an inductive network whose impedance decreases in time. A primary difference between these simulators is the voltage source driving them. It is shown that a capacitor-inductor-capacitor network driving a constant or decreasing inductive load can produce the high-order derivatives of the load current required to replicate a quasi-exponential waveform. The operation of the FCG simulators is reviewed and described mathematically for the first time to aid in the design of new simulators. Experimental and calculated results of two recent simulators are reported with recommendations for future designs.
An efficient quantum algorithm for spectral estimation
NASA Astrophysics Data System (ADS)
Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth
2017-03-01
We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
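The classical (non-quantum) matrix pencil method that the algorithm accelerates can be sketched directly: build two shifted data matrices from uniform samples, and read the signal poles off the nonzero eigenvalues of the pencil. The signal parameters below are illustrative:

```python
import numpy as np

# Matrix pencil sketch: recover the frequency and damping factor of an
# exponentially damped sinusoid from uniform samples (illustrative values).
fs = 100.0                                    # sampling rate
n = 200
k = np.arange(n)
# damping 2 (1/s), frequency 5 (Hz)
y = np.exp(-2.0 * k / fs) * np.cos(2 * np.pi * 5.0 * k / fs)

L = n // 2                                    # pencil parameter
Y = np.array([y[i:i + L] for i in range(n - L)])   # Hankel-structured data
Y0, Y1 = Y[:-1], Y[1:]                        # shifted by one sample

# Nonzero eigenvalues of pinv(Y0) @ Y1 are the signal poles z_j
z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
poles = z[np.argsort(-np.abs(z))[:2]]         # real cosine -> conjugate pair

damping = -np.log(np.abs(poles[0])) * fs      # recovers 2.0
freq = abs(np.angle(poles[0])) * fs / (2 * np.pi)   # recovers 5.0
```

For noiseless data the two dominant eigenvalues are exactly `exp((-damping ± i*2π*freq)/fs)`, and the remaining eigenvalues are numerically zero since the data matrix has rank two.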
Recurrence time statistics for finite size intervals
NASA Astrophysics Data System (ADS)
Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.
2004-12-01
We investigate the statistics of recurrences to finite-size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times, it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to Poissonian statistics as the width of the interval goes to zero. However, we caution that special attention to the size of the interval is required in order to guarantee that the short-time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
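A quick numerical illustration of the near-exponential regime, using the strongly chaotic logistic map as a stand-in system (the interval and initial condition are arbitrary choices):

```python
import numpy as np

# Recurrence times of the logistic map x -> 4x(1-x) to a small interval.
# For a mixing system and a small interval the distribution is close to
# exponential (Poissonian); by Kac's lemma the mean recurrence time is
# 1/mu, where mu is the invariant measure of the interval (about 0.0069
# here - not simply the width, since the invariant density is non-uniform).
a, width = 0.3, 0.01                   # interval [a, a + width)
x = 0.123456789                        # arbitrary initial condition
times, last = [], None
for n in range(500_000):
    x = 4.0 * x * (1.0 - x)
    if a <= x < a + width:
        if last is not None:
            times.append(n - last)
        last = n
times = np.array(times)
mean_rt = times.mean()                 # close to 1/mu (about 145)
frac_long = np.mean(times > mean_rt)   # near exp(-1) for an exponential law
```

The fraction of recurrence times exceeding the mean sits near `1/e`, as expected for an exponential distribution; deviations concentrate at short times, the memory effect the abstract describes.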
New exponential stability criteria for stochastic BAM neural networks with impulses
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Samidurai, R.; Anthoni, S. M.
2010-10-01
In this paper, we study the global exponential stability of time-delayed stochastic bidirectional associative memory neural networks with impulses and Markovian jumping parameters. A generalized activation function is considered, and traditional assumptions on the boundedness, monotony and differentiability of activation functions are removed. We obtain a new set of sufficient conditions in terms of linear matrix inequalities, which ensures the global exponential stability of the unique equilibrium point for stochastic BAM neural networks with impulses. The Lyapunov function method with the Itô differential rule is employed for achieving the required result. Moreover, a numerical example is provided to show that the proposed result improves the allowable upper bound of delays over some existing results in the literature.
Time Horizons, Discounting, and Intertemporal Choice
ERIC Educational Resources Information Center
Streich, Philip; Levy, Jack S.
2007-01-01
Although many decisions involve a stream of payoffs over time, political scientists have given little attention to how actors make the required tradeoffs between present and future payoffs, other than applying the standard exponential discounting model from economics. After summarizing the basic discounting model, we identify some of its leading…
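The standard exponential discounting model the entry refers to is compactly stated in code (the discount factor and payoff stream are illustrative numbers):

```python
# Standard exponential discounting: a payoff x received t periods from
# now is valued at x * delta**t, making preferences time-consistent.
delta = 0.9                       # per-period discount factor (illustrative)

def present_value(payoff, t, delta=delta):
    return payoff * delta**t

# A stream of payoffs over time is valued as the discounted sum
stream = [10.0, 10.0, 10.0]       # payoffs at t = 0, 1, 2
value = sum(present_value(x, t) for t, x in enumerate(stream))
# value = 10 + 9 + 8.1 = 27.1
```

The key property, constant per-period discounting, is what the behavioral alternatives surveyed in such work (e.g., hyperbolic discounting) relax.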
A space-efficient quantum computer simulator suitable for high-speed FPGA implementation
NASA Astrophysics Data System (ADS)
Frank, Michael P.; Oniciuc, Liviu; Meyer-Baese, Uwe H.; Chiorescu, Irinel
2009-05-01
Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we describe the design and empirical space/time complexity measurements of a working software prototype of a quantum computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main memory. We plan to prototype our design on a standard FPGA development board.
Quantum Support Vector Machine for Big Data Classification
NASA Astrophysics Data System (ADS)
Rebentrost, Patrick; Mohseni, Masoud; Lloyd, Seth
2014-09-01
Supervised machine learning is the classification of new data based on already classified training examples. In this work, we show that the support vector machine, an optimized binary classifier, can be implemented on a quantum computer, with complexity logarithmic in the size of the vectors and the number of training examples. In cases where classical sampling algorithms require polynomial time, an exponential speedup is obtained. At the core of this quantum big data algorithm is a nonsparse matrix exponentiation technique for efficiently performing a matrix inversion of the training data inner-product (kernel) matrix.
An exact formulation of the time-ordered exponential using path-sums
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giscard, P.-L., E-mail: p.giscard1@physics.ox.ac.uk; Lui, K.; Thwaite, S. J.
2015-05-15
We present the path-sum formulation for the time-ordered exponential of a time-dependent matrix. The path-sum formulation gives the time-ordered exponential as a branched continued fraction of finite depth and breadth. The terms of the path-sum have an elementary interpretation as self-avoiding walks and self-avoiding polygons on a graph. Our result is based on a representation of the time-ordered exponential as the inverse of an operator, the mapping of this inverse to sums of walks on graphs, and the algebraic structure of sets of walks. We give examples demonstrating our approach. We establish a super-exponential decay bound for the magnitude of the entries of the time-ordered exponential of sparse matrices. We give explicit results for matrices with commonly encountered sparse structures.
The true quantum face of the "exponential" decay: Unstable systems in rest and in motion
NASA Astrophysics Data System (ADS)
Urbanowski, K.
2017-12-01
Results of theoretical studies and numerical calculations presented in the literature suggest that the survival probability P0(t) has the exponential form starting from times much smaller than the lifetime τ up to times t ⪢ τ, and that P0(t) exhibits inverse power-law behavior in the late-time region, for times longer than the so-called crossover time T ⪢ τ (the crossover time T is the time when the late-time deviations of P0(t) from the exponential form begin to dominate). A more detailed analysis of the problem shows that in fact the survival probability P0(t) cannot take the pure exponential form over any time interval, including times smaller than or of the order of the lifetime τ, and that it has an oscillating form. We also study the survival probability of moving relativistic unstable particles with definite momentum. These studies show that late-time deviations of the survival probability of these particles from the exponential-like form of the decay law, that is, the transition-time region between the exponential-like and non-exponential forms of the survival probability, should occur much earlier than follows from the classical standard considerations.
Properties of single NMDA receptor channels in human dentate gyrus granule cells
Lieberman, David N; Mody, Istvan
1999-01-01
Cell-attached single-channel recordings of NMDA channels were carried out in human dentate gyrus granule cells acutely dissociated from slices prepared from hippocampi surgically removed for the treatment of temporal lobe epilepsy (TLE). The channels were activated by l-aspartate (250–500 nm) in the presence of saturating glycine (8 μm). The main conductance was 51 ± 3 pS. In ten of thirty granule cells, clear subconductance states were observed with a mean conductance of 42 ± 3 pS, representing 8 ± 2% of the total openings. The mean open times varied from cell to cell, possibly owing to differences in the epileptogenicity of the tissue of origin. The mean open time was 2.70 ± 0.95 ms (range, 1.24–4.78 ms). In 87% of the cells, three exponential components were required to fit the apparent open time distributions. In the remaining neurons, as in control rat granule cells, two exponentials were sufficient. Shut time distributions were fitted by five exponential components. The average numbers of openings in bursts (1.74 ± 0.09) and clusters (3.06 ± 0.26) were similar to values obtained in rodents. The mean burst (6.66 ± 0.9 ms), cluster (20.1 ± 3.3 ms) and supercluster lengths (116.7 ± 17.5 ms) were longer than those in control rat granule cells, but approached the values previously reported for TLE (kindled) rats. As in rat NMDA channels, adjacent open and shut intervals appeared to be inversely related to each other, but it was only the relative areas of the three open time constants that changed with adjacent shut time intervals. The long openings of human TLE NMDA channels resembled those produced by calcineurin inhibitors in control rat granule cells. Yet the calcineurin inhibitor FK-506 (500 nm) did not prolong the openings of human channels, consistent with a decreased calcineurin activity in human TLE. Many properties of the human NMDA channels resemble those recorded in rat hippocampal neurons. 
Both have similar slope conductances, five exponential shut time distributions, complex groupings of openings, and a comparable number of openings per grouping. Other properties of human TLE NMDA channels correspond to those observed in kindling; the openings are considerably long, requiring an additional exponential component to fit their distributions, and inhibition of calcineurin is without effect in prolonging the openings. PMID:10373689
Choi, Hyun Duck; Ahn, Choon Ki; Karimi, Hamid Reza; Lim, Myo Taeg
2017-10-01
This paper studies delay-dependent exponential dissipative and l2-l∞ filtering problems for discrete-time switched neural networks (DSNNs) including time-delayed states. By introducing a novel discrete-time inequality, which is a discrete-time version of the continuous-time Wirtinger-type inequality, we establish new sets of linear matrix inequality (LMI) criteria such that discrete-time filtering error systems are exponentially stable with guaranteed performances in the exponential dissipative and l2-l∞ senses. The design of the desired exponential dissipative and l2-l∞ filters for DSNNs can be achieved by solving the proposed sets of LMI conditions. Via numerical simulation results, we show the validity of the desired discrete-time filter design approach.
Rounded stretched exponential for time relaxation functions.
Powles, J G; Heyes, D M; Rickayzen, G; Evans, W A B
2009-12-07
A rounded stretched exponential function is introduced, C(t) = exp{(τ0/τE)^β [1 − (1 + (t/τ0)^2)^(β/2)]}, where t is time, and τ0 and τE are two relaxation times. This expression can be used to represent the relaxation function of many real dynamical processes, as at long times, t > τ0, the function converges to a stretched exponential with normalizing relaxation time τE, yet its expansion is even, or symmetric, in time, which is a statistical mechanical requirement. This expression fits well the shear stress relaxation function for model soft-sphere fluids near coexistence, with τE
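As a sanity check on this functional form, the following Python sketch (our own; the parameter values are illustrative, not taken from the paper) evaluates C(t) and verifies its two defining properties: it is even in t, and at long times it converges to a stretched exponential governed by τE.

```python
import math

def rounded_stretched_exp(t, tau0, tauE, beta):
    # C(t) = exp{ (tau0/tauE)^beta * [1 - (1 + (t/tau0)^2)^(beta/2)] }
    a = (tau0 / tauE) ** beta
    return math.exp(a * (1.0 - (1.0 + (t / tau0) ** 2) ** (beta / 2.0)))

def stretched_exp_limit(t, tau0, tauE, beta):
    # For t >> tau0, (1 + (t/tau0)^2)^(beta/2) -> (t/tau0)^beta, so
    # C(t) -> exp((tau0/tauE)^beta) * exp(-(t/tauE)^beta): a stretched
    # exponential (up to a constant prefactor) with relaxation time tauE.
    return math.exp((tau0 / tauE) ** beta) * math.exp(-((t / tauE) ** beta))

tau0, tauE, beta = 1.0, 2.0, 0.6  # illustrative values
print(rounded_stretched_exp(0.0, tau0, tauE, beta))   # exactly 1 at t = 0
print(rounded_stretched_exp(50.0, tau0, tauE, beta)
      / stretched_exp_limit(50.0, tau0, tauE, beta))  # ratio close to 1
```

Note that C depends on t only through t^2, which is what makes the function even and removes the cusp that a bare stretched exponential has at t = 0.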
Xu, Jason; Minin, Vladimir N
2015-07-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
NASA Astrophysics Data System (ADS)
Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric
2014-08-01
In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability, and makes an online learned model of the system available. Most MRAC methods, however, require persistent excitation (PE) of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing online recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.
Psychophysics of time perception and intertemporal choice models
NASA Astrophysics Data System (ADS)
Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.
2008-03-01
Intertemporal choice and psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting; general hyperbolic discounting (exponential discounting with logarithmic time perception of the Weber-Fechner law, a q-exponential discount model based on Tsallis's statistics); simple hyperbolic discounting; and Stevens' power-law-exponential discounting (exponential discounting with Stevens' power time perception). In order to examine the fitness of the models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small-sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results showed that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power-law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processes underlying temporal discounting and time perception are discussed.
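The one-parameter discount families compared above are simple to state. A minimal Python sketch (function and parameter names are ours, and the values are illustrative): the q-exponential form interpolates between the other two, recovering exponential discounting as q → 1 and the simple hyperbola at q = 0.

```python
import math

def exponential_discount(delay, k):
    # Standard economic model: subjective value falls as exp(-k * delay).
    return math.exp(-k * delay)

def hyperbolic_discount(delay, k):
    # Simple hyperbola: value falls as 1 / (1 + k * delay).
    return 1.0 / (1.0 + k * delay)

def q_exponential_discount(delay, k, q):
    # Tsallis q-exponential: [1 + (1 - q) * k * delay]^(-1 / (1 - q)).
    # q -> 1 recovers the exponential; q = 0 gives the simple hyperbola.
    if abs(q - 1.0) < 1e-9:
        return exponential_discount(delay, k)
    return (1.0 + (1.0 - q) * k * delay) ** (-1.0 / (1.0 - q))

delay, k = 10.0, 0.1
print(q_exponential_discount(delay, k, 0.0))        # equals the hyperbolic value
print(q_exponential_discount(delay, k, 0.9999999))  # approaches exp(-k * delay)
```

Fitting k (and q) to observed indifference points, as the study does, is then an ordinary one- or two-parameter estimation problem.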
On the Prony series representation of stretched exponential relaxation
NASA Astrophysics Data System (ADS)
Mauro, John C.; Mauro, Yihong Z.
2018-09-01
Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of the stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first-derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
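The Prony idea can be sketched in a few lines of Python (our own sketch, not the paper's optimized coefficients): fix a log-spaced set of relaxation times, then solve a small linear least-squares problem for the weights so that the sum of simple exponentials matches exp(-(t)^β) at sample times. The number of terms, the relaxation times, and the sample grid below are illustrative.

```python
import math

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def prony_fit(beta, taus, ts):
    # Least squares: minimize sum_t (sum_i w_i e^(-t/tau_i) - e^(-t^beta))^2
    # via the normal equations (A^T A) w = A^T b.
    basis = [[math.exp(-t / tau) for tau in taus] for t in ts]
    target = [math.exp(-(t ** beta)) for t in ts]
    n = len(taus)
    AtA = [[sum(row[i] * row[j] for row in basis) for j in range(n)] for i in range(n)]
    Atb = [sum(row[i] * y for row, y in zip(basis, target)) for i in range(n)]
    return solve(AtA, Atb)

beta = 0.5                                   # one of the critical stretching exponents
taus = [0.05, 0.2, 0.8, 3.0, 12.0, 40.0]     # illustrative log-spaced relaxation times
ts = [0.02 * 1.25 ** i for i in range(40)]   # sample times, ~0.02 to ~120
w = prony_fit(beta, taus, ts)
err = max(abs(sum(wi * math.exp(-t / tau) for wi, tau in zip(w, taus))
              - math.exp(-(t ** beta))) for t in ts)
print(err)   # maximum fit error on the sample grid
```

This also shows the limitation the paper notes: however many terms are used, every term has a finite slope at t = 0, so the diverging first derivative of the stretched exponential there can never be reproduced.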
Transient modeling in simulation of hospital operations for emergency response.
Paul, Jomon Aliyas; George, Santhosh K; Yi, Pengfei; Lin, Li
2006-01-01
Rapid estimates of hospital capacity after an event that may cause a disaster can assist disaster-relief efforts. Due to the dynamics of hospitals following such an event, it is necessary to accurately model the behavior of the system. A transient modeling approach using simulation and exponential functions is presented, along with its applications in an earthquake situation. The parameters of the exponential model are regressed using outputs from designed simulation experiments. The developed model is capable of representing transient patient waiting times during a disaster. Most importantly, the modeling approach allows real-time capacity estimation of hospitals of various sizes and capabilities. Further, this research analyzes the effects of priority-based routing of patients within the hospital and its effects on patient waiting times under various patient mixes. The model guides the patients based on the severity of injuries and queues the patients requiring critical care depending on their remaining survivability time. The model also accounts for the impact of prehospital transport time on patient waiting time.
Crypto-Unitary Forms of Quantum Evolution Operators
NASA Astrophysics Data System (ADS)
Znojil, Miloslav
2013-06-01
The description of quantum evolution using a unitary operator u(t) = exp(-iht) requires that the underlying self-adjoint quantum Hamiltonian h remains time-independent. In a way extending the so-called PT-symmetric quantum mechanics to models with a manifestly time-dependent "charge" C(t), we propose and describe an extension of such an exponential-operator approach to evolution to manifestly time-dependent self-adjoint quantum Hamiltonians h(t).
A Long-Lived Oscillatory Space-Time Correlation Function of Two Dimensional Colloids
NASA Astrophysics Data System (ADS)
Kim, Jeongmin; Sung, Bong June
2014-03-01
Diffusion of a colloid in solution has drawn significant attention for a century. A well-known behavior of the colloid is called Brownian motion: the particle displacement probability distribution (PDPD) is Gaussian and the mean-square displacement (MSD) is linear in time. However, recent simulation and experimental studies have revealed the heterogeneous dynamics of colloids near glass transitions or in complex environments such as entangled actin: the PDPD exhibits an exponential tail at large lengths instead of being Gaussian at all length scales. More interestingly, the PDPD is still exponential even when the MSD is linear in time. This calls for a fresh look at colloidal diffusion in complex environments. In this work, we study the heterogeneous dynamics of two-dimensional (2D) colloids using molecular dynamics simulations. Unlike in three dimensions, 2D solids do not follow the Lindemann melting criterion. The Kosterlitz-Thouless-Halperin-Nelson-Young theory predicts a two-step phase transition with an intermediate phase, the hexatic phase, between isotropic liquids and solids. Near the solid-hexatic transition, the PDPD shows an interesting oscillatory behavior between a central Gaussian part and an exponential tail. Until 12 times longer than the translational relaxation time, the oscillatory behavior persists even after entering the Fickian regime. We also show that multi-layered kinetic clusters account for the heterogeneous dynamics of 2D colloids with the long-lived anomalous oscillatory PDPD.
Exponential integrators in time-dependent density-functional calculations
NASA Astrophysics Data System (ADS)
Kidd, Daniel; Covington, Cody; Varga, Kálmán
2017-12-01
The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We find an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For dynamics driven by a time-dependent external potential, the accuracy gain of the exponential integrator methods is smaller, but they still match or outperform the best of the conventional methods tested.
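The exponential time differencing idea can be shown on a toy scalar ODE (our own illustration, not the Kohn-Sham equations; all names and values are invented): treat the stiff linear part exactly through the exponential and freeze the remaining term over the step. For u' = L·u + c with constant c, first-order ETD is exact for any step size, while forward Euler diverges once |1 + L·h| > 1.

```python
import math

def etd1_step(u, h, L, N):
    # First-order exponential time differencing (ETD1) for u' = L*u + N(u):
    # integrate the linear part exactly, hold N(u) fixed over the step.
    eLh = math.exp(L * h)
    return eLh * u + (eLh - 1.0) / L * N(u)

def euler_step(u, h, L, N):
    # Forward Euler, for contrast: stable only when |1 + L*h| <= 1.
    return u + h * (L * u + N(u))

L, h, c, u0 = -50.0, 0.5, 3.0, 1.0                 # stiff linear part, large step
exact = lambda t: (u0 + c / L) * math.exp(L * t) - c / L

u_etd = u_euler = u0
for _ in range(4):
    u_etd = etd1_step(u_etd, h, L, lambda u: c)
    u_euler = euler_step(u_euler, h, L, lambda u: c)

print(abs(u_etd - exact(4 * h)))   # tiny: ETD1 matches the exact solution here
print(abs(u_euler))                # huge: Euler amplifies by |1 + L*h| = 24 per step
```

The same structure carries over to the Kohn-Sham setting, with the scalar exponential replaced by the exponential of the (discretized) Hamiltonian.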
Lee, Hyung-Min; Howell, Bryan; Grill, Warren M; Ghovanloo, Maysam
2018-05-01
The purpose of this study was to test the feasibility of using a switched-capacitor discharge stimulation (SCDS) system for electrical stimulation, and, subsequently, determine the overall energy saved compared to a conventional stimulator. We have constructed a computational model by pairing an image-based volume conductor model of the cat head with cable models of corticospinal tract (CST) axons and quantified the theoretical stimulation efficiency of rectangular and decaying exponential waveforms, produced by conventional and SCDS systems, respectively. Subsequently, the model predictions were tested in vivo by activating axons in the posterior internal capsule and recording evoked electromyography (EMG) in the contralateral upper arm muscles. Compared to rectangular waveforms, decaying exponential waveforms with time constants >500 μs were predicted to require 2%-4% less stimulus energy to directly activate models of CST axons and 0.4%-2% less stimulus energy to evoke EMG activity in vivo. Using the calculated wireless input energy of the stimulation system and the measured stimulus energies required to evoke EMG activity, we predict that an SCDS implantable pulse generator (IPG) will require 40% less input energy than a conventional IPG to activate target neural elements. A wireless SCDS IPG that is more energy efficient than a conventional IPG will reduce the size of an implant, require that less wireless energy be transmitted through the skin, and extend the lifetime of the battery in the external power transmitter.
Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time
NASA Astrophysics Data System (ADS)
Himeoka, Yusuke; Kaneko, Kunihiko
2017-04-01
The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws for describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, for which quantitative laws or theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase that consist of autocatalytic chemical components, including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation follows the square root of the starvation time and is inversely related to the maximal growth rate. This is in agreement with experimental observations, in which the length of time of cell starvation is memorized in the slow accumulation of molecules. Moreover, the lag time distributed among cells is skewed with a long time tail. If the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.
NASA Astrophysics Data System (ADS)
Dhariwal, Rohit; Bragg, Andrew D.
2018-03-01
In this paper, we consider how the statistical moments of the separation between two fluid particles grow with time when their separation lies in the dissipation range of turbulence. In this range, the fluid velocity field varies smoothly and the relative velocity of two fluid particles depends linearly upon their separation. While this may suggest that the rate at which fluid particles separate is exponential in time, this is not guaranteed because the strain rate governing their separation is a strongly fluctuating quantity in turbulence. Indeed, Afik and Steinberg [Nat. Commun. 8, 468 (2017), 10.1038/s41467-017-00389-8] argue that there is no convincing evidence that the moments of the separation between fluid particles grow exponentially with time in the dissipation range of turbulence. Motivated by this, we use direct numerical simulations (DNS) to compute the moments of particle separation over very long periods of time in a statistically stationary, isotropic turbulent flow to see if we ever observe evidence for exponential separation. Our results show that if the initial separation between the particles is infinitesimal, the moments of the particle separation first grow as power laws in time, but we then observe convincing evidence that at sufficiently long times the moments do grow exponentially. However, this exponential growth is only observed after extremely long times ≳200 τη, where τη is the Kolmogorov time scale. This is due to fluctuations in the strain rate about its mean value measured along the particle trajectories, the effect of which on the moments of the particle separation persists for very long times. We also consider the backward-in-time (BIT) moments of the particle separation, and observe that they too grow exponentially in the long-time regime.
However, a dramatic consequence of the exponential separation is that at long times the difference between the rate of the particle separation forward in time (FIT) and BIT grows exponentially in time, leading to incredibly strong irreversibility in the dispersion. This is in striking contrast to the irreversibility of their relative dispersion in the inertial range, where the difference between FIT and BIT is constant in time according to Richardson's phenomenology.
Improved Results for Route Planning in Stochastic Transportation Networks
NASA Technical Reports Server (NTRS)
Boyan, Justin; Mitzenmacher, Michael
2000-01-01
In the bus network problem, the goal is to generate a plan for getting from point X to point Y within a city using buses in the smallest expected time. Because bus arrival times are not determined by a fixed schedule but instead may be random, the problem requires more than standard shortest-path techniques. In recent work, Datar and Ranade provide algorithms for the case where bus arrivals are assumed to be independent and exponentially distributed. We offer solutions to two important generalizations of the problem, answering open questions posed by Datar and Ranade. First, we provide a polynomial-time algorithm for a much wider class of arrival distributions, namely those with increasing failure rate. This class includes not only exponential distributions but also uniform, normal, and gamma distributions. Second, in the case where bus arrival times are independent geometric discrete random variables, we provide an algorithm for transportation networks of buses and trains, where trains run according to a fixed schedule.
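The exponential-arrivals case has a clean closed form at a single stop, which a small sketch can illustrate (the rates and travel times are made up, and this is the textbook competing-exponentials calculation, not the Datar-Ranade algorithm itself): if you commit to boarding whichever bus in a subset S arrives first, the expected wait is 1/Λ with Λ = Σ λi over S, and bus i arrives first with probability λi/Λ, so the best subset can be found by enumeration.

```python
from itertools import combinations

def expected_time(subset):
    # subset: iterable of (rate, remaining_travel_time) pairs.
    # Competing exponentials: wait 1/Lambda on average, and bus i is
    # first with probability rate_i / Lambda.
    lam = sum(rate for rate, _ in subset)
    return 1.0 / lam + sum(rate * travel for rate, travel in subset) / lam

def best_plan(buses):
    # Enumerate every nonempty subset of buses one might be willing to board.
    options = []
    for r in range(1, len(buses) + 1):
        for subset in combinations(buses, r):
            options.append((expected_time(subset), subset))
    return min(options)

# Hypothetical stop: bus A is frequent but slow, bus B is rarer but fast.
buses = [(0.5, 10.0), (0.2, 5.0)]
t, plan = best_plan(buses)
print(t)   # ~10.0 minutes; waiting only for B already matches taking either bus
```

Chaining this recursion backward from the destination gives the expected-time analogue of shortest paths; the paper's contribution is extending such reasoning beyond the exponential case.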
Fast self contained exponential random deviate algorithm
NASA Astrophysics Data System (ADS)
Fernández, Julio F.
1997-03-01
An algorithm that generates random numbers with an exponential distribution and is about ten times faster than other well-known algorithms has been reported before (J. F. Fernández and J. Rivero, Comput. Phys. 10, 83 (1996)). That algorithm requires input of uniform random deviates. We now report a new version of it that needs no input and is nearly as fast. The only limitation we predict thus far for the quality of the output is the amount of computer memory available. Performance results under various tests will be reported. The algorithm works in close analogy to the setup that is often used in statistical physics in order to obtain the Gibbs distribution. N numbers that are stored in N registers change with time according to the rules of the algorithm, keeping their sum constant. Further details will be given.
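For reference, the standard inverse-transform generator (the baseline such algorithms are measured against, not the register-based method of the abstract) is a two-liner: if U is uniform on (0, 1], then -ln(U)/λ is exponentially distributed with rate λ.

```python
import math
import random

def exponential_deviate(rate, rng=random):
    # Inverse transform: F^-1(u) = -ln(1 - u) / rate; since 1 - U is uniform
    # whenever U is, -ln(U) / rate works too (guard against u == 0).
    u = rng.random()
    while u == 0.0:
        u = rng.random()
    return -math.log(u) / rate

random.seed(12345)
samples = [exponential_deviate(1.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)   # close to 1/rate = 1.0
```

The speed claim in the abstract is precisely about avoiding the per-deviate logarithm (and, in the new version, the uniform input) that this baseline requires.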
Thermally induced spin rate ripple on spacecraft with long radial appendages
NASA Technical Reports Server (NTRS)
Fedor, J. V.
1983-01-01
A thermally induced spin rate ripple hypothesis is proposed to explain the spin rate anomaly observed on ISEE-B. It involves the two radial 14.5 meter beryllium copper tape ribbons going in and out of the spacecraft hub shadow. A thermal lag time constant is applied to the thermally induced ribbon displacements, which perturb the spin rate. It is inferred that the averaged thermally induced ribbon displacements are coupled to the ribbon angular motion. A possible exponential buildup of the in-plane motion of the ribbon, which in turn causes the spin rate ripple, ultimately limited by damping in the ribbon and spacecraft, is shown. It is indicated that there is a qualitative increase in the oscillation period and that the thermal lag is fundamental for the period increase. It is found that the numerical parameter values required to agree with the in-orbit initial exponential buildup are reasonable; those required for the ripple period are somewhat extreme.
Juras, Vladimir; Apprich, Sebastian; Szomolanyi, Pavol; Bieri, Oliver; Deligianni, Xeni; Trattnig, Siegfried
2013-10-01
To compare mono- and bi-exponential T2 analysis in healthy and degenerated Achilles tendons using a recently introduced magnetic resonance variable-echo-time sequence (vTE) for T2 mapping. Ten volunteers and ten patients were included in the study. A variable-echo-time sequence was used with 20 echo times. Images were post-processed with both techniques, mono-exponential (T2m) and bi-exponential [short T2 component (T2s) and long T2 component (T2l)]. The number of mono- and bi-exponentially decaying pixels in each region of interest was expressed as a ratio (B/M). Patients were clinically assessed with the Achilles Tendon Rupture Score (ATRS), and these values were correlated with the T2 values. The means of both T2m and T2s were statistically significantly different between patients and volunteers; however, for T2s, the P value was lower. In patients, the Pearson correlation coefficient between ATRS and T2s was -0.816 (P = 0.007). The proposed variable-echo-time sequence can be successfully used as an alternative to UTE sequences, with added benefits such as a short imaging time along with relatively high resolution, minimised blurring artefacts, and minimised susceptibility and chemical shift artefacts. Bi-exponential T2 calculation is superior to mono-exponential in terms of statistical significance for the diagnosis of Achilles tendinopathy. • Magnetic resonance imaging offers new insight into healthy and diseased Achilles tendons • Bi-exponential T2 calculation in Achilles tendons is more beneficial than mono-exponential • A short T2 component correlates strongly with clinical score • Variable-echo-time sequences can be used instead of ultrashort echo time sequences.
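The mono- versus bi-exponential distinction can be illustrated with synthetic data (the time constants and fractions below are invented, not the study's values): a log-linear least-squares mono-exponential fit to a genuinely bi-exponential decay leaves systematic residuals, and its single effective T2 lands between the two true components.

```python
import math

# Synthetic bi-exponential decay: hypothetical T2s = 10 ms, T2l = 40 ms.
T2S, T2L, FRAC = 10.0, 40.0, 0.6
echo_times = [2.0 * i for i in range(1, 21)]        # 2..40 ms, 20 echoes
signal = [FRAC * math.exp(-t / T2S) + (1 - FRAC) * math.exp(-t / T2L)
          for t in echo_times]

# Mono-exponential fit via linear regression on ln(signal) = ln(A) - t/T2.
n = len(echo_times)
xbar = sum(echo_times) / n
ybar = sum(math.log(s) for s in signal) / n
slope = (sum((t - xbar) * (math.log(s) - ybar) for t, s in zip(echo_times, signal))
         / sum((t - xbar) ** 2 for t in echo_times))
T2_mono = -1.0 / slope
A = math.exp(ybar - slope * xbar)

sse = sum((A * math.exp(-t / T2_mono) - s) ** 2 for t, s in zip(echo_times, signal))
print(T2_mono)   # single "effective" T2, between the 10 and 40 ms components
print(sse)       # nonzero: a mono-exponential cannot match the data exactly
```

In real data the bi-exponential model is fit by nonlinear least squares per pixel, and the B/M ratio in the abstract counts how many pixels genuinely need the second component.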
Entangled Dynamics in Macroscopic Quantum Tunneling of Bose-Einstein Condensates
NASA Astrophysics Data System (ADS)
Alcala, Diego A.; Glick, Joseph A.; Carr, Lincoln D.
2017-05-01
Tunneling of a quasibound state is a nonsmooth process in the entangled many-body case. Using time-evolving block decimation, we show that repulsive (attractive) interactions speed up (slow down) tunneling. While the escape time scales exponentially with small interactions, the maximization time of the von Neumann entanglement entropy between the remaining quasibound and escaped atoms scales quadratically. Stronger interactions require higher-order corrections. Entanglement entropy is maximized when about half the atoms have escaped.
Cao, Boqiang; Zhang, Qimin; Ye, Ming
2016-11-29
We present a mean-square exponential stability analysis for impulsive stochastic genetic regulatory networks (GRNs) with time-varying delays and reaction-diffusion driven by fractional Brownian motion (fBm). By constructing a Lyapunov functional and using linear matrix inequalities for stochastic analysis, we derive sufficient conditions that guarantee the exponential stability of the stochastic model of impulsive GRNs in the mean-square sense. Meanwhile, the corresponding results are obtained for GRNs with constant time delays and standard Brownian motion. Finally, an example is presented to illustrate our results of the mean-square exponential stability analysis.
NASA Astrophysics Data System (ADS)
Sazuka, Naoya
2007-03-01
We analyze waiting times for price changes in a foreign currency exchange rate. Recent empirical studies of high-frequency financial data support the view that trades in financial markets do not follow a Poisson process and that the waiting times between trades are not exponentially distributed. Here we show that our data are well approximated by a Weibull distribution rather than an exponential distribution in the non-asymptotic regime. Moreover, we quantitatively evaluate how far the empirical data are from an exponential distribution using a Weibull fit. Finally, we discuss a transition between a Weibull law and a power law in the long-time asymptotic regime.
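A quick way to see the non-exponential character (the shape parameter below is illustrative, not the paper's fitted value): Weibull waiting times can be sampled by inverse transform, W = scale * (-ln U)^(1/k), and for shape k < 1 the coefficient of variation exceeds the value 1 that characterizes the exponential (k = 1) case.

```python
import math
import random

def weibull_waiting_time(shape, scale=1.0, rng=random):
    # Inverse transform: if U ~ Uniform(0,1), scale * (-ln(1-U))^(1/shape)
    # is Weibull-distributed with the given shape and scale.
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

def coeff_of_variation(xs):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return math.sqrt(var) / m

random.seed(7)
cv_weibull = coeff_of_variation([weibull_waiting_time(0.6) for _ in range(50_000)])
cv_expon = coeff_of_variation([weibull_waiting_time(1.0) for _ in range(50_000)])
print(cv_weibull)   # clearly above 1 for shape < 1: overdispersed waiting times
print(cv_expon)     # near 1: shape = 1 is exactly the exponential case
```

Comparing such summary statistics (or full maximum-likelihood fits) of empirical waiting times against the exponential benchmark is the kind of quantitative deviation measure the abstract describes.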
Giovannetti, Vittorio; Lloyd, Seth; Maccone, Lorenzo
2008-04-25
A random access memory (RAM) uses n bits to randomly address N = 2^n distinct memory cells. A quantum random access memory (QRAM) uses n qubits to address any quantum superposition of N memory cells. We present an architecture that exponentially reduces the requirements for a memory call: O(log N) switches need to be thrown instead of the N used in conventional (classical or quantum) RAM designs. This yields a more robust QRAM algorithm, as it in general requires entanglement among exponentially fewer gates, and leads to an exponential decrease in the power needed for addressing. A quantum optical implementation is presented.
Makri, Nancy
2014-10-07
The real-time path integral representation of the reduced density matrix for a discrete system in contact with a dissipative medium is rewritten in terms of the number of blips, i.e., elementary time intervals over which the forward and backward paths are not identical. For a given set of blips, it is shown that the path sum with respect to the coordinates of all remaining time points is isomorphic to that for the wavefunction of a system subject to an external driving term and thus can be summed by an inexpensive iterative procedure. This exact decomposition reduces the number of terms by a factor that increases exponentially with propagation time. Further, under conditions (moderately high temperature and/or dissipation strength) that lead primarily to incoherent dynamics, the "fully incoherent limit" zero-blip term of the series provides a reasonable approximation to the dynamics, and the blip series converges rapidly to the exact result. Retention of only the blips required for satisfactory convergence leads to speedup of full-memory path integral calculations by many orders of magnitude.
Mercury BLASTP: Accelerating Protein Sequence Alignment
Jacob, Arpith; Lancaster, Joseph; Buhler, Jeremy; Harris, Brandon; Chamberlain, Roger D.
2008-01-01
Large-scale protein sequence comparison is an important but compute-intensive task in molecular biology. BLASTP is the most popular tool for comparative analysis of protein sequences. In recent years, an exponential increase in the size of protein sequence databases has required either exponentially more running time or a cluster of machines to keep pace. To address this problem, we have designed and built a high-performance FPGA-accelerated version of BLASTP, Mercury BLASTP. In this paper, we describe the architecture of the portions of the application that are accelerated in the FPGA, and we also describe the integration of these FPGA-accelerated portions with the existing BLASTP software. We have implemented Mercury BLASTP on a commodity workstation with two Xilinx Virtex-II 6000 FPGAs. We show that the new design runs 11-15 times faster than software BLASTP on a modern CPU while delivering close to 99% identical results. PMID:19492068
Rapid Global Fitting of Large Fluorescence Lifetime Imaging Microscopy Datasets
Warren, Sean C.; Margineanu, Anca; Alibhai, Dominic; Kelly, Douglas J.; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Katan, Matilda
2013-01-01
Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. 
We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell homo-FRET data. A software package implementing this algorithm, FLIMfit, is available under an open source licence through the Open Microscopy Environment. PMID:23940626
Stuebner, Michael; Haider, Mansoor A
2010-06-18
A new and efficient method for numerical solution of the continuous spectrum biphasic poroviscoelastic (BPVE) model of articular cartilage is presented. Development of the method is based on a composite Gauss-Legendre quadrature approximation of the continuous spectrum relaxation function that leads to an exponential series representation. The separability property of the exponential terms in the series is exploited to develop a numerical scheme that can be reduced to an update rule requiring retention of the strain history at only the previous time step. The cost of the resulting temporal discretization scheme is O(N) for N time steps. Application and calibration of the method is illustrated in the context of a finite difference solution of the one-dimensional confined compression BPVE stress-relaxation problem. Accuracy of the numerical method is demonstrated by comparison to a theoretical Laplace transform solution for a range of viscoelastic relaxation times that are representative of articular cartilage. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Ellis, Amy B.; Ozgur, Zekiye; Kulow, Torrey; Dogan, Muhammed F.; Amidon, Joel
2016-01-01
This article presents an Exponential Growth Learning Trajectory (EGLT), a trajectory identifying and characterizing middle grade students' initial and developing understanding of exponential growth as a result of an instructional emphasis on covariation. The EGLT explicates students' thinking and learning over time in relation to a set of tasks…
Compact exponential product formulas and operator functional derivative
NASA Astrophysics Data System (ADS)
Suzuki, Masuo
1997-02-01
A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians.
Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.
2016-01-01
Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
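Temporal binning of the kind described (256 time bins reduced to 42) amounts to summing groups of adjacent time channels. A minimal sketch, assuming integer-factor grouping with leftover channels discarded (an illustration, not SPCImage's implementation):

```python
def rebin_decay(counts, factor):
    """Sum each group of `factor` adjacent time bins into one coarser bin.
    Trailing bins that do not fill a complete group are dropped:
    256 bins with factor 6 -> 42 bins (4 bins discarded)."""
    n = (len(counts) // factor) * factor
    return [sum(counts[i:i + factor]) for i in range(0, n, factor)]
```

Each coarse bin then holds roughly `factor` times as many photons, which is what allows accurate fits at the ~6-fold lower photon counts reported above.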
Quark mixing and exponential form of the Cabibbo-Kobayashi-Maskawa matrix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhukovsky, K. V., E-mail: zhukovsk@phys.msu.ru; Dattoli, D., E-mail: dattoli@frascati.enea.i
2008-10-15
Various forms of representation of the mixing matrix are discussed. An exponential parametrization e^A of the Cabibbo-Kobayashi-Maskawa matrix is considered in the context of the unitarity requirement, this parametrization being the most general form of the mixing matrix. An explicit representation for the exponential mixing matrix in terms of the first and second powers of the matrix A exclusively is obtained. This representation makes it possible to calculate the exponential mixing matrix readily in any order of the expansion in the small parameter λ. The generation of new unitary parametric representations of the mixing matrix with the aid of the exponential matrix is demonstrated.
Asymptotic stability estimates near an equilibrium point
NASA Astrophysics Data System (ADS)
Dumas, H. Scott; Meyer, Kenneth R.; Palacián, Jesús F.; Yanguas, Patricia
2017-07-01
We use the error bounds for adiabatic invariants found in the work of Chartier, Murua and Sanz-Serna [3] to bound the solutions of a Hamiltonian system near an equilibrium over exponentially long times. Our estimates depend only on the linearized system and not on the higher order terms as in KAM theory, nor do we require any steepness or convexity conditions as in Nekhoroshev theory. We require that the equilibrium point where our estimate applies satisfy a type of formal stability called Lie stability.
Galland, Paul
2002-09-01
The quantitative relation between gravitropism and phototropism was analyzed for light-grown coleoptiles of Avena sativa (L.). With respect to gravitropism the coleoptiles obeyed the sine law. To study the interaction between light and gravity, coleoptiles were inclined at variable angles and irradiated for 7 h with unilateral blue light (466 nm) impinging at right angles relative to the axis of the coleoptile. The phototropic stimulus was applied from the side opposite to the direction of gravitropic bending. The fluence rate that was required to counteract the negative gravitropism increased exponentially with the sine of the inclination angle. To achieve balance, a linear increase in the gravitropic stimulus required compensation by an exponential increase in the counteracting phototropic stimulus. The establishment of photogravitropic equilibrium during continuous unilateral irradiation is thus determined by two different laws: the well-known sine law for gravitropism and a novel exponential law for phototropism described in this work.
Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.
2016-01-01
We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373
Scalability, Complexity and Reliability in Quantum Information Processing
2007-03-01
hidden subgroup framework to abelian groups which are not finitely generated. An extension of the basic algorithm breaks the Buchmann-Williams...finding short lattice vectors. In [2], we showed that the generalization of the standard method --- random coset state preparation followed by Fourier...sampling --- required exponential time for sufficiently non-abelian groups, including the symmetric group, at least when the Fourier transforms are
Human population and atmospheric carbon dioxide growth dynamics: Diagnostics for the future
NASA Astrophysics Data System (ADS)
Hüsler, A. D.; Sornette, D.
2014-10-01
We analyze the growth rates of human population and of atmospheric carbon dioxide by comparing the relative merits of two benchmark models, the exponential law and the finite-time-singular (FTS) power law. The latter results from positive feedbacks, either direct or mediated by other dynamical variables, as shown in our presentation of a simple endogenous macroeconomic growth model that couples human population (labor in economic terms), capital, and technology (proxied by CO2 emissions). Human population, in the context of our energy-intensive economies, is arguably the most important underlying driving variable of the carbon dioxide content of the atmosphere. Using some of the best databases available, we perform empirical analyses confirming that the human population on Earth grew super-exponentially until the mid-1960s, followed by decelerated sub-exponential growth, with a tendency to plateau at merely exponential growth in the last decade with an average growth rate of 1.0% per year. In contrast, we find that the carbon dioxide content of the atmosphere continued to accelerate super-exponentially until 1990, with a transition to progressive deceleration since then, with an average growth rate of approximately 2% per year in the last decade. To return the atmospheric CO2 content to or below its 1990 level, as has been the broadly advertised goal of international treaties since 1990, requires herculean changes: from a dynamical point of view, the approximately exponential growth must turn not only to negative acceleration but also to negative velocity to reverse the trend.
Flash spectroscopy of purple membrane.
Xie, A H; Nagle, J F; Lozier, R H
1987-01-01
Flash spectroscopy data were obtained for purple membrane fragments at pH 5, 7, and 9 for seven temperatures from 5 degrees to 35 degrees C, at the magic angle for actinic versus measuring beam polarizations, at fifteen wavelengths from 380 to 700 nm, and for about five decades of time from 1 microsecond to completion of the photocycle. Signal-to-noise ratios are as high as 500. Systematic errors involving beam geometries, light scattering, absorption flattening, photoselection, temperature fluctuations, partial dark adaptation of the sample, unwanted actinic effects, and cooperativity were eliminated, compensated for, or are shown to be irrelevant for the conclusions. Using nonlinear least squares techniques, all data at one temperature and one pH were fitted to sums of exponential decays, which is the form required if the system obeys conventional first-order kinetics. The rate constants obtained have well behaved Arrhenius plots. Analysis of the residual errors of the fitting shows that seven exponentials are required to fit the data to the accuracy of the noise level. PMID:3580488
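As a simplified illustration of fitting data to sums of exponential decays: once candidate rate constants are fixed, the amplitudes follow from ordinary linear least squares (the full nonlinear analysis also varies the rates; the two-term, noise-free sketch below, with hypothetical rate constants, shows only the linear amplitude step):

```python
import math

def fit_two_exponentials(ts, ys, k1, k2):
    """Given fixed rate constants k1, k2, find amplitudes a1, a2 of
    y(t) = a1*exp(-k1*t) + a2*exp(-k2*t) by linear least squares;
    the 2x2 normal equations are solved with Cramer's rule."""
    c1 = [math.exp(-k1 * t) for t in ts]
    c2 = [math.exp(-k2 * t) for t in ts]
    s11 = sum(x * x for x in c1)
    s12 = sum(x * y for x, y in zip(c1, c2))
    s22 = sum(y * y for y in c2)
    b1 = sum(x * y for x, y in zip(c1, ys))
    b2 = sum(x * y for x, y in zip(c2, ys))
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (s11 * b2 - s12 * b1) / det
    return a1, a2
```

In practice one nests this linear solve inside a nonlinear search over the rate constants and adds terms until, as in the abstract, the residuals reach the noise level.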
NASA Astrophysics Data System (ADS)
Jeong, Chan-Yong; Kim, Hee-Joong; Hong, Sae-Young; Song, Sang-Hun; Kwon, Hyuck-In
2017-08-01
In this study, we show that the two-stage unified stretched-exponential model describes the time dependence of the threshold voltage shift (ΔV_TH) under long-term positive bias stress more accurately than the traditional stretched-exponential model in amorphous indium-gallium-zinc oxide (a-IGZO) thin-film transistors (TFTs). ΔV_TH is dominated by electron trapping at short stress times, and the contribution of trap-state generation becomes significant as the stress time increases. The two-stage unified stretched-exponential model provides useful information not only for evaluating the long-term electrical stability and lifetime of a-IGZO TFTs but also for understanding the stress-induced degradation mechanism.
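For reference, the traditional single-stage stretched-exponential model mentioned above has the standard form ΔV_TH(t) = ΔV0·(1 - exp(-(t/τ)^β)). The sketch below evaluates it with hypothetical parameter values; the paper's two-stage unified model adds a second, trap-generation stage that is not reproduced here:

```python
import math

def dvth_stretched(t, dv0, tau, beta):
    """Traditional stretched-exponential bias-stress shift:
    dVth(t) = dv0 * (1 - exp(-(t/tau)**beta)).
    beta < 1 stretches the transition over many decades of time."""
    return dv0 * (1.0 - math.exp(-((t / tau) ** beta)))
```

Note that at t = τ the shift equals dv0·(1 - 1/e) regardless of β; β only controls how broadly the transition is spread in time.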
Exponential Decay of Reconstruction Error from Binary Measurements of Sparse Signals
2014-08-01
that the required condition of Corollary 9, namely q ≥ C δ^{-4} s̃ log(n/s̃), is still satisfied. The result follows from massaging the equations, as...study of the relationship of heart attacks to various factors may test whether certain subjects have heart attacks in a short window of time and other...subjects have heart attacks in a long window of time. The main message of this paper is that by carefully choosing this threshold the accuracy of
Exponentially decaying interaction potential of cavity solitons
NASA Astrophysics Data System (ADS)
Anbardan, Shayesteh Rahmani; Rimoldi, Cristina; Kheradmand, Reza; Tissoni, Giovanna; Prati, Franco
2018-03-01
We analyze the interaction of two cavity solitons in an optically injected vertical cavity surface emitting laser above threshold. We show that they experience an attractive force even when their distance is much larger than their diameter, and eventually they merge. Since the merging time depends exponentially on the initial distance, we suggest that the attraction can be associated with an exponentially decaying interaction potential, similar to what is found for hydrophobic materials. We also show that the merging time is simply related to the characteristic times of the laser: the photon lifetime and the carrier lifetime.
Zhang, Guodong; Zeng, Zhigang; Hu, Junhao
2018-01-01
This paper is concerned with the global exponential dissipativity of memristive inertial neural networks with discrete and distributed time-varying delays. By constructing appropriate Lyapunov-Krasovskii functionals, some new sufficient conditions ensuring global exponential dissipativity of memristive inertial neural networks are derived. Moreover, the globally exponential attractive sets and positive invariant sets are also presented here. In addition, the new proposed results here complement and extend the earlier publications on conventional or memristive neural network dynamical systems. Finally, numerical simulations are given to illustrate the effectiveness of obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Teaching Exponential Growth and Decay: Examples from Medicine
ERIC Educational Resources Information Center
Hobbie, Russell K.
1973-01-01
A treatment of exponential growth and decay is sketched which does not require knowledge of calculus, and hence, it can be applied to many cases in the biological and medical sciences. Some examples are bacterial growth, sterilization, clearance, and drug absorption. (DF)
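In the no-calculus spirit of that treatment, exponential decay can be phrased entirely in terms of half-life. The drug-clearance sketch below uses a hypothetical 6-hour half-life purely for illustration:

```python
def remaining_fraction(t, half_life):
    """Exponential decay via half-life: N(t)/N0 = (1/2)**(t / half_life).
    E.g., a drug with a 6-hour half-life is at 25% after 12 hours."""
    return 0.5 ** (t / half_life)
```

The same one-liner covers the abstract's other examples (bacterial growth uses a doubling time, i.e., base 2 with a positive exponent).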
Performance of time-series methods in forecasting the demand for red blood cell transfusion.
Pereira, Arturo
2004-05-01
Planning future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care, university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the more recent one to test the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)(12) model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the +/- 10 percent interval of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rates for the three methods were 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within the +/- 10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in the planning of blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
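The simplest member of the exponential-smoothing family can be sketched in a few lines. This is simple exponential smoothing only; the Holt-Winters models used in the study add trend and seasonal components, and the smoothing constant alpha below is a hypothetical choice:

```python
def simple_exp_smoothing_forecast(series, alpha):
    """Simple exponential smoothing: the level is an exponentially
    weighted average of past observations,
        level <- alpha * y_t + (1 - alpha) * level,
    and the flat forecast for any horizon is the final level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1.0 - alpha) * level
    return level
```

Recent months thus receive geometrically larger weights, which is what lets the forecast adapt to slow drifts in monthly RBC demand without chasing noise.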
Time-ordered exponential on the complex plane and Gell-Mann-Low formula as a mathematical theorem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Futakuchi, Shinichiro; Usui, Kouta
2016-04-15
The time-ordered exponential representation of a complex time evolution operator in the interaction picture is studied. Using the complex time evolution, we prove the Gell-Mann-Low formula under certain abstract conditions in a mathematically rigorous manner. We apply the abstract results to quantum electrodynamics with cutoffs.
NASA Astrophysics Data System (ADS)
Schaefer, Bradley E.; Dyson, Samuel E.
1996-08-01
A common gamma-ray burst light-curve shape is the "FRED," or "fast-rise exponential-decay." But how exponential is the tail? Are the tails merely decaying with some smoothly decreasing decline rate, or is the functional form an exponential to within the uncertainties? If the shape really is an exponential, then it would be reasonable to assign some physically significant time scale to the burst. That is, there would have to be some specific mechanism that produces the characteristic decay profile. So if an exponential is found, then we will know that the decay light-curve profile is governed by one mechanism (at least for simple FREDs) instead of by complex or multiple mechanisms. As such, a specific number amenable to theory can be derived for each FRED. We report on the fitting of exponentials (and two other shapes) to the tails of ten bright BATSE bursts. The BATSE trigger numbers are 105, 257, 451, 907, 1406, 1578, 1883, 1885, 1989, and 2193. Our technique was to perform a least-squares fit to the tail from some time after the peak until the light curve approaches background. We find that most FREDs are not exponentials, although a few come close. But since the other candidate shapes come close just as often, we conclude that the FREDs are misnamed.
Statistical modeling of storm-level Kp occurrences
Remick, K.J.; Love, J.J.
2006-01-01
We consider the statistical modeling of the occurrence in time of large-Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large-Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events are statistically distributed with an exponential density function. Fitting an exponential function to the durations between successive large-Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short-duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large-Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fits are performed on wait-time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding these Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
Optimal savings and the value of population.
Arrow, Kenneth J; Bensoussan, Alain; Feng, Qi; Sethi, Suresh P
2007-11-20
We study a model of economic growth in which an exogenously changing population enters in the objective function under total utilitarianism and into the state dynamics as the labor input to the production function. We consider an arbitrary population growth until it reaches a critical level (resp. saturation level) at which point it starts growing exponentially (resp. it stops growing altogether). This requires population as well as capital as state variables. By letting the population variable serve as the surrogate of time, we are still able to depict the optimal path and its convergence to the long-run equilibrium on a two-dimensional phase diagram. The phase diagram consists of a transient curve that reaches the classical curve associated with a positive exponential growth at the time the population reaches the critical level. In the case of an asymptotic population saturation, we expect the transient curve to approach the equilibrium as the population approaches its saturation level. Finally, we characterize the approaches to the classical curve and to the equilibrium.
First Demonstration of Electrostatic Damping of Parametric Instability at Advanced LIGO
NASA Astrophysics Data System (ADS)
Blair, Carl; Gras, Slawek; Abbott, Richard; Aston, Stuart; Betzwieser, Joseph; Blair, David; DeRosa, Ryan; Evans, Matthew; Frolov, Valera; Fritschel, Peter; Grote, Hartmut; Hardwick, Terra; Liu, Jian; Lormand, Marc; Miller, John; Mullavey, Adam; O'Reilly, Brian; Zhao, Chunnong; Abbott, B. P.; Abbott, T. D.; Adams, C.; Adhikari, R. X.; Anderson, S. B.; Ananyeva, A.; Appert, S.; Arai, K.; Ballmer, S. W.; Barker, D.; Barr, B.; Barsotti, L.; Bartlett, J.; Bartos, I.; Batch, J. C.; Bell, A. S.; Billingsley, G.; Birch, J.; Biscans, S.; Biwer, C.; Bork, R.; Brooks, A. F.; Ciani, G.; Clara, F.; Countryman, S. T.; Cowart, M. J.; Coyne, D. C.; Cumming, A.; Cunningham, L.; Danzmann, K.; Da Silva Costa, C. F.; Daw, E. J.; DeBra, D.; DeSalvo, R.; Dooley, K. L.; Doravari, S.; Driggers, J. C.; Dwyer, S. E.; Effler, A.; Etzel, T.; Evans, T. M.; Factourovich, M.; Fair, H.; Fernández Galiana, A.; Fisher, R. P.; Fulda, P.; Fyffe, M.; Giaime, J. A.; Giardina, K. D.; Goetz, E.; Goetz, R.; Gray, C.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, E. D.; Hammond, G.; Hanks, J.; Hanson, J.; Harry, G. M.; Heintze, M. C.; Heptonstall, A. W.; Hough, J.; Izumi, K.; Jones, R.; Kandhasamy, S.; Karki, S.; Kasprzack, M.; Kaufer, S.; Kawabe, K.; Kijbunchoo, N.; King, E. J.; King, P. J.; Kissel, J. S.; Korth, W. Z.; Kuehn, G.; Landry, M.; Lantz, B.; Lockerbie, N. A.; Lundgren, A. P.; MacInnis, M.; Macleod, D. M.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martin, I. W.; Martynov, D. V.; Mason, K.; Massinger, T. J.; Matichard, F.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McIntyre, G.; McIver, J.; Mendell, G.; Merilh, E. L.; Meyers, P. M.; Mittleman, R.; Moreno, G.; Mueller, G.; Munch, J.; Nuttall, L. K.; Oberling, J.; Oppermann, P.; Oram, Richard J.; Ottaway, D. J.; Overmier, H.; Palamos, J. R.; Paris, H. R.; Parker, W.; Pele, A.; Penn, S.; Phelps, M.; Pierro, V.; Pinto, I.; Principe, M.; Prokhorov, L. G.; Puncken, O.; Quetschke, V.; Quintero, E. A.; Raab, F. 
J.; Radkins, H.; Raffai, P.; Reid, S.; Reitze, D. H.; Robertson, N. A.; Rollins, J. G.; Roma, V. J.; Romie, J. H.; Rowan, S.; Ryan, K.; Sadecki, T.; Sanchez, E. J.; Sandberg, V.; Savage, R. L.; Schofield, R. M. S.; Sellers, D.; Shaddock, D. A.; Shaffer, T. J.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sigg, D.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; Sorazu, B.; Staley, A.; Strain, K. A.; Tanner, D. B.; Taylor, R.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Torrie, C. I.; Traylor, G.; Vajente, G.; Valdes, G.; van Veggel, A. A.; Vecchio, A.; Veitch, P. J.; Venkateswara, K.; Vo, T.; Vorvick, C.; Walker, M.; Ward, R. L.; Warner, J.; Weaver, B.; Weiss, R.; Weßels, P.; Willke, B.; Wipf, C. C.; Worden, J.; Wu, G.; Yamamoto, H.; Yancey, C. C.; Yu, Hang; Yu, Haocun; Zhang, L.; Zucker, M. E.; Zweizig, J.; LSC Instrument Authors
2017-04-01
Interferometric gravitational wave detectors operate with high optical power in their arms in order to achieve high shot-noise limited strain sensitivity. A significant limitation to increasing the optical power is the phenomenon of three-mode parametric instabilities, in which the laser field in the arm cavities is scattered into higher-order optical modes by acoustic modes of the cavity mirrors. The optical modes can further drive the acoustic modes via radiation pressure, potentially producing an exponential buildup. One proposed technique to stabilize parametric instability is active damping of acoustic modes. We report here the first demonstration of damping a parametrically unstable mode using active feedback forces on the cavity mirror. A 15 538 Hz mode that grew exponentially with a time constant of 182 sec was damped using electrostatic actuation, with a resulting decay time constant of 23 sec. An average control force of 0.03 nN was required to maintain the acoustic mode at its minimum amplitude.
In situ passivation of GaAsP nanowires.
Himwas, C; Collin, S; Rale, P; Chauvin, N; Patriarche, G; Oehler, F; Julien, F H; Travers, L; Harmand, J-C; Tchernycheva, M
2017-12-08
We report on the structural and optical properties of GaAsP nanowires (NWs) grown by molecular-beam epitaxy. By adjusting the alloy composition in the NWs, the transition energy was tuned to the optimal value required for tandem III-V/silicon solar cells. We discovered that an unintentional shell was also formed during the GaAsP NW growth. The NW surface was passivated by an in situ deposition of a radial Ga(As)P shell. Different shell compositions and thicknesses were investigated. We demonstrate that the optimal passivation conditions for GaAsP NWs (with a gap of 1.78 eV) are obtained with a 5 nm thick GaP shell. This passivation enhances the luminescence intensity of the NWs by 2 orders of magnitude and yields a longer luminescence decay. The luminescence dynamics changes from single exponential decay with a 4 ps characteristic time in non-passivated NWs to a bi-exponential decay with characteristic times of 85 and 540 ps in NWs with GaP shell passivation.
Global exponential stability for switched memristive neural networks with time-varying delays.
Xin, Youming; Li, Yuxia; Cheng, Zunshui; Huang, Xia
2016-08-01
This paper considers the problem of exponential stability for switched memristive neural networks (MNNs) with time-varying delays. Unlike most existing papers, we model a memristor as a continuous system and view switched MNNs as switched neural networks with uncertain time-varying parameters. Based on the average dwell time technique, the mode-dependent average dwell time technique, and the multiple Lyapunov-Krasovskii functional approach, two delay-dependent conditions, formulated as linear matrix inequalities (LMIs), are derived to design the switching signal and guarantee the exponential stability of the considered neural networks. Finally, the effectiveness of the theoretical results is demonstrated by two numerical examples.
Fast and accurate fitting and filtering of noisy exponentials in Legendre space.
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the sum of squared differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on average, more precise than least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters.
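The fitting idea can be sketched numerically. In this illustration (synthetic data, an assumed polynomial degree, and an invented grid search over the time constant, not the authors' algorithm), a noisy exponential is projected onto a low-dimensional Legendre basis and the parameters are fitted by matching coefficients in that space:

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)

# Noisy single exponential sampled on [0, 1] (hypothetical data):
# y(t) = A * exp(-t / tau) + noise.
A_true, tau_true = 2.0, 0.3
t = np.linspace(0.0, 1.0, 501)
y = A_true * np.exp(-t / tau_true) + 0.05 * rng.standard_normal(t.size)

# Project the data onto a low-dimensional Legendre basis on x in [-1, 1].
x = 2.0 * t - 1.0
deg = 8
c_data = L.legfit(x, y, deg)

# Fit in Legendre space: for each candidate tau the model coefficients are
# linear in A, so A has a closed-form least-squares solution per tau.
def legendre_loss(tau):
    c_model = L.legfit(x, np.exp(-t / tau), deg)   # coefficients for A = 1
    a_hat = np.dot(c_data, c_model) / np.dot(c_model, c_model)
    return np.sum((c_data - a_hat * c_model) ** 2), a_hat

taus = np.linspace(0.05, 1.0, 400)
losses, amps = zip(*(legendre_loss(tt) for tt in taus))
best = int(np.argmin(losses))
tau_hat, a_hat = taus[best], amps[best]
print(f"tau ~ {tau_hat:.3f} (true 0.3), A ~ {a_hat:.3f} (true 2.0)")
```

The comparison against each candidate model happens over 9 Legendre coefficients rather than 501 time samples, which is the source of the speedup the abstract reports.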
Hu, Jin; Wang, Jun
2015-06-01
In recent years, complex-valued recurrent neural networks have been developed and analysed in depth because they perform well in modelling applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is often necessary to use a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results of several numerical examples are presented to illustrate the theoretical results, and an application to associative memory is also given.
NASA Astrophysics Data System (ADS)
Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung
2007-07-01
This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, where warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability, RY(t), and the mean time to system failure, MTTF, are derived. Sensitivity and relative sensitivity analyses of the system reliability and the mean time to failure with respect to the system parameters are also investigated.
Experimental quantum computing to solve systems of linear equations.
Cai, X-D; Weedbrook, C; Su, Z-E; Chen, M-C; Gu, Mile; Zhu, M-J; Li, Li; Liu, Nai-Le; Lu, Chao-Yang; Pan, Jian-Wei
2013-06-07
Solving linear systems of equations is ubiquitous in all areas of science and engineering. With rapidly growing data sets, such a task can be intractable for classical computers, as the best known classical algorithms require a time proportional to the number of variables N. A recently proposed quantum algorithm shows that quantum computers could solve linear systems in a time scale of order log(N), giving an exponential speedup over classical computers. Here we realize the simplest instance of this algorithm, solving 2×2 linear equations for various input vectors on a quantum computer. We use four quantum bits and four controlled logic gates to implement every subroutine required, demonstrating the working principle of this algorithm.
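For reference, the classical counterpart of the demonstrated instance is the direct solution of a 2×2 system. The matrix and vector below are illustrative stand-ins, not the experiment's exact inputs; the normalization step reflects that the HHL algorithm encodes the solution in a quantum state, i.e. only up to its norm:

```python
import numpy as np

# Illustrative 2x2 linear system A x = b (not the experimental instance).
A = np.array([[1.5, 0.5],
              [0.5, 1.5]])
b = np.array([1.0, 0.0])

# Classical solution, then normalization to mimic the quantum-state output.
x = np.linalg.solve(A, b)
x_normalized = x / np.linalg.norm(x)
print(x_normalized)
```

The claimed exponential speedup concerns how the cost scales with the number of variables N (log N versus at least N classically), not this trivial instance, which merely demonstrates the subroutines.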
NASA Astrophysics Data System (ADS)
Ma, Shuo; Kang, Yanmei
2018-04-01
In this paper, the exponential synchronization of stochastic neutral-type neural networks with time-varying delay and Lévy noise under non-Lipschitz condition is investigated for the first time. Using the general Itô's formula and the nonnegative semi-martingale convergence theorem, we derive general sufficient conditions of two kinds of exponential synchronization for the drive system and the response system with adaptive control. Numerical examples are presented to verify the effectiveness of the proposed criteria.
How required reserve ratio affects distribution and velocity of money
NASA Astrophysics Data System (ADS)
Xi, Ning; Ding, Ning; Wang, Yougui
2005-11-01
In this paper the dependence of the wealth distribution and the velocity of money on the required reserve ratio is examined based on a random transfer model of money and computer simulations. A fractional reserve banking system is introduced to the model, where money creation can be achieved by bank loans and the monetary aggregate is determined by the monetary base and the required reserve ratio. It is shown that monetary wealth follows an asymmetric Laplace distribution and the latency time of money follows an exponential distribution. The expressions for the monetary wealth distribution and the velocity of money in terms of the required reserve ratio are presented and are in good agreement with simulation results.
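The random-transfer mechanism underlying such models can be sketched without the banking sector (agent count, money stock, and step count below are illustrative). In this basic version the stationary wealth distribution is known to be approximately exponential (Boltzmann-Gibbs); the paper's contribution is that adding bank loans and a required reserve ratio reshapes it into an asymmetric Laplace distribution:

```python
import random

random.seed(42)

# Minimal random-transfer ("money game") model: at each step a randomly
# chosen agent with a positive balance hands one unit of money to another
# randomly chosen agent. Total money is conserved.
N, M, steps = 1000, 10_000, 1_000_000
money = [M // N] * N

for _ in range(steps):
    i = random.randrange(N)
    if money[i] > 0:
        j = random.randrange(N)
        money[i] -= 1
        money[j] += 1

mean_wealth = sum(money) / N
frac_below_mean = sum(1 for m in money if m < mean_wealth) / N
# An exponential law predicts roughly 1 - 1/e ~ 63% of agents below the mean.
print(mean_wealth, frac_below_mean)
```

The fraction of agents below the mean is a quick diagnostic of the exponential shape without binning a histogram.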
Wang, Lianwen; Li, Jiangong; Fecht, Hans-Jörg
2011-04-20
Following the report of a single-exponential activation behavior behind the super-Arrhenius structural relaxation of glass-forming liquids in our preceding paper, we find that the non-exponentiality in the structural relaxation of glass-forming liquids is straightforwardly determined by the relaxation time, and could be calculated from the measured relaxation data. Comparisons between the calculated and measured non-exponentialities for typical glass-forming liquids, from fragile to intermediate, convincingly support the present analysis. Hence the origin of the non-exponentiality and its correlation with liquid fragility become clearer.
Slow Crack Growth of Brittle Materials With Exponential Crack-Velocity Formulation. Part 1; Analysis
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.
2002-01-01
Extensive slow-crack-growth (SCG) analysis was made using a primary exponential crack-velocity formulation under three widely used load configurations: constant stress rate, constant stress, and cyclic stress. Although the use of the exponential formulation in determining SCG parameters of a material requires somewhat inconvenient numerical procedures, the resulting solutions presented gave almost the same degree of simplicity in both data analysis and experiments as did the power-law formulation. However, the fact that the inert strength of a material should be known in advance to determine the corresponding SCG parameters was a major drawback of the exponential formulation as compared with the power-law formulation.
Exploring Exponential Decay Using Limited Resources
ERIC Educational Resources Information Center
DePierro, Ed; Garafalo, Fred; Gordon, Patrick
2018-01-01
Science students need exposure to activities that will help them to become familiar with phenomena exhibiting exponential decay. This paper describes an experiment that allows students to determine the rate of thermal energy loss by a hot object to its surroundings. It requires limited equipment, is safe, and gives reasonable results. Students…
McNair, James N; Newbold, J Denis
2012-05-07
Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances.
Parameter Transient Behavior Analysis on Fault Tolerant Control System
NASA Technical Reports Server (NTRS)
Belcastro, Christine (Technical Monitor); Shin, Jong-Yeob
2003-01-01
In a fault tolerant control (FTC) system, a parameter varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. This paper illustrates analysis of a FTC system based on estimated fault parameter transient behavior which may include false fault detections during a short time interval. Using Lyapunov function analysis, the upper bound of an induced-L2 norm of the FTC system performance is calculated as a function of a fault detection time and the exponential decay rate of the Lyapunov function.
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-01-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that the finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first, we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem. PMID:20608868
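The classical continuous-time rescaling test that the paper adapts can be sketched as follows. The intensity function and simulation parameters are illustrative, and the authors' discrete-time corrections are not implemented here; for an inhomogeneous Poisson process the rescaled ISIs should be Exp(1), equivalently u = 1 - exp(-z) should be Uniform(0, 1):

```python
import numpy as np

rng = np.random.default_rng(1)

# Inhomogeneous Poisson intensity (illustrative).
def lam(t):
    return 20.0 + 15.0 * np.sin(2.0 * np.pi * t)

# Simulate spikes by thinning a homogeneous process with rate lam_max.
lam_max, T = 35.0, 200.0
cand = np.cumsum(rng.exponential(1.0 / lam_max, size=20000))
cand = cand[cand < T]
spikes = cand[rng.uniform(size=cand.size) < lam(cand) / lam_max]

# Rescale: z_k = Lambda(t_k) - Lambda(t_{k-1}), where Lambda is the
# cumulative intensity, here available in closed form.
def Lam(t):
    return 20.0 * t + 15.0 * (1.0 - np.cos(2.0 * np.pi * t)) / (2.0 * np.pi)

z = np.diff(Lam(spikes))
u = np.sort(1.0 - np.exp(-z))

# One-sample KS statistic against Uniform(0, 1).
n = u.size
ks = np.max(np.maximum(np.arange(1, n + 1) / n - u, u - np.arange(n) / n))
print(n, ks)   # for a correct model, ks is roughly below 1.36 / sqrt(n)
```

With binned (discrete-time) models the same recipe produces rescaled ISIs that are not exponential, which is exactly the failure mode the abstract describes.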
The photoluminescence of a fluorescent lamp: didactic experiments on the exponential decay
NASA Astrophysics Data System (ADS)
Onorato, Pasquale; Gratton, Luigi; Malgieri, Massimiliano; Oss, Stefano
2017-01-01
The lifetimes of the photoluminescent compounds contained in the coating of compact fluorescent lamps are usually measured using specialised instruments, including pulsed lasers and/or spectrofluorometers. Here we discuss how some low-cost apparatuses, based on either sensors for the educational lab or commercial digital photo cameras, can be employed for the same purpose. The experiments do not require that the luminescent phosphors be hazardously extracted from the compact fluorescent lamp, which also contains mercury. We obtain lifetime measurements for specific fluorescent elements of the bulb coating in good agreement with the known values. We also address the physical mechanisms on which fluorescent lamps are based in a simplified way, suitable for undergraduate students, and we discuss in detail the physics of the lamp switch-off by analysing the time-dependent spectrum, measured through a commercial fibre-optic spectrometer. Since the experiment is not hazardous in any way, requires a simple setup with instruments commonly found in educational labs, and focuses on the typical features of exponential decay, it is suitable for the undergraduate laboratory.
A Nonequilibrium Rate Formula for Collective Motions of Complex Molecular Systems
NASA Astrophysics Data System (ADS)
Yanao, Tomohiro; Koon, Wang Sang; Marsden, Jerrold E.
2010-09-01
We propose a compact reaction rate formula that accounts for a non-equilibrium distribution of residence times of complex molecules, based on a detailed study of the coarse-grained phase space of a reaction coordinate. We take the structural transition dynamics of a six-atom Morse cluster between two isomers as a prototype of multi-dimensional molecular reactions. The residence-time distribution of one of the isomers shows an exponential decay, while that of the other isomer deviates markedly from the exponential form and has multiple peaks. Our rate formula explains such equilibrium and non-equilibrium distributions of residence times in terms of the rates of diffusion of energy and of the phase of the oscillations of the reaction coordinate. Rapid diffusion of energy and phase generally gives rise to an exponential decay of the residence-time distribution, while slow diffusion gives rise to a non-exponential decay with multiple peaks. We finally make a conjecture about a general relationship between the rates of diffusion and the symmetry of molecular mass distributions.
Cai, Zuowei; Huang, Lihong; Zhang, Lingling
2015-05-01
This paper investigates the problem of exponential synchronization of time-varying delayed neural networks with discontinuous neuron activations. Under the extended Filippov differential inclusion framework, by designing a discontinuous state-feedback controller and using some analytic techniques, new testable algebraic criteria are obtained to realize two different kinds of global exponential synchronization of the drive-response system. Moreover, we give the estimated rate of exponential synchronization, which depends on the delays and system parameters. The obtained results extend some previous works on synchronization of delayed neural networks with not only continuous but also discontinuous activations. Finally, numerical examples are provided to show the correctness of our analysis via computer simulations. Our method and theoretical results are of significance in the design of synchronized neural network circuits involving discontinuous factors and time-varying delays.
NASA Astrophysics Data System (ADS)
Lengline, O.; Marsan, D.; Got, J.; Pinel, V.
2007-12-01
The evolution of the seismicity at three basaltic volcanoes (Kilauea, Mauna Loa and Piton de la Fournaise) is analysed during phases of magma accumulation. We show that the VT seismicity during these time periods is characterized by an exponential increase on long time scales (years). Such an exponential acceleration can be explained by a model of seismicity forced by the replenishment of a magmatic reservoir. The increase in stress in the edifice caused by this replenishment is modeled. This stress history leads to a cumulative number of damage events, i.e. VT earthquakes, following the same exponential increase as found for the seismicity. A long-term seismicity precursor is thus detected at basaltic volcanoes. Although this precursory signal cannot predict the onset times of future eruptions (as no diverging point is present in the model), it may help mitigate volcanic hazards.
Multiserver Queueing Model subject to Single Exponential Vacation
NASA Astrophysics Data System (ADS)
Vijayashree, K. V.; Janani, B.
2018-04-01
A multi-server queueing model subject to a single exponential vacation is considered. Arrivals join the queue according to a Poisson process, and services take place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation together and return at its end; the vacation times are also assumed to be exponentially distributed. The servers then start providing service if there are waiting customers; otherwise they remain idle until the next busy period begins. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the servers are obtained explicitly. Numerical illustrations are also added to visualize the effect of the various parameters.
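Because every distribution involved is exponential, the model can be explored with a small continuous-time Markov simulation. The sketch below uses one plausible reading of the vacation rule and illustrative rates (lam, mu, theta, c are assumptions, not values from the paper):

```python
import random

random.seed(7)

# M/M/c queue with a synchronous single exponential vacation: when the
# system empties, all c servers leave together; arrivals during the
# vacation wait; the vacation ends at rate theta.
lam, mu, theta, c = 3.0, 1.0, 0.5, 4
n, on_vacation = 0, False
t, horizon = 0.0, 50_000.0
area, vac_time = 0.0, 0.0         # integral of n dt, total vacation time

while t < horizon:
    service_rate = 0.0 if on_vacation else mu * min(n, c)
    rate = lam + service_rate + (theta if on_vacation else 0.0)
    dt = random.expovariate(rate)
    area += n * dt
    if on_vacation:
        vac_time += dt
    t += dt
    r = random.uniform(0.0, rate)
    if r < lam:
        n += 1                    # arrival
    elif r < lam + service_rate:
        n -= 1                    # departure
        if n == 0:
            on_vacation = True    # system empty: servers leave together
    else:
        on_vacation = False       # vacation ends; servers resume

print(area / t, vac_time / t)     # mean number in system, vacation fraction
```

Long-run time averages from such a simulation are a useful sanity check on the explicit stationary probabilities the paper derives.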
On the Time-Dependent Analysis of Gamow Decay
ERIC Educational Resources Information Center
Durr, Detlef; Grummt, Robert; Kolb, Martin
2011-01-01
Gamow's explanation of the exponential decay law uses complex "eigenvalues" and exponentially growing "eigenfunctions". This raises the question, how Gamow's description fits into the quantum mechanical description of nature, which is based on real eigenvalues and square integrable wavefunctions. Observing that the time evolution of any…
[Application of exponential smoothing method in prediction and warning of epidemic mumps].
Shi, Yun-ping; Ma, Jia-qi
2010-06-01
To analyze daily data on epidemic mumps in a province from 2004 to 2008 and to establish an exponential smoothing model for prediction. The 2008 mumps epidemic was predicted and warned against by calculating 7-day moving summations, removing the weekend effect from the daily reported mumps cases during 2005-2008, and applying exponential smoothing to the data from 2005 to 2007. The Holt-Winters exponential smoothing performed well: the warning sensitivity was 76.92%, the specificity was 83.33%, and the timeliness rate was 80%. It is practicable to use the exponential smoothing method to warn against epidemic mumps.
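A minimal additive Holt-Winters smoother with a weekly season illustrates this kind of warning scheme. The case counts below are synthetic, and the smoothing parameters and alarm threshold are assumptions, not the values used in the study:

```python
# Additive Holt-Winters with season length m, producing one-step-ahead
# forecasts; level, trend and seasonal terms are updated recursively.
def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.2):
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    season = [y[i] - level for i in range(m)]
    forecasts = []
    for i, obs in enumerate(y):
        s = season[i % m]
        forecasts.append(level + trend + s)
        prev_level = level
        level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[i % m] = gamma * (obs - level) + (1 - gamma) * s
    return forecasts

# Synthetic daily case counts with a weekly cycle and a surge in week 8.
base = [20, 22, 25, 24, 23, 12, 10] * 8
y = base[:]
for i in range(49, 56):
    y[i] += 30                    # injected outbreak

f = holt_winters_additive(y, m=7)
# Warn when the observation exceeds the forecast by a fixed margin
# (a real surveillance system would use a statistical control limit).
alarms = [i for i in range(14, len(y)) if y[i] - f[i] > 15]
print(alarms)
```

Sensitivity, specificity, and timeliness as reported in the abstract are then computed by comparing such alarm days against the known epidemic days.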
A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times
Heath, Tracy A.
2012-01-01
In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
Simple robust control laws for robot manipulators. Part 1: Non-adaptive case
NASA Technical Reports Server (NTRS)
Wen, J. T.; Bayard, D. S.
1987-01-01
A new class of exponentially stabilizing control laws for joint level control of robot arms is introduced. It has been recently recognized that the nonlinear dynamics associated with robotic manipulators have certain inherent passivity properties. More specifically, the derivation of the robotic dynamic equations from Hamilton's principle gives rise to natural Lyapunov functions for control design based on total energy considerations. Through a slight modification of the energy Lyapunov function and the use of a convenient lemma to handle third order terms in the Lyapunov function derivatives, closed loop exponential stability for both the set point and tracking control problems is demonstrated. The exponential convergence property also leads to robustness with respect to friction, bounded modeling errors, and instrument noise. In one new design, the nonlinear terms are decoupled from real-time measurements, which completely removes the requirement for on-line computation of nonlinear terms in the controller implementation. In general, the new class of control laws offers alternatives to the more conventional computed torque method, providing tradeoffs between robustness, computation, and convergence properties. Furthermore, these control laws have the unique feature that they can be adapted in a very simple fashion to achieve asymptotically stable adaptive control.
Turcott, R G; Lowen, S B; Li, E; Johnson, D H; Tsuchitani, C; Teich, M C
1994-01-01
The behavior of lateral-superior-olive (LSO) auditory neurons over large time scales was investigated. Of particular interest was the determination as to whether LSO neurons exhibit the same type of fractal behavior as that observed in primary VIII-nerve auditory neurons. It has been suggested that this fractal behavior, apparent on long time scales, may play a role in optimally coding natural sounds. We found that a nonfractal model, the nonstationary dead-time-modified Poisson point process (DTMP), describes the LSO firing patterns well for time scales greater than a few tens of milliseconds, a region where the specific details of refractoriness are unimportant. The rate is given by the sum of two decaying exponential functions. The process is completely specified by the initial values and time constants of the two exponentials and by the dead-time relation. Specific measures of the firing patterns investigated were the interspike-interval (ISI) histogram, the Fano-factor time curve (FFC), and the serial count correlation coefficient (SCC) with the number of action potentials in successive counting times serving as the random variable. For all the data sets we examined, the latter portion of the recording was well approximated by a single exponential rate function since the initial exponential portion rapidly decreases to a negligible value. Analytical expressions available for the statistics of a DTMP with a single exponential rate function can therefore be used for this portion of the data. Good agreement was obtained among the analytical results, the computer simulation, and the experimental data on time scales where the details of refractoriness are insignificant.(ABSTRACT TRUNCATED AT 250 WORDS)
Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Yutaka, Ono; Furukawa, Toshiaki A.
2017-01-01
Background Several recent studies have shown that total scores on depressive symptom measures in a general population approximate an exponential pattern except for the lower end of the distribution. Furthermore, we confirmed that the exponential pattern is present for the individual item responses on the Center for Epidemiologic Studies Depression Scale (CES-D). To confirm the reproducibility of such findings, we investigated the total score distribution and item responses of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative study. Methods Data were drawn from the National Survey of Midlife Development in the United States (MIDUS), which comprises four subsamples: (1) a national random digit dialing (RDD) sample, (2) oversamples from five metropolitan areas, (3) siblings of individuals from the RDD sample, and (4) a national RDD sample of twin pairs. K6 items are scored using a 5-point scale: “none of the time,” “a little of the time,” “some of the time,” “most of the time,” and “all of the time.” The pattern of total score distribution and item responses were analyzed using graphical analysis and exponential regression model. Results The total score distributions of the four subsamples exhibited an exponential pattern with similar rate parameters. The item responses of the K6 approximated a linear pattern from “a little of the time” to “all of the time” on log-normal scales, while “none of the time” response was not related to this exponential pattern. Discussion The total score distribution and item responses of the K6 showed exponential patterns, consistent with other depressive symptom scales. PMID:28289560
Discrete Deterministic and Stochastic Petri Nets
NASA Technical Reports Server (NTRS)
Zijal, Robert; Ciardo, Gianfranco
1996-01-01
Petri nets augmented with timing specifications have gained wide acceptance in the performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. Integrating exponentially and non-exponentially distributed timing remains one of the major problems for the analysis; it was first attacked for continuous-time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions, in which transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete-time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions, of which deterministic firing times are a special case. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solutions can be obtained by standard techniques. A comprehensive algorithm and some state-space reduction techniques for the analysis of DDSPNs are presented, comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete-time models.
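The geometric approximation mentioned above can be made concrete: choosing a per-step firing probability p = 1 - exp(-λΔt) makes a discrete geometric clock match an exponential firing time arbitrarily well as the step Δt shrinks. A small sketch:

```python
import math

def geometric_firing_prob(lam, dt):
    """Per-step firing probability of a geometric clock with step dt that
    matches an exponential firing time with rate lam."""
    return 1.0 - math.exp(-lam * dt)

def mean_firing_time(lam, dt):
    # Mean of the geometric firing time in time units: dt / p,
    # which tends to the exponential mean 1/lam as dt -> 0.
    return dt / geometric_firing_prob(lam, dt)
```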
KAM tori and whiskered invariant tori for non-autonomous systems
NASA Astrophysics Data System (ADS)
Canadell, Marta; de la Llave, Rafael
2015-08-01
We consider non-autonomous dynamical systems which converge to autonomous (or periodic) systems exponentially fast in time. Such systems appear naturally as models of many physical processes affected by external pulses. We introduce definitions of non-autonomous invariant tori and non-autonomous whiskered tori and their invariant manifolds, and we prove their persistence under small perturbations, smooth dependence on parameters, and several geometric properties (if the systems are Hamiltonian, the tori are Lagrangian manifolds). We note that such definitions are problematic for general time-dependent systems, but we show that they are unambiguous for systems converging exponentially fast to autonomous ones. The proof of persistence relies only on a standard Implicit Function Theorem in Banach spaces; it requires neither that the rotations in the tori be Diophantine nor that the systems we consider preserve any geometric structure. We only require that the autonomous system preserves these objects. In particular, when the autonomous system is integrable, we obtain the persistence of tori with rational rotation vectors. We also discuss fast and efficient algorithms for their computation. The method also applies to infinite-dimensional systems which define a good evolution, e.g. PDEs. When the systems considered are Hamiltonian, we show that the time-dependent invariant tori are isotropic; hence, the invariant tori of maximal dimension are Lagrangian manifolds. We also obtain that the (un)stable manifolds of whiskered tori are Lagrangian manifolds. We include a comparison with the more global theory developed in Blazevski and de la Llave (2011).
Exponential order statistic models of software reliability growth
NASA Technical Reports Server (NTRS)
Miller, D. R.
1985-01-01
Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto logarithmic, and power-law models are all special cases of exponential order statistic models, but many other examples exist as well. Various characterizations, properties, and examples of this class of models are developed and presented.
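A minimal sketch of sampling one realization of an exponential order statistic model, where each latent fault has its own exponential detection rate (the rates below are illustrative); identical rates correspond, roughly, to the Jelinski-Moranda special case:

```python
import random

def eos_failure_times(rates, seed=42):
    """Draw one realization of an exponential order statistic (EOS) model:
    each fault i has an exponential detection time with its own rate;
    sorting the draws yields the observed failure times."""
    rng = random.Random(seed)
    return sorted(rng.expovariate(r) for r in rates)
```

For example, `eos_failure_times([1.0] * 10)` gives a realization with ten identically distributed faults.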
NASA Astrophysics Data System (ADS)
Cao, Jinde; Song, Qiankun
2006-07-01
In this paper, the exponential stability problem is investigated for a class of Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays. By using analysis methods, inequality techniques, and the properties of an M-matrix, several novel sufficient conditions ensuring the existence, uniqueness, and global exponential stability of the equilibrium point are derived. Moreover, the exponential convergence rate is estimated. The obtained results are less restrictive than those given in the earlier literature; the assumptions of boundedness and differentiability of the activation functions and of differentiability of the time-varying delays are removed. Two examples with their simulations are given to show the effectiveness of the obtained results.
NASA Astrophysics Data System (ADS)
Li, Kelin
2010-02-01
In this article, a class of impulsive bidirectional associative memory (BAM) fuzzy cellular neural networks (FCNNs) with time-varying delays is formulated and investigated. By employing delay differential inequalities and M-matrix theory, some sufficient conditions ensuring the existence, uniqueness, and global exponential stability of the equilibrium point for impulsive BAM FCNNs with time-varying delays are obtained. In particular, a precise estimate of the exponential convergence rate is provided, which depends on the system parameters and the impulsive perturbation intensity. It is believed that these results are significant and useful for the design and applications of BAM FCNNs. An example is given to show the effectiveness of the results obtained here.
Dao, Hoang Lan; Aljunid, Syed Abdullah; Maslennikov, Gleb; Kurtsiefer, Christian
2012-08-01
We report on a simple method to prepare optical pulses with an exponentially rising envelope on the time scale of a few ns. The scheme is based on the exponential transfer function of a fast transistor, which generates an exponentially rising envelope that is transferred first onto a radio-frequency carrier and then onto a coherent cw laser beam with an electro-optical phase modulator. The temporally shaped sideband is then extracted with an optical resonator and can be used to efficiently excite a single (87)Rb atom.
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
2016-10-21
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
Zooming in on vibronic structure by lowest-value projection reconstructed 4D coherent spectroscopy
NASA Astrophysics Data System (ADS)
Harel, Elad
2018-05-01
A fundamental goal of chemical physics is an understanding of microscopic interactions in liquids at and away from equilibrium. In principle, this microscopic information is accessible by high-order and high-dimensionality nonlinear optical measurements. Unfortunately, the time required to execute such experiments increases exponentially with the dimensionality, while the signal decreases exponentially with the order of the nonlinearity. Recently, we demonstrated a non-uniform acquisition method based on radial sampling of the time-domain signal [W. O. Hutson et al., J. Phys. Chem. Lett. 9, 1034 (2018)]. The four-dimensional spectrum was then reconstructed by filtered back-projection using an inverse Radon transform. Here, we demonstrate an alternative reconstruction method based on the statistical analysis of different back-projected spectra which results in a dramatic increase in sensitivity and at least a 100-fold increase in dynamic range compared to conventional uniform sampling and Fourier reconstruction. These results demonstrate that alternative sampling and reconstruction methods enable applications of increasingly high-order and high-dimensionality methods toward deeper insights into the vibronic structure of liquids.
Tilted hexagonal post arrays: DNA electrophoresis in anisotropic media
Chen, Zhen; Dorfman, Kevin D.
2013-01-01
Using Brownian dynamics simulations, we show that DNA electrophoresis in a hexagonal array of micron-sized posts changes qualitatively when the applied electric field vector is not coincident with the lattice vectors of the array. DNA electrophoresis in such “tilted” post arrays is superior to the standard “un-tilted” approach; while the time required to achieve a resolution of unity in a tilted post array is similar to that in an un-tilted array at low electric field strengths, this time (i) decreases exponentially with electric field strength in a tilted array and (ii) increases exponentially with electric field strength in an un-tilted array. Although the DNA dynamics in a post array are complicated, the electrophoretic mobility results indicate that the “free path”, i.e., the average distance of ballistic trajectories of point-sized particles launched from random positions in the unit cell until they intersect the next post, is a useful proxy for the detailed DNA trajectories. The analysis of the free path reveals a fundamental connection between the anisotropy of the medium and DNA transport therein that goes beyond simply improving the separation device. PMID:23868490
Using Exponential Smoothing to Specify Intervention Models for Interrupted Time Series.
ERIC Educational Resources Information Center
Mandell, Marvin B.; Bretschneider, Stuart I.
1984-01-01
The authors demonstrate how exponential smoothing can play a role in the identification of the intervention component of an interrupted time-series design model that is analogous to the role that the sample autocorrelation and partial autocorrelation functions serve in the identification of the noise portion of such a model. (Author/BW)
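For readers unfamiliar with the technique, simple exponential smoothing updates a level estimate with a weighted average of the newest observation and the previous level. A minimal sketch:

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing with smoothing constant alpha.

    Returns the smoothed level after each observation; the last value
    serves as the one-step-ahead forecast.
    """
    level = series[0]
    out = [level]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
        out.append(level)
    return out
```

With alpha near 1 the smoother tracks the series closely; with small alpha it emphasizes history, which is what makes it useful for separating an intervention effect from noise.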
Propagating Qualitative Values Through Quantitative Equations
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak
1992-01-01
In most practical problems where traditional numeric simulation is not adequate, one needs to reason about a system with both qualitative and quantitative equations. In this paper, we address the problem of propagating qualitative values, represented as interval values, through quantitative equations. Previous research has produced exponential-time algorithms for approximate solution of the problem, which may not meet the stringent requirements of many real-time applications. This paper advances the state of the art by producing a linear-time algorithm that can propagate a qualitative value through a class of complex quantitative equations exactly, and through arbitrary algebraic expressions approximately. The algorithm was found applicable to the Space Shuttle Reaction Control System model.
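Propagating interval-valued qualitative quantities through quantitative operators can be sketched with a few endpoint rules; each rule costs constant time, consistent with a linear-time pass over an expression (this is a generic interval-arithmetic sketch, not the paper's algorithm):

```python
def interval_add(a, b):
    """[a0, a1] + [b0, b1] = [a0 + b0, a1 + b1]."""
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    """Product interval: min/max over the four endpoint products."""
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def interval_neg(a):
    return (-a[1], -a[0])
```

Applying one such rule per operator node of an expression tree visits each node once, hence linear time in the size of the expression.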
Measurement of the aerosol absorption coefficient with the nonequilibrium process
NASA Astrophysics Data System (ADS)
Li, Liang; Li, Jingxuan; Bai, Hailong; Li, Baosheng; Liu, Shanlin; Zhang, Yang
2018-02-01
Building on the conventional Jamin interferometer, an improved measuring method is proposed in which a polarization-type reentrant Jamin interferometer measures the atmospheric aerosol absorption coefficient via the photothermal effect. The paper studies the relationship between the absorption coefficient of atmospheric aerosol particles and the refractive-index change of the atmosphere. In a Matlab environment, the variation curves of the interferometer output voltage for aerosol samples of different concentrations under stimulated laser irradiation were plotted. The paper also studies the relationship between aerosol concentration and the time required for the photothermal effect to reach equilibrium. The photothermal interferometry results show that this equilibration time increases with the concentration of aerosol particles, and that both the absorption coefficient and the time vary exponentially during the nonequilibrium process.
Application of Krylov exponential propagation to fluid dynamics equations
NASA Technical Reports Server (NTRS)
Saad, Youcef; Semeraro, David
1991-01-01
An application of matrix exponentiation via Krylov subspace projection to the solution of fluid dynamics problems is presented. The main idea is to approximate the operation exp(A)v by means of a projection-like process onto a Krylov subspace. This results in the computation of an exponential matrix-vector product similar to the one above but of much smaller size. Time-integration schemes can then be devised to exploit this basic computational kernel. The motivation of this approach is to provide time-integration schemes that are essentially of an explicit nature but which have good stability properties.
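The kernel exp(A)v can be approximated using only matrix-vector products; the truncated-Taylor sketch below illustrates that kernel (a true Krylov method would instead exponentiate a small projected Hessenberg matrix, but the matvec is the same building block):

```python
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def expm_action(A, v, terms=40):
    """Approximate exp(A) v by the truncated Taylor series
    sum_k A^k v / k!, built entirely from matrix-vector products."""
    result = list(v)
    term = list(v)
    for k in range(1, terms):
        term = [x / k for x in matvec(A, term)]
        result = [r + t for r, t in zip(result, term)]
    return result
```

For the rotation generator A = [[0, 1], [-1, 0]], exp(A) rotates the plane by one radian, which gives an easy correctness check.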
Nathenson, Manuel; Donnelly-Nolan, Julie M.; Champion, Duane E.; Lowenstern, Jacob B.
2007-01-01
Medicine Lake volcano has had 4 eruptive episodes in its postglacial history (since 13,000 years ago) comprising 16 eruptions. Time intervals between events within the episodes are relatively short, whereas time intervals between the episodes are much longer. An updated radiocarbon chronology for these eruptions is presented that uses paleomagnetic data to constrain the choice of calibrated ages. This chronology is used with exponential, Weibull, and mixed-exponential probability distributions to model the data for time intervals between eruptions. The mixed exponential distribution is the best match to the data and provides estimates for the conditional probability of a future eruption given the time since the last eruption. The probability of an eruption at Medicine Lake volcano in the next year from today is 0.00028.
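Given a fitted mixed-exponential distribution of repose intervals, the conditional probability of an eruption follows directly from the survival function. The weights and mean intervals below are illustrative placeholders, not the paper's estimates:

```python
import math

def survival(t, w, mu1, mu2):
    """Mixed-exponential survival: w*exp(-t/mu1) + (1-w)*exp(-t/mu2)."""
    return w * math.exp(-t / mu1) + (1.0 - w) * math.exp(-t / mu2)

def conditional_eruption_prob(t0, dt, w, mu1, mu2):
    """P(eruption within dt | quiet for t0) = 1 - S(t0 + dt) / S(t0)."""
    return 1.0 - survival(t0 + dt, w, mu1, mu2) / survival(t0, w, mu1, mu2)
```

Unlike a single exponential, the mixture is not memoryless: the conditional probability changes with the elapsed repose time t0, which is exactly why the mixed model can encode short within-episode and long between-episode intervals.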
Sanchez-Salas, Rafael; Olivier, Fabien; Prapotnich, Dominique; Dancausa, José; Fhima, Mehdi; David, Stéphane; Secin, Fernando P; Ingels, Alexandre; Barret, Eric; Galiano, Marc; Rozet, François; Cathelineau, Xavier
2016-01-01
Prostate-specific antigen (PSA) doubling time relies on an exponential kinetic pattern. This pattern has never been validated in the setting of intermittent androgen deprivation (IAD). The objective is to analyze the prognostic significance for prostate cancer of recurrent patterns in PSA kinetics in patients undergoing IAD. A retrospective study was conducted on 377 patients treated with IAD. The on-treatment period (ONTP) consisted of gonadotropin-releasing hormone agonist injections combined with an oral androgen receptor antagonist. The off-treatment period (OFTP) began when PSA was lower than 4 ng/ml; the ONTP resumed when PSA was higher than 20 ng/ml. PSA values of each OFTP were fitted with three basic patterns: exponential (PSA(t) = λ·e^(αt)), linear (PSA(t) = a·t), and power law (PSA(t) = a·t^c). Univariate and multivariate Cox regression models analyzed predictive factors for oncologic outcomes. Only 45% of the analyzed OFTPs were exponential; linear and power-law PSA kinetics represented 7.5% and 7.7%, respectively. The remaining fraction of analyzed OFTPs (40%) exhibited complex kinetics. Exponential PSA kinetics during the first OFTP was significantly associated with worse oncologic outcome. The estimated 10-year cancer-specific survival (CSS) was 46% for exponential versus 80% for nonexponential PSA kinetic patterns. The corresponding 10-year probabilities of castration-resistant prostate cancer (CRPC) were 69% and 31% for the two patterns, respectively. Limitations include the retrospective design and mixed indications for IAD. PSA kinetics fitted an exponential pattern in approximately half of the OFTPs. An exponential PSA kinetic during the first OFTP was associated with a shorter time to CRPC and worse CSS. © 2015 Wiley Periodicals, Inc.
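The three candidate patterns can be compared by fitting each on a suitably transformed scale and ranking residuals; the crude least-squares sketch below is illustrative, not the authors' fitting procedure:

```python
import math

def _lsq_line(xs, ys):
    # Ordinary least squares for y = a + b*x; returns (a, b).
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b

def classify_psa_kinetics(t, psa):
    """Label a PSA series 'exponential', 'power', or 'linear' by least
    squares on transformed scales, ranked by residual sum of squares.
    Requires strictly positive times and PSA values."""
    fits = {}
    a, b = _lsq_line(t, [math.log(p) for p in psa])      # log PSA vs t
    fits['exponential'] = [math.exp(a + b * x) for x in t]
    a, b = _lsq_line([math.log(x) for x in t],
                     [math.log(p) for p in psa])         # log-log scale
    fits['power'] = [math.exp(a) * x ** b for x in t]
    c = sum(x * p for x, p in zip(t, psa)) / sum(x * x for x in t)
    fits['linear'] = [c * x for x in t]                  # through origin
    sse = {name: sum((f - p) ** 2 for f, p in zip(fit, psa))
           for name, fit in fits.items()}
    return min(sse, key=sse.get)
```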
On the performance of exponential integrators for problems in magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas; Tokman, Mayya; Loffeld, John
2017-02-01
Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
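A scalar exponential-Euler step illustrates the core idea of exponential integrators: the stiff linear part is integrated exactly through the (here scalar) exponential, while the remaining forcing is weighted by a phi-function. EPIRK-type methods generalize this to large Jacobians and higher order; this is only a sketch:

```python
import math

def exponential_euler(lam, g, y0, dt, steps):
    """Exponential-Euler integration of y' = lam*y + g(t).

    exp(lam*dt) treats the linear part exactly (no step-size stability
    limit from stiffness); phi1 = (exp(z) - 1)/z weights the forcing.
    """
    e = math.exp(lam * dt)
    phi1 = (e - 1.0) / (lam * dt)
    y, t = y0, 0.0
    for _ in range(steps):
        y = e * y + dt * phi1 * g(t)
        t += dt
    return y
```

With no forcing the scheme is exact for any step size, unlike explicit Euler, which would be unstable here for lam*dt < -2.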
Understanding quantum tunneling using diffusion Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.
2018-03-01
In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1 /Δ2 , where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1 /Δ , i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.
Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre
2003-03-01
A solution for discrete multi-exponential analysis of T(2) relaxation decay curves obtained under current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a nonlinear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.
Possible stretched exponential parametrization for humidity absorption in polymers.
Hacinliyan, A; Skarlatos, Y; Sahin, G; Atak, K; Aybar, O O
2009-04-01
Polymer thin films have irregular transient current characteristics under constant voltage. In hydrophilic and hydrophobic polymers, the irregularity is also known to depend on the humidity absorbed by the polymer sample. Different stretched exponential models are studied and it is shown that the absorption of humidity as a function of time can be adequately modelled by a class of these stretched exponential absorption models.
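A stretched-exponential (Kohlrausch) model can be sketched and fitted crudely by a grid search over the stretching exponent; the procedure below is purely illustrative and is not the paper's fitting method:

```python
import math

def stretched_exp(t, tau, beta):
    """Kohlrausch stretched exponential exp(-(t/tau)**beta);
    beta = 1 recovers a plain exponential."""
    return math.exp(-((t / tau) ** beta))

def fit_beta(ts, ys, tau, betas):
    """Crude grid search for the stretching exponent beta."""
    def sse(b):
        return sum((stretched_exp(t, tau, b) - y) ** 2
                   for t, y in zip(ts, ys))
    return min(betas, key=sse)
```

Absorption data would use the complementary form 1 - exp(-(t/tau)**beta) for uptake rising toward saturation; the fitting idea is identical.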
Design of a 9-loop quasi-exponential waveform generator
NASA Astrophysics Data System (ADS)
Banerjee, Partha; Shukla, Rohit; Shyam, Anurag
2015-12-01
In an under-damped L-C-R series circuit, the current follows a damped sinusoidal waveform. But if a number of sinusoidal waveforms of decreasing time period, generated in L-C-R circuits, are combined within the first quarter cycle, a quasi-exponential output current waveform can be achieved. In an L-C-R series circuit, a quasi-exponential current waveform exhibits a rising current derivative and thereby finds many applications in pulsed power. Here, we describe the design and experimental details of a 9-loop quasi-exponential waveform generator, including the design of its magnetic switches. In the experiment, an output current of 26 kA has been achieved, and we show how well the experimentally obtained output current profile matches the numerically computed output.
Wang, Bo; Anthony, Stephen M; Bae, Sung Chul; Granick, Steve
2009-09-08
We describe experiments using single-particle tracking in which mean-square displacement is simply proportional to time (Fickian), yet the distribution of displacement probability is not Gaussian as should be expected of a classical random walk but, instead, is decidedly exponential for large displacements, the decay length of the exponential being proportional to the square root of time. The first example is when colloidal beads diffuse along linear phospholipid bilayer tubes whose radius is the same as that of the beads. The second is when beads diffuse through entangled F-actin networks, bead radius being less than one-fifth of the actin network mesh size. We explore the relevance to dynamic heterogeneity in trajectory space, which has been extensively discussed regarding glassy systems. Data for the second system might suggest activated diffusion between pores in the entangled F-actin networks, in the same spirit as activated diffusion and exponential tails observed in glassy systems. But the first system shows exceptionally rapid diffusion, nearly as rapid as for identical colloids in free suspension, yet still displaying an exponential probability distribution as in the second system. Thus, although the exponential tail is reminiscent of glassy systems, in fact, these dynamics are exceptionally rapid. We also compare with particle trajectories that are at first subdiffusive but Fickian at the longest measurement times, finding that displacement probability distributions fall onto the same master curve in both regimes. The need is emphasized for experiments, theory, and computer simulation to allow definitive interpretation of this simple and clean exponential probability distribution.
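The observed statistics can be written down directly: a Laplace (two-sided exponential) displacement distribution whose decay length grows as the square root of time still has a mean-square displacement linear in time, i.e., Fickian. A sketch with an illustrative diffusivity:

```python
import math

def displacement_pdf(x, t, D):
    """Laplace displacement distribution with decay length
    lam = sqrt(D*t).  Its variance is 2*D*t, so the mean-square
    displacement remains linear in time despite the non-Gaussian tails."""
    lam = math.sqrt(D * t)
    return math.exp(-abs(x) / lam) / (2.0 * lam)
```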
Global exponential stability of BAM neural networks with time-varying delays: The discrete-time case
NASA Astrophysics Data System (ADS)
Raja, R.; Marshal Anthoni, S.
2011-02-01
This paper deals with the problem of stability analysis for a class of discrete-time bidirectional associative memory (BAM) neural networks with time-varying delays. By employing a Lyapunov functional and the linear matrix inequality (LMI) approach, a new sufficient condition is proposed for the global exponential stability of discrete-time BAM neural networks. The proposed LMI-based results can be easily checked with the LMI Control Toolbox. Moreover, an example is provided to demonstrate the effectiveness of the proposed method.
Objective-function hybridization in adjoint seismic tomography
NASA Astrophysics Data System (ADS)
Yuan, Yanhua O.; Bozdağ, Ebru; Simons, Frederik J.; Gao, Fuchun
2017-04-01
Seismic tomography is at the threshold of a new era of massive data sets. Improving the resolution and accuracy of the estimated Earth structure by assimilating as much information as possible from every seismogram, remains a challenge. We propose the use of the "exponentiated phase", a type of measurement that robustly captures the information contained in the variation of phase with time in the seismogram. We explore its performance in both conventional and double-difference (Yuan, Simons & Tromp, Geophys. J. Int., 2016) adjoint seismic tomography. We introduce a hybrid approach to combine different objective functions, taking advantage of both conventional and our new measurements. We initially focus on phase measurements in global tomography. Cross-correlation measurements are generally tailored by window selection algorithms, such as FLEXWIN, to balance amplitude differences between seismic phases. However, within selection windows, such measurements still favor the larger-amplitude phases. It is also difficult to select all usable portions of the seismogram in an optimal way, such that much information may be lost, particularly the scattered waves. Time-continuous phase measurements, which associate a time shift with each point in time, have the potential to extract information from every wiggle in the seismogram without cutting it into small pieces. One such type of measurement is the instantaneous phase (Bozdağ, Trampert & Tromp, Geophys. J. Int., 2011), which thus far has not been implemented in realistic seismic-tomography experiments, given how difficult the computation of phase can sometimes be. The exponentiated phase, on the other hand, is computed on the basis of the normalized analytic signal, does not need an explicit measure of phase, and is thus much easier to implement, and more practical for real-world applications. Both types of measurements carry comparable structural information when direct measurements of the phase are not wrapped.
To deal with cycle skips, we use the exponentiated phase to take into account relatively small-magnitude scattered waves at long periods, while using cross-correlation measurements on windows determined by FLEXWIN to select distinct body-wave arrivals without complicating measurements due to non-linearities at short periods. We present synthetic experiments to show how exponentiated-phase measurements, cross-correlation measurements, and their hybridization affect tomographic results. We demonstrate the use of hybrid measurements on teleseismic seismograms, in which surface waves are prominent, for continental and global seismic imaging. It is clear that the exponentiated-phase measurements behave well and provide a better representation of the smaller phases in the adjoint sources required for the computation of the misfit gradient. The combination of two different types of phase measurements in a hybrid approach moves us towards using all of the available information in a data set, addressing data quality and measurement challenges simultaneously, while negligibly affecting computation time.
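In symbols (a sketch based on the abstract's description; the authors' exact normalization may differ), the exponentiated phase is the normalized analytic signal, and a misfit can be formed from its difference between observed and synthetic seismograms:

```latex
% Analytic signal via the Hilbert transform H, and the exponentiated phase:
\tilde{s}(t) = s(t) + i\,\mathcal{H}[s](t), \qquad
e^{i\phi(t)} = \frac{\tilde{s}(t)}{\lvert\tilde{s}(t)\rvert}.
% One possible L2 misfit between observed (obs) and synthetic (syn) traces:
\chi = \frac{1}{2}\int \left\lvert
  \frac{\tilde{s}_{\mathrm{obs}}(t)}{\lvert\tilde{s}_{\mathrm{obs}}(t)\rvert}
  - \frac{\tilde{s}_{\mathrm{syn}}(t)}{\lvert\tilde{s}_{\mathrm{syn}}(t)\rvert}
\right\rvert^{2}\,\mathrm{d}t.
```

Because the normalized analytic signal has unit magnitude everywhere, no explicit (and possibly wrapped) phase needs to be computed, which is the practical advantage claimed above.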
Verification of the exponential model of body temperature decrease after death in pigs.
Kaliszan, Michal; Hauser, Roman; Kaliszan, Roman; Wiczling, Paweł; Buczyñski, Janusz; Penkowski, Michal
2005-09-01
The authors have conducted a systematic study in pigs to verify the models of post-mortem body temperature decrease currently employed in forensic medicine. Twenty-four-hour automatic temperature recordings were performed at four body sites starting 1.25 h after the pigs were killed in an industrial slaughterhouse under typical environmental conditions (19.5-22.5 degrees C). The animals had been randomly selected under a regular manufacturing process. The temperature-decrease time plots, drawn starting 75 min after death for the eyeball, the orbit soft tissues, the rectum, and muscle tissue, were found to fit the single-exponential thermodynamic model originally proposed by H. Rainy in 1868. In view of the actual intersubject variability, the addition of a second exponential term to the model was demonstrated to be statistically insignificant. Therefore, the two-exponential model for death time estimation frequently recommended in the forensic medicine literature, even if theoretically substantiated for individual test cases, provides no advantage as regards the reliability of estimation in an actual case. The claim that the precision of time-of-death estimation can be improved by reconstructing an individual curve from two body-temperature measurements taken 1 h apart, or from continuous measurement over a longer time (about 4 h), was also shown to be incorrect. It was demonstrated that the reported increase of precision of time-of-death estimation due to use of a multiexponential model, with individual exponential terms to account for the cooling rate of the specific body sites separately, is artifactual. The results of this study support the use of the eyeball and/or the orbit soft tissues as temperature measuring sites at times shortly after death. A single-exponential model applied to the eyeball cooling has been shown to provide a very precise estimation of the time of death up to approximately 13 h after death.
For the period thereafter, a better estimation of the time of death is obtained from temperature data collected from the muscles or the rectum.
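The single-exponential (Newtonian) cooling law is easily inverted for the post-mortem interval. The rate constant and initial core temperature below are illustrative assumptions only; in practice they depend on the measurement site (eyeball vs. rectum) and the environmental conditions:

```python
import math

def body_temp(t, t_ambient, t0=37.0, k=0.12):
    """Single-exponential cooling: T(t) = T_amb + (T0 - T_amb)*exp(-k*t)."""
    return t_ambient + (t0 - t_ambient) * math.exp(-k * t)

def time_since_death(t_body, t_ambient, t0=37.0, k=0.12):
    """Invert the cooling law for the elapsed time t (hours, if k is
    per hour).  t0 and k are illustrative placeholders, not fitted values."""
    return math.log((t0 - t_ambient) / (t_body - t_ambient)) / k
```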
Global exponential stability of BAM neural networks with time-varying delays and diffusion terms
NASA Astrophysics Data System (ADS)
Wan, Li; Zhou, Qinghua
2007-11-01
The stability of bidirectional associative memory (BAM) neural networks with time-varying delays and diffusion terms is considered. By using the variation-of-parameters method and inequality techniques, delay-independent sufficient conditions are established that guarantee the uniqueness and global exponential stability of the equilibrium solution of such networks.
Conditional optimal spacing in exponential distribution.
Park, Sangun
2006-12-01
In this paper, we propose the conditional optimal spacing defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take an example of exponential distribution, and provide a simple method of finding the conditional optimal spacing.
Moghimi, Fatemeh Hoda; Cheung, Michael; Wickramasinghe, Nilmini
2013-01-01
Healthcare is an information rich industry where successful outcomes require the processing of multi-spectral data and sound decision making. The exponential growth of data and big data issues coupled with a rapid increase of service demands in healthcare contexts today, requires a robust framework enabled by IT (information technology) solutions as well as real-time service handling in order to ensure superior decision making and successful healthcare outcomes. Such a context is appropriate for the application of real time intelligent risk detection decision support systems using predictive analytic techniques such as data mining. To illustrate the power and potential of data science technologies in healthcare decision making scenarios, the use of an intelligent risk detection (IRD) model is proffered for the context of Congenital Heart Disease (CHD) in children, an area which requires complex high risk decisions that need to be made expeditiously and accurately in order to ensure successful healthcare outcomes.
Exponential stability preservation in semi-discretisations of BAM networks with nonlinear impulses
NASA Astrophysics Data System (ADS)
Mohamad, Sannay; Gopalsamy, K.
2009-01-01
This paper demonstrates the reliability of a discrete-time analogue in preserving the exponential convergence of a bidirectional associative memory (BAM) network that is subject to nonlinear impulses. The analogue, derived from a semi-discretisation technique with a fixed time-step, is treated as a discrete-time dynamical system and its exponential convergence towards an equilibrium state is studied. Thereby, a family of sufficiency conditions governing the network parameters and the impulse magnitude and frequency is obtained for the convergence. As special cases, our results yield those corresponding to non-impulsive discrete-time BAM networks, as well as to continuous-time (impulsive and non-impulsive) systems. A relation between the Lyapunov exponent of the non-impulsive system and that of the impulsive system, involving the size of the impulses and the inter-impulse intervals, is obtained.
How bootstrap can help in forecasting time series with more than one seasonal pattern
NASA Astrophysics Data System (ADS)
Cordeiro, Clara; Neves, M. Manuela
2012-09-01
The search for the future is an appealing challenge in time series analysis. The diversity of forecasting methodologies is inevitable and still expanding. Exponential smoothing methods are the launch platform for modelling and forecasting in time series analysis. Recently this methodology has been combined with bootstrapping, revealing good performance. The Boot.EXPOS algorithm, combining exponential smoothing and bootstrap methodologies, has shown promising results for forecasting time series with one seasonal pattern. For series with more than one seasonal pattern, the double seasonal Holt-Winters methods and the corresponding exponential smoothing methods were developed. The new challenge was to combine these seasonal methods with the bootstrap and to carry over a resampling scheme similar to that used in the Boot.EXPOS procedure. The performance of this partnership is illustrated for some well-known data sets available in software.
Choice of time-scale in Cox's model analysis of epidemiologic cohort data: a simulation study.
Thiébaut, Anne C M; Bénichou, Jacques
2004-12-30
Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as for clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent from age. It could be severe in case of substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data. 2004 John Wiley & Sons, Ltd.
Exponential Stellar Disks in Low Surface Brightness Galaxies: A Critical Test of Viscous Evolution
NASA Astrophysics Data System (ADS)
Bell, Eric F.
2002-12-01
Viscous redistribution of mass in Milky Way-type galactic disks is an appealing way of generating an exponential stellar profile over many scale lengths, almost independent of initial conditions, requiring only that the viscous timescale and star formation timescale are approximately equal. However, galaxies with solid-body rotation curves cannot undergo viscous evolution. Low surface brightness (LSB) galaxies have exponential surface brightness profiles, yet have slowly rising, nearly solid-body rotation curves. Because of this, viscous evolution may be inefficient in LSB galaxies: the exponential profiles, instead, would give important insight into initial conditions for galaxy disk formation. Using star formation laws from the literature and tuning the efficiency of viscous processes to reproduce an exponential stellar profile in Milky Way-type galaxies, I test the role of viscous evolution in LSB galaxies. Under the conservative and not unreasonable condition that LSB galaxies are gravitationally unstable for at least a part of their lives, I find that it is impossible to rule out a significant role for viscous evolution. This type of model still offers an attractive way of producing exponential disks, even in LSB galaxies with slowly rising rotation curves.
The mechanism of double-exponential growth in hyper-inflation
NASA Astrophysics Data System (ADS)
Mizuno, T.; Takayasu, M.; Takayasu, H.
2002-05-01
Analyzing historical data of price indices, we find an extraordinary growth phenomenon in several examples of hyper-inflation in which, price changes are approximated nicely by double-exponential functions of time. In order to explain such behavior we introduce the general coarse-graining technique in physics, the Monte Carlo renormalization group method, to the price dynamics. Starting from a microscopic stochastic equation describing dealers’ actions in open markets, we obtain a macroscopic noiseless equation of price consistent with the observation. The effect of auto-catalytic shortening of characteristic time caused by mob psychology is shown to be responsible for the double-exponential behavior.
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
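The abstract states the idea but not the algorithm. The sketch below is one plausible non-iterative scheme in the same spirit (an integral-linearization trick, not necessarily the author's exact method): integrating the model gives y = B*S - B*C*t + (A + C), where S is the running integral of y, so a single linear least-squares solve recovers all three parameters without iteration or starting guesses.

```python
import numpy as np

def fit_exponential(t, y):
    """Non-iterative fit of y = A*exp(B*t) + C via integral linearization.

    Integrating the model gives y = B*S - B*C*t + (A + C), where S is the
    cumulative integral of y, so one linear least-squares solve recovers
    all three parameters: no iteration, no starting guess needed.
    """
    # cumulative trapezoidal integral of y over t
    S = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))
    M = np.column_stack([S, t, np.ones_like(t)])
    a, b, c = np.linalg.lstsq(M, y, rcond=None)[0]
    B = a
    C = -b / a
    A = c - C
    return A, B, C

# synthetic decay with A = 2, B = -0.5, C = 1
t = np.linspace(0.0, 10.0, 200)
y = 2.0 * np.exp(-0.5 * t) + 1.0
A, B, C = fit_exponential(t, y)
```

The only error source here is the trapezoidal quadrature, so on densely sampled data the recovered parameters are accurate to a fraction of a percent.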
NASA Technical Reports Server (NTRS)
Cogley, A. C.; Borucki, W. J.
1976-01-01
When incorporating formulations of instantaneous solar heating or photolytic rates as functions of altitude and sun angle into long range forecasting models, it may be desirable to replace the time integrals by daily average rates that are simple functions of latitude and season. This replacement is accomplished by approximating the integral over the solar day by a pure exponential. This gives a daily average rate as a multiplication factor times the instantaneous rate evaluated at an appropriate sun angle. The accuracy of the exponential approximation is investigated by a sample calculation using an instantaneous ozone heating formulation available in the literature.
The Impact of Accelerating Faster than Exponential Population Growth on Genetic Variation
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-01-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models’ effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times. PMID:24381333
Multiplexed memory-insensitive quantum repeaters.
Collins, O A; Jenkins, S D; Kuzmich, A; Kennedy, T A B
2007-02-09
Long-distance quantum communication via distant pairs of entangled quantum bits (qubits) is the first step towards secure message transmission and distributed quantum computing. To date, the most promising proposals require quantum repeaters to mitigate the exponential decrease in communication rate due to optical fiber losses. However, these are exquisitely sensitive to the lifetimes of their memory elements. We propose a multiplexing of quantum nodes that should enable the construction of quantum networks that are largely insensitive to the coherence times of the quantum memory elements.
Erik A. Lilleskov
2017-01-01
Fungal respiration contributes substantially to ecosystem respiration, yet its field temperature response is poorly characterized. I hypothesized that at diurnal time scales, temperature-respiration relationships would be better described by unimodal than exponential models, and at longer time scales both Q10 and mass-specific respiration at 10 °...
Déjardin, P
2013-08-30
The flow conditions in normal mode asymmetric flow field-flow fractionation are determined to approach the high retention limit with the requirement d≪l≪w, where d is the particle diameter, l the characteristic length of the sample exponential distribution and w the channel height. The optimal entrance velocity is determined from the solute characteristics, the channel geometry (exponential to rectangular) and the membrane properties, according to a model providing the velocity fields all over the cell length. In addition, a method is proposed for in situ determination of the channel height. Copyright © 2013 Elsevier B.V. All rights reserved.
A statistical study of decaying kink oscillations detected using SDO/AIA
NASA Astrophysics Data System (ADS)
Goddard, C. R.; Nisticò, G.; Nakariakov, V. M.; Zimovets, I. V.
2016-01-01
Context. Despite intensive studies of kink oscillations of coronal loops in the last decade, a large-scale statistically significant investigation of the oscillation parameters has not been made using data from the Solar Dynamics Observatory (SDO). Aims: We carry out a statistical study of kink oscillations using extreme ultraviolet imaging data from a previously compiled catalogue. Methods: We analysed 58 kink oscillation events observed by the Atmospheric Imaging Assembly (AIA) on board SDO during its first four years of operation (2010-2014). Parameters of the oscillations, including the initial apparent amplitude, period, length of the oscillating loop, and damping are studied for 120 individual loop oscillations. Results: Analysis of the initial loop displacement and oscillation amplitude leads to the conclusion that the initial loop displacement prescribes the initial amplitude of oscillation in general. The period is found to scale with the loop length, and a linear fit of the data cloud gives a kink speed of Ck = (1330 ± 50) km s^-1. The main body of the data corresponds to kink speeds in the range Ck = (800-3300) km s^-1. Measurements of 52 exponential damping times were made, and it was noted that at least 21 of the damping profiles may be better approximated by a combination of non-exponential and exponential profiles rather than a purely exponential damping envelope. There are nine additional cases where the profile appears to be purely non-exponential and no damping time was measured. A scaling of the exponential damping time with the period is found, following the previously established linear scaling between these two parameters.
Velocity storage contribution to vestibular self-motion perception in healthy human subjects.
Bertolini, G; Ramat, S; Laurens, J; Bockisch, C J; Marti, S; Straumann, D; Palla, A
2011-01-01
Self-motion perception after a sudden stop from a sustained rotation in darkness lasts approximately as long as reflexive eye movements. We hypothesized that, after an angular velocity step, self-motion perception and reflexive eye movements are driven by the same vestibular pathways. In 16 healthy subjects (25-71 years of age), perceived rotational velocity (PRV) and the vestibulo-ocular reflex (rVOR) after sudden decelerations (90°/s^2) from constant-velocity (90°/s) earth-vertical axis rotations were simultaneously measured (PRV reported by hand-lever turning; rVOR recorded by search coils). Subjects were upright (yaw) or 90° left-ear-down (pitch). After both yaw and pitch decelerations, PRV rose rapidly and showed a plateau before decaying. In contrast, slow-phase eye velocity (SPV) decayed immediately after the initial increase. SPV and PRV were fitted with the sum of two exponentials: one time constant accounting for the semicircular canal (SCC) dynamics and one accounting for a central process known as the velocity storage mechanism (VSM). Parameters were constrained by requiring equal SCC and VSM time constants for SPV and PRV; the gains weighting the two exponential functions were free to change. SPV (variance-accounted-for: 0.85 ± 0.10) and PRV (variance-accounted-for: 0.86 ± 0.07) were accurately fitted, showing that the differences between the SPV and PRV curves can be explained by a greater relative weight of the VSM in PRV compared with SPV (twofold for yaw, threefold for pitch). These results support our hypothesis that self-motion perception after angular velocity steps is driven by the same central vestibular processes as reflexive eye movements and that no additional mechanisms are required to explain the perceptual dynamics.
Photocounting distributions for exponentially decaying sources.
Teich, M C; Card, H C
1979-05-01
Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
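A quick Monte Carlo sketch can illustrate the setup described. All numerical values, and the choice of a uniform start window [0, S], are illustrative assumptions rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameters: pulse I(t) = I0*exp(-t/tau), counting window T,
# start time uniform on [0, S] (the window S is an assumed choice)
I0, tau, T, S = 50.0, 1.0, 0.5, 3.0

def window_mean(s):
    # integrated intensity over [s, s+T]: the Poisson mean for that window
    return I0 * tau * np.exp(-s / tau) * (1.0 - np.exp(-T / tau))

starts = rng.uniform(0.0, S, 100_000)
counts = rng.poisson(window_mean(starts))

# closed-form count mean: average of window_mean over the uniform start
mean_analytic = I0 * tau * (1 - np.exp(-T / tau)) * (tau / S) * (1 - np.exp(-S / tau))
```

Note that the simulated counts are super-Poissonian (variance exceeds the mean) because the Poisson rate is itself random, which is exactly the structure the exact distributions in the paper capture.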
NASA Astrophysics Data System (ADS)
Hashimoto, Chihiro; Panizza, Pascal; Rouch, Jacques; Ushiki, Hideharu
2005-10-01
A new analytical concept is applied to the kinetics of the shrinking process of poly(N-isopropylacrylamide) (PNIPA) gels. When PNIPA gels are put into hot water above the critical temperature, two-step shrinking is observed, and the secondary shrinking of the gels is fitted well by a stretched exponential function. The exponent β characterizing the stretched exponential is always higher than one, although there are few analytical frameworks for stretched exponential functions with β > 1. As a new interpretation of this function, we propose a superposition of step (Heaviside) functions, from which a new distribution function of characteristic times is deduced.
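As an illustration of how such a compressed (β > 1) stretched exponential can be fitted, the double-log linearization below recovers β and τ from clean synthetic data; this is a generic textbook technique, not the authors' analysis, and real shrinking data would need care with noise and the limits y → 0 and t → 0:

```python
import numpy as np

def fit_stretched_exponential(t, y):
    """Fit y = exp(-(t/tau)**beta) by double-log linearization:
    ln(-ln y) = beta*ln t - beta*ln tau is linear in ln t."""
    X = np.log(t)
    Y = np.log(-np.log(y))
    beta, intercept = np.polyfit(X, Y, 1)
    tau = np.exp(-intercept / beta)
    return beta, tau

# synthetic shrinking curve with beta > 1 (compressed exponential)
t = np.linspace(0.1, 5.0, 100)
y = np.exp(-(t / 2.0) ** 1.5)
beta, tau = fit_stretched_exponential(t, y)
```

A recovered β above one signals a decay that starts slower and then drops faster than any single exponential, consistent with the two-step shrinking described above.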
A Blueprint for Demonstrating Quantum Supremacy with Superconducting Qubits
NASA Technical Reports Server (NTRS)
Kechedzhi, Kostyantyn
2018-01-01
Long coherence times and high-fidelity control recently achieved in scalable superconducting circuits have paved the way for a growing number of experimental studies of many-qubit quantum coherent phenomena in these devices. Although full implementation of quantum error correction and fault-tolerant quantum computation remains a challenge, near-term pre-error-correction devices could allow new fundamental experiments despite the inevitable accumulation of errors. One such open question, foundational for quantum computing, is achieving so-called quantum supremacy: an experimental demonstration of a computational task that takes polynomial time on a quantum computer whereas the best classical algorithm would require exponential time and/or resources. It is possible to formulate such a task for a quantum computer consisting of fewer than 100 qubits. The computational task we consider is to provide approximate samples from a non-trivial quantum distribution. This is a generalization to superconducting circuits of the ideas behind the boson sampling protocol for quantum optics introduced by Aaronson and Arkhipov. In this presentation we discuss a proof-of-principle demonstration of such a sampling task on a 9-qubit chain of superconducting gmon qubits developed by Google. We discuss theoretical analysis of the driven evolution of the device, resulting in output approximating samples from a uniform distribution in the Hilbert space, a quantum chaotic state. We analyze quantum chaotic characteristics of the output of the circuit and the time required to generate a sufficiently complex quantum distribution. We demonstrate that classical simulation of the sampling output requires exponential resources by connecting the task of calculating the output amplitudes to the sign problem of the quantum Monte Carlo method.
We also discuss the detailed theoretical modeling required to achieve high fidelity control and calibration of the multi-qubit unitary evolution in the device. We use a novel cross-entropy statistical metric as a figure of merit to verify the output and calibrate the device controls. Finally, we demonstrate the statistics of the wave function amplitudes generated on the 9-gmon chain and verify the quantum chaotic nature of the generated quantum distribution. This verifies the implementation of the quantum supremacy protocol.
NASA Astrophysics Data System (ADS)
Van Mieghem, P.; van de Bovenkamp, R.
2013-03-01
Most studies on susceptible-infected-susceptible epidemics in networks implicitly assume Markovian behavior: the time to infect a direct neighbor is exponentially distributed. Much effort so far has been devoted to characterizing and precisely computing the epidemic threshold in susceptible-infected-susceptible Markovian epidemics on networks. Here, we report the rather dramatic effect of a nonexponential infection time (while still assuming an exponential curing time) on the epidemic threshold by considering Weibullean infection times with the same mean, but different power exponent α. For three basic classes of graphs, the Erdős-Rényi random graph, scale-free graphs and lattices, the average steady-state fraction of infected nodes is simulated, from which the epidemic threshold is deduced. For all graph classes, the epidemic threshold increases significantly with the power exponent α. Hence, real epidemics that violate the exponential or Markovian assumption can behave markedly differently than anticipated based on Markov theory.
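The mean-matched Weibull construction can be sketched as follows (a minimal illustration, not the authors' simulation code): fixing the scale b via E[T] = b*Γ(1 + 1/α) lets distributions with different shape α share one mean infection time, with α = 1 recovering the exponential (Markovian) case.

```python
import numpy as np
from math import gamma

def weibull_infection_times(mean, alpha, size, rng):
    """Weibull infection times with shape alpha and a prescribed mean.

    E[T] = b * Gamma(1 + 1/alpha), so the scale b is set so that
    distributions with different power exponents alpha (alpha = 1 being
    the exponential/Markovian case) share the same mean infection time.
    """
    b = mean / gamma(1.0 + 1.0 / alpha)
    return b * rng.weibull(alpha, size)

rng = np.random.default_rng(0)
# same mean, very different spread: heavy-tailed (0.5), exponential (1), peaked (3)
samples = {a: weibull_infection_times(1.0, a, 200_000, rng) for a in (0.5, 1.0, 3.0)}
```

Because the mean is held fixed, any change in epidemic behavior across α in such a simulation is attributable to the shape of the infection-time distribution alone.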
Ring-Shaped Microlanes and Chemical Barriers as a Platform for Probing Single-Cell Migration.
Schreiber, Christoph; Segerer, Felix J; Wagner, Ernst; Roidl, Andreas; Rädler, Joachim O
2016-05-31
Quantification and discrimination of pharmaceutical and disease-related effects on cell migration requires detailed characterization of single-cell motility. In this context, micropatterned substrates that constrain cells within defined geometries facilitate quantitative readout of locomotion. Here, we study quasi-one-dimensional cell migration in ring-shaped microlanes. We observe bimodal behavior in the form of alternating states of directional migration (run state) and reorientation (rest state). Both states show exponential lifetime distributions with characteristic persistence times, which, together with the cell velocity in the run state, provide a set of parameters that succinctly describe cell motion. By introducing PEGylated barriers of different widths into the lane, we extend this description by quantifying the effects of abrupt changes in substrate chemistry on migrating cells. The transit probability decreases exponentially as a function of barrier width, thus specifying a characteristic penetration depth of the leading lamellipodia. Applying this fingerprint-like characterization of cell motion, we compare different cell lines, and demonstrate that the cancer drug candidate salinomycin affects transit probability and resting time, but not run time or run velocity. Hence, the presented assay allows the assessment of multiple migration-related parameters, permits detailed characterization of cell motility, and has potential applications in cell biology and advanced drug screening.
A simple approach to measure transmissibility and forecast incidence.
Nouvellet, Pierre; Cori, Anne; Garske, Tini; Blake, Isobel M; Dorigatti, Ilaria; Hinsley, Wes; Jombart, Thibaut; Mills, Harriet L; Nedjati-Gilani, Gemma; Van Kerkhove, Maria D; Fraser, Christophe; Donnelly, Christl A; Ferguson, Neil M; Riley, Steven
2018-03-01
Outbreaks of novel pathogens such as SARS, pandemic influenza and Ebola require substantial investments in reactive interventions, with consequent implementation plans sometimes revised on a weekly basis. Therefore, short-term forecasts of incidence are often of high priority. In light of the recent Ebola epidemic in West Africa, a forecasting exercise was convened by a network of infectious disease modellers. The challenge was to forecast unseen "future" simulated data for four different scenarios at five different time points. In a similar method to that used during the recent Ebola epidemic, we estimated current levels of transmissibility, over variable time-windows chosen in an ad hoc way. Current estimated transmissibility was then used to forecast near-future incidence. We performed well within the challenge and often produced accurate forecasts. A retrospective analysis showed that our subjective method for deciding on the window of time with which to estimate transmissibility often resulted in the optimal choice. However, when near-future trends deviated substantially from exponential patterns, the accuracy of our forecasts was reduced. This exercise highlights the urgent need for infectious disease modellers to develop more robust descriptions of processes - other than the widespread depletion of susceptible individuals - that produce non-exponential patterns of incidence. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
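A minimal sketch of this renewal-equation approach (estimate transmissibility R over a recent window, then project forward at constant R) is given below; the function name and the degenerate one-day serial interval in the example are illustrative assumptions, not the authors' implementation:

```python
def forecast_incidence(incidence, serial_interval, window, horizon):
    """Estimate the reproduction number R over a recent time window from
    the renewal equation, then forecast incidence assuming R stays
    constant. The window length is a subjective choice, as noted above.
    """
    w = [x / sum(serial_interval) for x in serial_interval]
    I = [float(x) for x in incidence]

    def force(series, t):
        # expected new cases at time t generated by past cases
        return sum(series[t - s] * w[s - 1]
                   for s in range(1, len(w) + 1) if t - s >= 0)

    times = range(len(I) - window, len(I))
    R = sum(I[t] for t in times) / sum(force(I, t) for t in times)
    for _ in range(horizon):
        I.append(R * force(I, len(I)))
    return R, I[len(incidence):]

# toy example: doubling incidence, degenerate 1-day serial interval
R, forecast = forecast_incidence([1, 2, 4, 8, 16], [1.0], window=3, horizon=2)
```

On the toy doubling series this recovers R = 2 exactly; as the abstract notes, the same scheme degrades when the near-future trend stops being exponential, since constant R forces an exponential projection.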
Vargas, Susana; Millán-Chiu, Blanca E; Arvizu-Medrano, Sofía M; Loske, Achim M; Rodríguez, Rogelio
2017-06-01
A comparison between plate counting (PC) and dynamic light scattering (DLS) is reported. PC is the standard technique to determine bacterial population as a function of time; however, this method has drawbacks, such as the cumbersome preparation and handling of samples, as well as the long time required to obtain results. Alternative methods based on optical density are faster, but do not distinguish viable from non-viable cells. These inconveniences are overcome by using DLS. Two different bacteria strains were considered: Escherichia coli and Staphylococcus aureus. DLS was performed at two different illuminating conditions: continuous and intermittent. By the increment of particle size as a function of time, it was possible to observe cell division and the formation of aggregates containing very few bacteria. The scattered intensity profiles showed the lag phase and the transition to the exponential phase of growth, providing a quantity proportional to viable bacteria concentration. The results revealed a clear and linear correlation in both lag and exponential phase, between the Log 10 (colony-forming units/mL) from PC and the Log 10 of the scattered intensity I s from DLS. These correlations provide a good support to use DLS as an alternative technique to determine bacterial population. Copyright © 2017 Elsevier B.V. All rights reserved.
Scheerans, Christian; Derendorf, Hartmut; Kloft, Charlotte
2008-04-01
The area under the plasma concentration-time curve from time zero to infinity (AUC(0-inf)) is generally considered to be the most appropriate measure of total drug exposure for bioavailability/bioequivalence studies of orally administered drugs. However, the lack of a standardised method for identifying the mono-exponential terminal phase of the concentration-time curve causes variability in the estimated AUC(0-inf). The present investigation introduces a simple method, called the two times t(max) method (TTT method), to reliably identify the mono-exponential terminal phase in the case of oral administration. The new method was tested by Monte Carlo simulation in Excel and compared with the adjusted r squared algorithm (ARS algorithm) frequently used in pharmacokinetic software programs. Statistical diagnostics of three different scenarios, each with 10,000 hypothetical patients, showed that the new method provided unbiased average AUC(0-inf) estimates for orally administered drugs with a monophasic concentration-time curve post maximum concentration. In addition, the TTT method generally provided more precise estimates for AUC(0-inf) compared with the ARS algorithm. It was concluded that the TTT method is a most reasonable tool to be used as a standardised method in pharmacokinetic analysis, especially bioequivalence studies, to reliably identify the mono-exponential terminal phase for orally administered drugs showing a monophasic concentration-time profile.
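A sketch of how such a rule might be applied in practice, under the assumption (one plausible reading of the abstract, not its published algorithm) that every sample later than two times t(max) is taken as the mono-exponential terminal phase:

```python
import numpy as np

def auc_inf_ttt(t, c):
    """AUC(0-inf) with the terminal phase selected by a TTT-style rule:
    all samples later than twice tmax are treated as lying on the
    mono-exponential terminal phase and fitted log-linearly.
    """
    t = np.asarray(t, float)
    c = np.asarray(c, float)
    tmax = t[np.argmax(c)]
    terminal = t > 2.0 * tmax
    lam = -np.polyfit(t[terminal], np.log(c[terminal]), 1)[0]  # rate constant
    auc_tlast = float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t)))  # trapezoids
    return auc_tlast + c[-1] / lam  # extrapolate the exponential tail

# one-compartment oral profile (unit dose/volume): true AUC(0-inf) = 1/ke
ka, ke = 1.5, 0.2
t = np.array([0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, 16, 24], float)
c = (ka / (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))
auc = auc_inf_ttt(t, c)
```

The appeal of such a rule is that it is fully deterministic: unlike iterative point-selection algorithms, two analysts given the same data always select the same terminal points.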
NASA Astrophysics Data System (ADS)
Krugon, Seelam; Nagaraju, Dega
2017-05-01
This work proposes a two-echelon supply chain inventory system in which the manufacturer offers a credit period to the retailer under exponential price-dependent demand, with demand expressed as an exponential function of the retailer's unit selling price. A mathematical model is framed to demonstrate the optimality of the cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. The major objective of the paper is to introduce the trade-credit concept, whereby the retailer may delay payments to the manufacturer. In the first stage, the retailer's and manufacturer's cost expressions are formulated as functions of ordering cost, carrying cost and transportation cost; in the second stage these expressions are combined. A MATLAB program is written to derive the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain, and managerial insights can be drawn from the derived optimality criteria. The findings show that the total cost of the supply chain decreases as the credit period increases under exponential price-dependent demand. To analyse the influence of the model parameters, a parametric analysis is also performed with the help of a numerical example.
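The qualitative behavior can be sketched with a toy single-cycle cost model (a deliberately simplified EOQ-style stand-in, not the paper's two-echelon formulation; all parameter values and the interest-earned term are hypothetical): demand falls exponentially in price, and a longer credit period M lowers the retailer's total cost per unit time.

```python
import numpy as np

# all parameter values below are hypothetical, for illustration only
a, b = 500.0, 0.05            # demand scale and price sensitivity
p = 20.0                      # retailer's unit selling price
K, h, c = 100.0, 0.2, 10.0    # ordering cost, holding rate, unit cost
i_e = 0.12                    # interest earned during the credit period

D = a * np.exp(-b * p)        # exponential price-dependent annual demand

def total_cost(T, M):
    """Retailer's cost per unit time for cycle length T, credit period M:
    ordering + holding costs, less interest earned on sales revenue."""
    return K / T + h * c * D * T / 2.0 - i_e * p * D * M

T_grid = np.linspace(0.01, 2.0, 2000)
cost_by_credit = {M: total_cost(T_grid, M).min() for M in (0.0, 0.1, 0.2)}
T_star = T_grid[np.argmin(total_cost(T_grid, 0.2))]
```

In this simplified form the credit term is independent of T, so the optimal cycle time reduces to the classical EOQ value sqrt(2K/(hcD)) while the minimum cost still falls as M grows, mirroring the paper's headline finding.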
Method for exponentiating in cryptographic systems
Brickell, Ernest F.; Gordon, Daniel M.; McCurley, Kevin S.
1994-01-01
An improved cryptographic method utilizing exponentiation is provided which has the advantage of reducing the number of multiplications required to determine the legitimacy of a message or user. The basic method comprises the steps of selecting a key from a preapproved group of integer keys g; exponentiating the key by an integer value e, where e represents a digital signature, to generate a value g^e; transmitting the value g^e to a remote facility by a communications network; receiving the value g^e at the remote facility; and verifying the digital signature as originating from the legitimate user. The exponentiating step comprises the steps of initializing a plurality of memory locations with a plurality of values g^(x_i); computing … The United States Government has rights in this invention pursuant to Contract No. DE-AC04-76DP00789 between the Department of Energy and AT&T Company.
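The precomputation idea in the exponentiating step can be illustrated with the simplest table-based variant, precomputing g^(2^i) mod n; the patent's actual scheme is more elaborate, and the modulus and exponent below are purely illustrative:

```python
def precompute_powers(g, n, bits):
    """One-time table of g^(2^i) mod n for a fixed base g -- the
    'plurality of memory locations' holding values g^(x_i)."""
    table = [g % n]
    for _ in range(1, bits):
        table.append(table[-1] * table[-1] % n)
    return table

def fixed_base_pow(e, table, n):
    """Compute g^e mod n using only multiplications against the
    precomputed table (no squarings at signature time)."""
    acc = 1
    for i, bit_power in enumerate(table):
        if (e >> i) & 1:
            acc = acc * bit_power % n
    return acc

n, g = 1000003, 5            # toy modulus and base, not real parameters
table = precompute_powers(g, n, 64)
sig = fixed_base_pow(1234567, table, n)
```

Because the table depends only on the fixed base g, it is built once and reused for every signature, trading memory for multiplications.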
Universality of accelerating change
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Shlesinger, Michael F.
2018-03-01
On large time scales the progress of human technology follows an exponential growth trend that is termed accelerating change. The exponential growth trend is commonly considered to be the amalgamated effect of consecutive technology revolutions - where the progress carried in by each technology revolution follows an S-curve, and where the aging of each technology revolution drives humanity to push for the next technology revolution. Thus, as a collective, mankind is the 'intelligent designer' of accelerating change. In this paper we establish that the exponential growth trend - and only this trend - emerges universally, on large time scales, from systems that combine together two elements: randomness and amalgamation. Hence, the universal generation of accelerating change can be attained by systems with no 'intelligent designer'.
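A toy simulation, not the paper's derivation, showing amalgamation at work: summing logistic S-curves whose ceilings grow geometrically yields a near-exponential aggregate trend:

```python
import numpy as np

# Each "technology revolution" is a logistic S-curve; onsets are evenly
# spaced and each ceiling doubles the last (all parameters assumed).
t = np.linspace(0.0, 50.0, 501)
total = np.zeros_like(t)
for k in range(12):
    onset = 4.0 * k                  # revolution k begins at t = 4k
    ceiling = 2.0 ** k               # each revolution dwarfs the last
    total += ceiling / (1.0 + np.exp(-(t - onset)))

# the amalgamated curve grows roughly like 2**(t/4), i.e. log-slope ln(2)/4
log_total = np.log(total)
slope = np.polyfit(t[200:450], log_total[200:450], 1)[0]
```

The fitted log-slope sits near ln(2)/4 ≈ 0.17, i.e. the individually saturating S-curves amalgamate into an exponential growth trend.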
Bradley, D. Nathan; Tucker, Gregory E.
2013-01-01
A sediment particle traversing the fluvial system may spend the majority of the total transit time at rest, stored in various sedimentary deposits. Floodplains are among the most important of these deposits, with the potential to store large amounts of sediment for long periods of time. The virtual velocity of a sediment grain depends strongly on the amount of time spent in storage, but little is known about sediment storage times. Measurements of floodplain vegetation age have suggested that storage times are exponentially distributed, a case that arises when all the sediment on a floodplain is equally vulnerable to erosion in a given interval. This assumption has been incorporated into sediment routing models, despite some evidence that younger sediment is more likely to be eroded from floodplains than older sediment. We investigate the relationship between sediment age and erosion, which we term the “erosion hazard,” with a model of a meandering river that constructs its floodplain by lateral accretion. We find that the erosion hazard decreases with sediment age, leading to a storage time distribution that is not exponential. We propose an alternate model that requires that channel motion is approximately diffusive and results in a heavy tailed distribution of storage time. The model applies to timescales over which the direction of channel motion is uncorrelated. We speculate that the lower end of this range of time is set by the meander cutoff timescale and the upper end is set by processes that limit the width of the meander belt.
Hardware accelerator of convolution with exponential function for image processing applications
NASA Astrophysics Data System (ADS)
Panchenko, Ivan; Bucha, Victor
2015-12-01
In this paper we describe a Hardware Accelerator (HWA) for fast recursive approximation of separable convolution with an exponential function. This filter can be used in many Image Processing (IP) applications, e.g. depth-dependent image blur, image enhancement and disparity estimation. We have adapted the RTL implementation of this filter to provide maximum throughput within the constraints of the required memory bandwidth and hardware resources, yielding a power-efficient VLSI implementation.
Exponentially Stabilizing Robot Control Laws
NASA Technical Reports Server (NTRS)
Wen, John T.; Bayard, David S.
1990-01-01
A new class of exponentially stabilizing laws for joint-level control of robotic manipulators is introduced. In the case of set-point control, the approach offers the simplicity of the proportional/derivative control architecture. In the case of tracking control, the approach provides several important alternatives to the computed-torque method with respect to computational requirements and convergence. The new control laws can be modified in a simple fashion to obtain asymptotically stable adaptive control when the robot model and/or payload mass properties are unknown.
Pulsar recoil by large-scale anisotropies in supernova explosions.
Scheck, L; Plewa, T; Janka, H-Th; Kifonidis, K; Müller, E
2004-01-09
Assuming that the neutrino luminosity from the neutron star core is sufficiently high to drive supernova explosions by the neutrino-heating mechanism, we show that low-mode (l=1,2) convection can develop from random seed perturbations behind the shock. A slow onset of the explosion is crucial, requiring the core luminosity to vary slowly with time, in contrast to the burstlike exponential decay assumed in previous work. Gravitational and hydrodynamic forces by the globally asymmetric supernova ejecta were found to accelerate the remnant neutron star on a time scale of more than a second to velocities above 500 km s(-1), in agreement with observed pulsar proper motions.
Tilted hexagonal post arrays: DNA electrophoresis in anisotropic media.
Chen, Zhen; Dorfman, Kevin D
2014-02-01
Using Brownian dynamics simulations, we show that DNA electrophoresis in a hexagonal array of micron-sized posts changes qualitatively when the applied electric field vector is not coincident with the lattice vectors of the array. DNA electrophoresis in such "tilted" post arrays is superior to the standard "un-tilted" approach; while the time required to achieve a resolution of unity in a tilted post array is similar to that in an un-tilted array at low electric field strengths, this time (i) decreases exponentially with electric field strength in a tilted array and (ii) increases exponentially with electric field strength in an un-tilted array. Although the DNA dynamics in a post array are complicated, the electrophoretic mobility results indicate that the "free path", i.e. the average distance of ballistic trajectories of point-sized particles launched from random positions in the unit cell until they intersect the next post, is a useful proxy for the detailed DNA trajectories. The analysis of the free path reveals a fundamental connection between the anisotropy of the medium and DNA transport therein that goes beyond simply improving the separation device.
Deadline rush: a time management phenomenon and its mathematical description.
König, Cornelius J; Kleinmann, Martin
2005-01-01
A typical time management phenomenon is the rush before a deadline. Behavioral decision making research can be used to predict how behavior changes before a deadline. People are likely not to work on a project with a deadline in the far future because they generally discount future outcomes. Only when the deadline is close are people likely to work. On the basis of recent intertemporal choice experiments, the authors argue that a hyperbolic function should provide a more accurate description of the deadline rush than an exponential function predicted by an economic model of discounted utility. To show this, the fit of the hyperbolic and the exponential function were compared with data sets that describe when students study for exams. As predicted, the hyperbolic function fit the data significantly better than the exponential function. The implication for time management decisions is that they are most likely to be inconsistent over time (i.e., people make a plan how to use their time but do not follow it).
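A minimal illustration of the model comparison, using synthetic effort data rather than the study's exam data: fitting both discount functions by grid-search least squares:

```python
import numpy as np

# Synthetic "fraction of effort spent t days before the deadline",
# generated hyperbolically (k = 0.4 assumed) to mimic the reported result.
days_before = np.array([1.0, 2.0, 4.0, 7.0, 14.0, 28.0])
effort = 1.0 / (1.0 + 0.4 * days_before)

def best_fit(model):
    """Grid-search least-squares fit of the single parameter k."""
    ks = np.linspace(0.01, 2.0, 2000)
    errs = [float(np.sum((model(days_before, k) - effort) ** 2)) for k in ks]
    i = int(np.argmin(errs))
    return ks[i], errs[i]

_, err_hyp = best_fit(lambda t, k: 1.0 / (1.0 + k * t))   # hyperbolic
_, err_exp = best_fit(lambda t, k: np.exp(-k * t))        # exponential
```

On data with a hyperbolic shape, the exponential model cannot match both the steep near-deadline rise and the heavy far-future tail, so its residual error is larger; this mirrors the fit comparison the authors performed.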
NASA Astrophysics Data System (ADS)
Kamimura, Atsushi; Kaneko, Kunihiko
2018-03-01
Explanation of exponential growth in self-reproduction is an important step toward elucidation of the origins of life because optimization of the growth potential across rounds of selection is necessary for Darwinian evolution. To produce another copy with approximately the same composition, the exponential growth rates for all components have to be equal. How such balanced growth is achieved, however, is not a trivial question, because this kind of growth requires orchestrated replication of the components in stochastic and nonlinear catalytic reactions. By considering a mutually catalyzing reaction in two- and three-dimensional lattices, as represented by a cellular automaton model, we show that self-reproduction with exponential growth is possible only when the replication and degradation of one molecular species is much slower than those of the others, i.e., when there is a minority molecule. Here, the synergetic effect of molecular discreteness and crowding is necessary to produce the exponential growth. Otherwise, the growth curves show superexponential growth because of nonlinearity of the catalytic reactions or subexponential growth due to replication inhibition by overcrowding of molecules. Our study emphasizes that the minority molecular species in a catalytic reaction network is necessary for exponential growth at the primitive stage of life.
NASA Astrophysics Data System (ADS)
Shiau, Lie-Ding
2016-09-01
The pre-exponential factor and interfacial energy obtained from the metastable zone width (MSZW) data using the integral method proposed by Shiau and Lu [1] are compared in this study with those obtained from the induction time data using the conventional method (t(i) ∝ J(-1)) for three crystallization systems, including potassium sulfate in water in a 200 mL vessel, borax decahydrate in water in a 100 mL vessel and butyl paraben in ethanol in a 5 mL tube. The results indicate that the pre-exponential factor and interfacial energy calculated from the induction time data based on classical nucleation theory are consistent with those calculated from the MSZW data using the same detection technique for the studied systems.
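The conventional induction-time analysis mentioned above can be sketched as follows, assuming the classical-nucleation form J = A·exp(-B/ln²S) with made-up parameters (the three experimental systems' values are not reproduced):

```python
import numpy as np

# Since t_i is proportional to 1/J, ln(t_i) is linear in 1/ln(S)**2:
# the slope gives B (hence the interfacial energy) and the intercept
# gives the pre-exponential factor A.
A_true, B_true = 1.0e6, 2.5                # assumed kinetic parameters
S = np.array([1.2, 1.3, 1.5, 1.8, 2.2])   # supersaturation ratios
t_i = 1.0 / (A_true * np.exp(-B_true / np.log(S) ** 2))

x = 1.0 / np.log(S) ** 2
slope, intercept = np.polyfit(x, np.log(t_i), 1)
B_est = slope                              # recovers B_true
A_est = np.exp(-intercept)                 # recovers A_true
```

In practice the interfacial energy follows from B together with temperature and molecular volume; here the linearized regression step is the point being illustrated.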
Importance sampling large deviations in nonequilibrium steady states. I.
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T
2018-03-28
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
Kim, Keonwook
2013-08-23
The generic properties of an acoustic signal provide numerous benefits for localization using energy-based methods over a deployed wireless sensor network (WSN). However, the signal generated by a stationary target consumes a significant amount of bandwidth and power in the system without providing further position information. For vehicle localization, this paper proposes a novel proximity velocity vector estimator (PVVE) node architecture that captures the energy from a moving vehicle and rejects the signal from motionless automobiles around the WSN node. A cascade structure of an analog envelope detector and a digital exponential smoothing filter yields a velocity-vector-sensitive output with low analog-circuit and digital-computation complexity. The optimal parameters of the exponential smoothing filter are obtained by analytical and mathematical methods for maximum variation over the vehicle speed. Simulations based on the acoustic field parameters demonstrate that, for stationary targets, the system significantly reduces the communication requirements with low complexity and can be expected to extend the operation time considerably.
The Analysis of Fluorescence Decay by a Method of Moments
Isenberg, Irvin; Dyson, Robert D.
1969-01-01
The fluorescence decay of the excited state of most biopolymers, and biopolymer conjugates and complexes, is not, in general, a simple exponential. The method of moments is used to establish a means of analyzing such multi-exponential decays. The method is tested by the use of computer simulated data, assuming that the limiting error is determined by noise generated by a pseudorandom number generator. Multi-exponential systems with relatively closely spaced decay constants may be successfully analyzed. The analyses show the requirements, in terms of precision, that data must meet. The results may be used both as an aid in the design of equipment and in the analysis of data subsequently obtained. PMID:5353139
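One way to see how moments determine a multi-exponential decay (a Prony-style sketch for two components, not the authors' exact algorithm):

```python
import math
import numpy as np

# For f(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2), the reduced moments
# G[k] = m_k/k! = a1*tau1**(k+1) + a2*tau2**(k+1) satisfy a two-term
# linear recurrence whose characteristic roots are the decay times.
a1, tau1, a2, tau2 = 2.0, 1.0, 1.0, 4.0    # assumed decay parameters
t = np.arange(0.0, 80.0, 0.01)
f = a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

G = [np.trapz(t**k * f, t) / math.factorial(k) for k in range(4)]
# solve G[2] = p*G[1] - q*G[0] and G[3] = p*G[2] - q*G[1] for p, q;
# tau1 and tau2 are then the roots of x**2 - p*x + q = 0
M = np.array([[G[1], -G[0]], [G[2], -G[1]]])
p, q = np.linalg.solve(M, np.array([G[2], G[3]]))
taus = sorted(np.roots([1.0, -p, q]).real)   # recovers tau1 and tau2
```

With noise-free data the recurrence is exact; the paper's contribution is precisely quantifying how much noise such a moment analysis can tolerate for closely spaced decay constants.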
`Un-Darkening' the Cosmos: New laws of physics for an expanding universe
NASA Astrophysics Data System (ADS)
George, William
2017-11-01
Dark matter is believed to exist because Newton's laws are inconsistent with the visible matter in galaxies. Dark energy is necessary to explain the expansion of the universe. Earlier work (also available from www.turbulence-online.com) suggested that the equations themselves might be in error because they implicitly assume that time is measured in linear increments. This presentation couples the possible non-linearity of time with an expanding universe. Maxwell's equations for an expanding universe with constant speed of light are shown to be invariant only if time itself is non-linear. Both linear and exponential expansion rates are considered. A linearly expanding universe corresponds to logarithmic time, while exponential expansion corresponds to exponentially varying time. Revising Newton's laws using either leads to different definitions of mass and kinetic energy, both of which appear time-dependent if expressed in linear time, and provides the possibility of explaining the astronomical observations without either dark matter or dark energy. We would never have noticed the differences on Earth, since the leading term in both expansions is linear in δt/t0, where t0 is the current age.
Exponential localization of Wannier functions in insulators.
Brouder, Christian; Panati, Gianluca; Calandra, Matteo; Mourougane, Christophe; Marzari, Nicola
2007-01-26
The exponential localization of Wannier functions in two or three dimensions is proven for all insulators that display time-reversal symmetry, settling a long-standing conjecture. Our proof relies on the equivalence between the existence of analytic quasi-Bloch functions and the nullity of the Chern numbers (or of the Hall current) for the system under consideration. The same equivalence implies that Chern insulators cannot display exponentially localized Wannier functions. An explicit condition for the reality of the Wannier functions is identified.
Note: Attenuation motion of acoustically levitated spherical rotor
NASA Astrophysics Data System (ADS)
Lü, P.; Hong, Z. Y.; Yin, J. F.; Yan, N.; Zhai, W.; Wang, H. P.
2016-11-01
Here we observe the attenuation motion of spherical rotors levitated by near-field acoustic radiation force and analyze the factors that affect the duration time of free rotation. It is found that the rotating speed of freely rotating rotor decreases exponentially with respect to time. The time constant of exponential attenuation motion depends mainly on the levitation height, the mass of rotor, and the depth of concave ultrasound emitter. Large levitation height, large mass of rotor, and small depth of concave emitter are beneficial to increase the time constant and hence extend the duration time of free rotation.
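Extracting the time constant of such an exponential attenuation can be sketched with a log-linear fit over synthetic rotation-speed data (the paper's measured values are not reproduced):

```python
import numpy as np

# omega(t) = omega0 * exp(-t/tau): taking logs makes the fit linear,
# and the time constant is minus the reciprocal of the slope.
omega0, tau = 30.0, 12.0            # rev/s and s, illustrative values
t = np.linspace(0.0, 40.0, 81)
omega = omega0 * np.exp(-t / tau)

slope, intercept = np.polyfit(t, np.log(omega), 1)
tau_est = -1.0 / slope              # estimated time constant
omega0_est = np.exp(intercept)      # estimated initial rotation speed
```

Comparing tau_est across levitation heights, rotor masses, and emitter depths is how the dependencies reported in the abstract would be quantified.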
Nonlinear stability of the 1D Boltzmann equation in a periodic box
NASA Astrophysics Data System (ADS)
Wu, Kung-Chien
2018-05-01
We study the nonlinear stability of the Boltzmann equation in the 1D periodic box, with box size determined by the Knudsen number. The convergence rate is algebraic in the small-time region and exponential in the large-time region, and the exponential rate depends on the size of the domain (the Knudsen number). The problem is highly nonlinear, and hence more careful analysis is needed to control the nonlinear term.
Understanding Exponential Growth: As Simple as a Drop in a Bucket.
ERIC Educational Resources Information Center
Goldberg, Fred; Shuman, James
1984-01-01
Provides procedures for a simple laboratory activity on exponential growth and its characteristic doubling time. The equipment needed consists of a large plastic bucket, an eyedropper, a stopwatch, an assortment of containers and graduated cylinders, and a supply of water. (JN)
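The arithmetic behind the activity's characteristic doubling time can be sketched as follows (the 7% growth rate per step is an arbitrary example, not a figure from the article):

```python
import math

def doubling_time(rate_per_step):
    """Steps needed for continuous exponential growth exp(r*t) to double:
    the doubling time ln(2)/r is constant, whatever the current amount."""
    return math.log(2.0) / rate_per_step

def amount_after(n_steps, start=1.0, rate=0.07):
    """Amount of water after n_steps of continuous growth at `rate`."""
    return start * math.exp(rate * n_steps)

t2 = doubling_time(0.07)   # about 9.9 steps at 7% per step
```

The constant doubling time, independent of how full the bucket already is, is exactly the counterintuitive property the classroom activity is designed to make tangible.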
Memory behaviors of entropy production rates in heat conduction
NASA Astrophysics Data System (ADS)
Li, Shu-Nan; Cao, Bing-Yang
2018-02-01
Based on the relaxation time approximation and a first-order expansion, memory behaviors in heat conduction are found between the macroscopic and Boltzmann-Gibbs-Shannon (BGS) entropy production rates, with exponentially decaying memory kernels. In the frameworks of classical irreversible thermodynamics (CIT) and BGS statistical mechanics, the memory dependence on the integrated history is unidirectional, while for the extended irreversible thermodynamics (EIT) and BGS entropy production rates, the memory dependences are bidirectional and coexist with the linear terms. When the macroscopic and microscopic relaxation times satisfy a specific relationship, the entropic memory dependences are eliminated. There also exist initial effects in the entropic memory behaviors, which decay exponentially. The second-order term, which can be understood as the global non-equilibrium degree, is also discussed. Its effects consist of three parts: a memory dependence, an initial value, and a linear term. The corresponding memory kernels are still exponential, and the initial effects of the global non-equilibrium degree also decay exponentially.
Corzett, Christopher H; Goodman, Myron F; Finkel, Steven E
2013-06-01
Escherichia coli DNA polymerases (Pol) II, IV, and V serve dual roles by facilitating efficient translesion DNA synthesis while simultaneously introducing genetic variation that can promote adaptive evolution. Here we show that these alternative polymerases are induced as cells transition from exponential to long-term stationary-phase growth in the absence of induction of the SOS regulon by external agents that damage DNA. By monitoring the relative fitness of isogenic mutant strains expressing only one alternative polymerase over time, spanning hours to weeks, we establish distinct growth phase-dependent hierarchies of polymerase mutant strain competitiveness. Pol II confers a significant physiological advantage by facilitating efficient replication and creating genetic diversity during periods of rapid growth. Pol IV and Pol V make the largest contributions to evolutionary fitness during long-term stationary phase. Consistent with their roles providing both a physiological and an adaptive advantage during stationary phase, the expression patterns of all three SOS polymerases change during the transition from log phase to long-term stationary phase. Compared to the alternative polymerases, Pol III transcription dominates during mid-exponential phase; however, its abundance decreases to <20% during long-term stationary phase. Pol IV transcription dominates as cells transition out of exponential phase into stationary phase and a burst of Pol V transcription is observed as cells transition from death phase to long-term stationary phase. These changes in alternative DNA polymerase transcription occur in the absence of SOS induction by exogenous agents and indicate that cell populations require appropriate expression of all three alternative DNA polymerases during exponential, stationary, and long-term stationary phases to attain optimal fitness and undergo adaptive evolution.
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome.A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as: progression for >15% volume increase, regression for ≤15% decrease, and stabilization for ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model.The overall tumor control rate was 94.1% in the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI) as tested by Pearson correlation coefficient (0.9).Transient progression of PAs post-CK SRS was seen in 62.5% of the patients receiving CK SRS, and it was not predictive of eventual volume regression or progression. 
A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled.
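A hypothetical reading of a three-point exponential decay model (the study's exact formulation is not given in the abstract): with volumes observed at three equally spaced times, the plateau volume and rate constant follow in closed form:

```python
import math

def fit_exponential_decay(v0, v1, v2, dt):
    """Three-point fit of V(t) = v_inf + (v0 - v_inf)*exp(-k*t) from
    volumes at t = 0, dt and 2*dt. The successive differences form a
    geometric sequence with ratio exp(-k*dt)."""
    r = (v2 - v1) / (v1 - v0)                          # equals exp(-k*dt)
    k = -math.log(r) / dt                              # decay rate
    v_inf = (v0 * v2 - v1 * v1) / (v0 + v2 - 2.0 * v1) # plateau volume
    return v_inf, k

# synthetic tumor volumes (cm^3) shrinking toward 1.2 with k = 0.15/month
v = lambda t: 1.2 + (4.0 - 1.2) * math.exp(-0.15 * t)
v_inf, k = fit_exponential_decay(v(0.0), v(6.0), v(12.0), 6.0)
```

Such a fitted curve lets the eventual (plateau) volume be projected from early follow-up scans, which is the predictive use the abstract suggests.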
Qi, Donglian; Liu, Meiqin; Qiu, Meikang; Zhang, Senlin
2010-08-01
This brief studies exponential H(infinity) synchronization of a class of general discrete-time chaotic neural networks with external disturbance. On the basis of the drive-response concept and H(infinity) control theory, and using a Lyapunov-Krasovskii (or Lyapunov) functional, state feedback controllers are established that not only guarantee exponentially stable synchronization between two general chaotic neural networks with or without time delays, but also reduce the effect of the external disturbance on the synchronization error to a minimal H(infinity) norm constraint. The proposed controllers can be obtained by solving convex optimization problems represented by linear matrix inequalities. Most discrete-time chaotic systems with or without time delays, such as Hopfield neural networks, cellular neural networks, bidirectional associative memory networks, recurrent multilayer perceptrons, Cohen-Grossberg neural networks, Chua's circuits, etc., can be transformed into this general chaotic neural network, for which the H(infinity) synchronization controller is then designed in a unified way. Finally, some illustrative examples with simulations demonstrate the effectiveness of the proposed methods.
Stochastic Individual-Based Modeling of Bacterial Growth and Division Using Flow Cytometry.
García, Míriam R; Vázquez, José A; Teixeira, Isabel G; Alonso, Antonio A
2017-01-01
A realistic description of the variability in bacterial growth and division is critical to produce reliable predictions of safety risks along the food chain. Individual-based modeling of bacteria provides the theoretical framework to deal with this variability, but it requires information about the individual behavior of bacteria inside populations. In this work, we overcome this problem by estimating the individual behavior of bacteria from population statistics obtained with flow cytometry. For this objective, a stochastic individual-based modeling framework is defined based on standard assumptions during division and exponential growth. The unknown single-cell parameters required for running the individual-based simulations, such as the cell size growth rate, are estimated from the flow cytometry data. Instead of using the individual-based model directly, we make use of a modified Fokker-Planck equation; this single equation simulates the population statistics as a function of the unknown single-cell parameters. We test the validity of the approach by modeling the growth and division of Pediococcus acidilactici within the exponential phase. The estimates reveal the statistics of cell growth and division using flow cytometry data from only a single time point. From the relationship between the mother and daughter volumes, we also predict that P. acidilactici divides by two successive parallel planes.
Li, Jiarong; Jiang, Haijun; Hu, Cheng; Yu, Zhiyong
2018-03-01
This paper is devoted to the exponential synchronization, finite-time synchronization, and fixed-time synchronization of Cohen-Grossberg neural networks (CGNNs) with discontinuous activations and time-varying delays. A discontinuous feedback controller and a novel adaptive feedback controller are designed to realize global exponential synchronization, finite-time synchronization, and fixed-time synchronization by adjusting the values of the parameter ω in the controller. Furthermore, the settling time for fixed-time synchronization derived in this paper is less conservative and more accurate. Finally, some numerical examples are provided to show the effectiveness and flexibility of the results derived in this paper.
Quasiprobability behind the out-of-time-ordered correlator
NASA Astrophysics Data System (ADS)
Yunger Halpern, Nicole; Swingle, Brian; Dressel, Justin
2018-04-01
Two topics, evolving rapidly in separate fields, were combined recently: the out-of-time-ordered correlator (OTOC) signals quantum-information scrambling in many-body systems. The Kirkwood-Dirac (KD) quasiprobability represents operators in quantum optics. The OTOC was shown to equal a moment of a summed quasiprobability [Yunger Halpern, Phys. Rev. A 95, 012120 (2017), 10.1103/PhysRevA.95.012120]. That quasiprobability, we argue, is an extension of the KD distribution. We explore the quasiprobability's structure from experimental, numerical, and theoretical perspectives. First, we simplify and analyze Yunger Halpern's weak-measurement and interference protocols for measuring the OTOC and its quasiprobability. We decrease, exponentially in system size, the number of trials required to infer the OTOC from weak measurements. We also construct a circuit for implementing the weak-measurement scheme. Next, we calculate the quasiprobability (after coarse graining) numerically and analytically: we simulate a transverse-field Ising model first. Then, we calculate the quasiprobability averaged over random circuits, which model chaotic dynamics. The quasiprobability, we find, distinguishes chaotic from integrable regimes. We observe nonclassical behaviors: the quasiprobability typically has negative components. It becomes nonreal in some regimes. The onset of scrambling breaks a symmetry that bifurcates the quasiprobability, as in classical-chaos pitchforks. Finally, we present mathematical properties. We define an extended KD quasiprobability that generalizes the KD distribution. The quasiprobability obeys a Bayes-type theorem, for example, that exponentially decreases the memory required to calculate weak values, in certain cases. A time-ordered correlator analogous to the OTOC, insensitive to quantum-information scrambling, depends on a quasiprobability closer to a classical probability. 
This work not only illuminates the OTOC's underpinnings, but also generalizes quasiprobability theory and motivates immediate-future weak-measurement challenges.
1986-07-16
present the design and results from the current flash spectroscopic system at the R.I. A hybrid mode-locked, cavity-dumped dye laser is used to seed a... date require a sum of at least three exponentials to achieve an acceptable fit. Lettuce chloroplasts exhibit decay times of 100 psec., 500-600 psec... other lettuce preparations. A PS1 preparation from the cyanobacterium Chlorogloea fritschii, which has been thoroughly characterised previously [2
Dynamics of polymer nanoparticles and chains.
NASA Astrophysics Data System (ADS)
Streletzky, Kiril; McKenna, John; Hillier, Gerry
2006-10-01
We present a Dynamic Light Scattering study of the transport properties of polymer chains and of nanoparticles made from the same starting solution. The spectra of both systems are highly non-exponential, requiring a spectral time-moment analysis. Our findings indicate the existence of several relaxation modes in both systems. A comparison of the mean relaxation rates and diffusion coefficients of the different modes in the two systems under good solvent conditions will be reported. The temperature sensitivity of the polymer nanoparticles and its possible applications in the pharmaceutical, coatings, and petroleum industries will also be discussed.
Improved result on stability analysis of discrete stochastic neural networks with time delay
NASA Astrophysics Data System (ADS)
Wu, Zhengguang; Su, Hongye; Chu, Jian; Zhou, Wuneng
2009-04-01
This Letter investigates the problem of exponential stability for discrete stochastic time-delay neural networks. By defining a novel Lyapunov functional, an improved delay-dependent exponential stability criterion is established using a linear matrix inequality (LMI) approach. Meanwhile, the computational complexity of the newly established stability condition is reduced because fewer variables are involved. A numerical example is given to illustrate the effectiveness and benefits of the proposed method.
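As a minimal numeric sketch (not the Letter's LMI criterion, which also covers the stochastic terms), exponential stability of the deterministic linear delay recursion x(k+1) = A x(k) + B x(k-d) can be checked by augmenting the delayed states into one companion matrix and computing its spectral radius; the matrices below are illustrative:

```python
import numpy as np

def spectral_radius_delay(A, B, d):
    """Spectral radius of the augmented system for x(k+1) = A x(k) + B x(k-d).

    In the deterministic linear case, a spectral radius below 1 is
    equivalent to exponential stability; this is a sanity check only,
    not the delay-dependent LMI condition of the Letter.
    """
    n = A.shape[0]
    m = n * (d + 1)
    M = np.zeros((m, m))
    M[:n, :n] = A            # dependence on x(k)
    M[:n, n * d:] = B        # dependence on x(k-d)
    # shift registers carrying x(k-j) forward one step
    for j in range(1, d + 1):
        M[n * j:n * (j + 1), n * (j - 1):n * j] = np.eye(n)
    return max(abs(np.linalg.eigvals(M)))

A = np.array([[0.5, 0.1], [0.0, 0.4]])
B = np.array([[0.1, 0.0], [0.05, 0.1]])
rho = spectral_radius_delay(A, B, d=2)  # < 1 here, so exponentially stable
```

The augmented-state trick trades a delay-difference equation for a larger delay-free one, at the cost of a state dimension that grows with the delay.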
NASA Astrophysics Data System (ADS)
Hayat, Tanzila; Nadeem, S.
2018-03-01
This paper examines three-dimensional Eyring-Powell fluid flow over an exponentially stretching surface with heterogeneous-homogeneous chemical reactions. A new model of heat flux suggested by Cattaneo and Christov is employed to study the effects of thermal relaxation time. From the present analysis we observe an inverse relationship between temperature and thermal relaxation time. The temperature in the Cattaneo-Christov heat flux model is lower than in the classical Fourier model. In this paper the three-dimensional Cattaneo-Christov heat flux model over an exponentially stretching surface is calculated for the first time in the literature. For negative values of the temperature exponent, the temperature profile first rises to its maximum value and then gradually declines to zero, which shows the occurrence of the "Sparrow-Gregg hill" (SGH) phenomenon. Also, for higher values of the strength-of-reaction parameters, the concentration profile decreases.
Linear or Exponential Number Lines
ERIC Educational Resources Information Center
Stafford, Pat
2011-01-01
Having decided to spend some time looking at one's understanding of numbers, the author was inspired by "Alex's Adventures in Numberland," by Alex Bellos to look at one's innate appreciation of number. Bellos quotes research studies suggesting that an individual's natural appreciation of numbers is more likely to be exponential rather…
Policy Effects in Hyperbolic vs. Exponential Models of Consumption and Retirement
Gustman, Alan L.; Steinmeier, Thomas L.
2012-01-01
This paper constructs a structural retirement model with hyperbolic preferences and uses it to estimate the effect of several potential Social Security policy changes. Estimated effects of policies are compared using two models, one with hyperbolic preferences and one with standard exponential preferences. Sophisticated hyperbolic discounters may accumulate substantial amounts of wealth for retirement. We find it is frequently difficult to distinguish empirically between models with the two types of preferences on the basis of asset accumulation paths or consumption paths around the period of retirement. Simulations suggest that, despite the much higher initial time preference rate, individuals with hyperbolic preferences may actually value a real annuity more than individuals with exponential preferences who have accumulated roughly equal amounts of assets. This appears to be especially true for individuals with relatively high time preference rates or who have low assets for whatever reason. This affects the tradeoff between current benefits and future benefits on which many of the retirement incentives of the Social Security system rest. Simulations involving increasing the early entitlement age and increasing the delayed retirement credit do not show a great deal of difference whether exponential or hyperbolic preferences are used, but simulations for eliminating the earnings test show a non-trivially greater effect when exponential preferences are used. PMID:22711946
Constraining f(T) teleparallel gravity by big bang nucleosynthesis: f(T) cosmology and BBN.
Capozziello, S; Lambiase, G; Saridakis, E N
2017-01-01
We use Big Bang Nucleosynthesis (BBN) observational data on the primordial abundance of light elements to constrain f(T) gravity. The three most studied viable f(T) models, namely the power-law, the exponential and the square-root exponential, are considered, and the BBN bounds are adopted in order to extract constraints on their free parameters. For the power-law model, we find that the constraints are in agreement with those obtained using late-time cosmological data. For the exponential and the square-root exponential models, we show that for reliable regions of parameter space they always satisfy the BBN bounds. We conclude that viable f(T) models can successfully satisfy the BBN constraints.
NASA Astrophysics Data System (ADS)
Ozawa, T.; Miyagi, Y.
2017-12-01
Shinmoe-dake, located in SW Japan, erupted in January 2011, and lava accumulated in the crater (e.g., Ozawa and Kozono, EPS, 2013). The last Vulcanian eruption occurred in September 2011, and no eruption has occurred since. Miyagi et al. (GRL, 2014) analyzed TerraSAR-X and Radarsat-2 SAR data acquired after the last eruption and found continuous inflation in the crater. The inflation decayed with time but had not terminated by May 2013. Since the time series of the inflation volume change rate fitted well to an exponential function with a constant term, we suggested that lava extrusion had continued over the long term due to deflation of a shallow magma source and to magma supply from a deeper source. To investigate the deformation after that period, we applied InSAR to Sentinel-1 and ALOS-2 SAR data. Inflation decayed further and had almost terminated by the end of 2016, meaning that this deformation continued for more than five years after the last eruption. We have found that the time series of the inflation volume change rate fits better to a double-exponential function than to a single-exponential function with a constant term. The exponential component with the short time constant settled within about one year of the last eruption. Although the InSAR result from TerraSAR-X data of November 2011 and May 2013 indicated deflation of a shallow source under the crater, such deformation has not been obtained from recent SAR data. This suggests that this component was due to deflation of a shallow magma source with excess pressure. In this study, we found that the long-term component may also have decayed exponentially; this factor may be deflation of a deep source or delayed vesiculation.
Non-exponential kinetics of unfolding under a constant force.
Bell, Samuel; Terentjev, Eugene M
2016-11-14
We examine the population dynamics of naturally folded globular polymers, with a super-hydrophobic "core" inserted at a prescribed point in the polymer chain, unfolding under an application of external force, as in AFM force-clamp spectroscopy. This acts as a crude model for a large class of folded biomolecules with hydrophobic or hydrogen-bonded cores. We find that the introduction of super-hydrophobic units leads to a stochastic variation in the unfolding rate, even when the positions of the added monomers are fixed. This leads to the average non-exponential population dynamics, which is consistent with a variety of experimental data and does not require any intrinsic quenched disorder that was traditionally thought to be at the origin of non-exponential relaxation laws.
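Independent of the paper's specific core-insertion mechanism, the generic fact that a distribution of unfolding rates produces non-exponential ensemble decay can be illustrated numerically. If the rate k is gamma-distributed (an illustrative choice, not the paper's model), the ensemble survival is exactly a power law:

```python
import numpy as np
from scipy.stats import gamma
from scipy.integrate import quad

# Ensemble survival when the unfolding rate k varies across molecules:
#   S(t) = integral of exp(-k t) p(k) dk
# For k ~ Gamma(shape=a, scale=beta) this integral has the closed form
#   S(t) = (1 + beta t)^(-a),  a power law, not a single exponential.
a, beta = 2.0, 1.0   # illustrative shape and scale
t = 5.0

numeric, _ = quad(lambda k: np.exp(-k * t) * gamma.pdf(k, a, scale=beta),
                  0, np.inf)
closed_form = (1 + beta * t) ** (-a)
# numeric and closed_form agree to quadrature accuracy
```

The heavier the rate distribution's tail, the slower the resulting relaxation law decays.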
Discrete-time BAM neural networks with variable delays
NASA Astrophysics Data System (ADS)
Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi
2007-07-01
This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.
Discrete-time bidirectional associative memory neural networks with variable delays
NASA Astrophysics Data System (ADS)
Liang, J.; Cao, J.; Ho, D. W. C.
2005-02-01
Based on the linear matrix inequality (LMI), some sufficient conditions are presented in this Letter for the existence, uniqueness and global exponential stability of the equilibrium point of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Some of the stability criteria obtained in this Letter are delay-dependent and some are delay-independent; they are less conservative than the ones reported so far in the literature. Furthermore, the results provide one more set of easily verified criteria for determining the exponential stability of discrete-time BAM neural networks.
Ulrich, Alexander; Andersen, Kasper R.; Schwartz, Thomas U.
2012-01-01
We present a fast, reliable and inexpensive restriction-free cloning method for seamless DNA insertion into any plasmid without sequence limitation. Exponential megapriming PCR (EMP) cloning requires two consecutive PCR steps and can be carried out in one day. We show that EMP cloning has a higher efficiency than restriction-free (RF) cloning, especially for long inserts above 2.5 kb. EMP further enables simultaneous cloning of multiple inserts. PMID:23300917
Exponential model for option prices: Application to the Brazilian market
NASA Astrophysics Data System (ADS)
Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.
2016-03-01
In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.
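The Black-Scholes benchmark used in such comparisons has a standard closed form; the sketch below implements it (the paper's exponential-returns pricing formula is not reproduced, and all inputs are illustrative):

```python
import numpy as np
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Standard Black-Scholes price of a European call.

    S: spot, K: strike, T: time to expiry (years),
    r: risk-free rate, sigma: volatility.
    """
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Illustrative at-the-money quarter-year option
price = black_scholes_call(S=100, K=100, T=0.25, r=0.05, sigma=0.3)
```

The empirical model of the paper replaces the Gaussian return density implicit in this formula with an exponentially distributed one, which matters most near expiration, where fat tails dominate.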
NASA Astrophysics Data System (ADS)
Ernazarov, K. K.
2017-12-01
We consider an (m + 2)-dimensional Einstein-Gauss-Bonnet (EGB) model with a cosmological Λ-term. We restrict the metrics to be diagonal and find, for a certain Λ = Λ(m), a class of cosmological solutions with non-exponential time dependence of two scale factors of dimensions m > 2 and 1. Any solution from this class describes an accelerated expansion of the m-dimensional subspace and tends asymptotically to an isotropic solution with exponential dependence of the scale factors.
Quantum Walk Schemes for Universal Quantum Computation
NASA Astrophysics Data System (ADS)
Underwood, Michael S.
Random walks are a powerful tool for the efficient implementation of algorithms in classical computation. Their quantum-mechanical analogues, called quantum walks, hold similar promise. Quantum walks provide a model of quantum computation that has recently been shown to be equivalent in power to the standard circuit model. As in the classical case, quantum walks take place on graphs and can undergo discrete or continuous evolution, though quantum evolution is unitary and therefore deterministic until a measurement is made. This thesis considers the usefulness of continuous-time quantum walks to quantum computation from the perspectives of both their fundamental power under various formulations, and their applicability in practical experiments. In one extant scheme, logical gates are effected by scattering processes. The results of an exhaustive search for single-qubit operations in this model are presented. It is shown that the number of distinct operations increases exponentially with the number of vertices in the scattering graph. A catalogue of all graphs on up to nine vertices that implement single-qubit unitaries at a specific set of momenta is included in an appendix. I develop a novel scheme for universal quantum computation called the discontinuous quantum walk, in which a continuous-time quantum walker takes discrete steps of evolution via perfect quantum state transfer through small 'widget' graphs. The discontinuous quantum-walk scheme requires an exponentially sized graph, as do prior discrete and continuous schemes. To eliminate the inefficient vertex resource requirement, a computation scheme based on multiple discontinuous walkers is presented. In this model, n interacting walkers inhabiting a graph with 2n vertices can implement an arbitrary quantum computation on an input of length n, an exponential savings over previous universal quantum walk schemes. 
This is the first quantum walk scheme that allows for the application of quantum error correction. The many-particle quantum walk can be viewed as a single quantum walk undergoing perfect state transfer on a larger weighted graph, obtained via equitable partitioning. I extend this formalism to non-simple graphs. Examples of the application of equitable partitioning to the analysis of quantum walks and many-particle quantum systems are discussed.
Reid, Michael S; Le, X Chris; Zhang, Hongquan
2018-04-27
Isothermal exponential amplification techniques, such as strand-displacement amplification (SDA), rolling circle amplification (RCA), loop-mediated isothermal amplification (LAMP), nucleic acid sequence-based amplification (NASBA), helicase-dependent amplification (HDA), and recombinase polymerase amplification (RPA), have great potential for on-site, point-of-care, and in-situ assay applications. These amplification techniques eliminate the need for temperature cycling required for polymerase chain reaction (PCR) while achieving comparable amplification yield. We highlight here recent advances in exponential amplification reaction (EXPAR) for the detection of nucleic acids, proteins, enzyme activities, cells, and metal ions. We discuss design strategies, enzyme reactions, detection techniques, and key features. Incorporation of fluorescence, colorimetric, chemiluminescence, Raman, and electrochemical approaches enables highly sensitive detection of a variety of targets. Remaining issues, such as undesirable background amplification resulting from non-specific template interactions, must be addressed to further improve isothermal and exponential amplification techniques. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Experimental Magnetohydrodynamic Energy Extraction from a Pulsed Detonation
2015-03-01
experimental data taken in this thesis will follow voltage profiles similar to Fig. 2. Notice the initial section in Fig. 2 shows exponential decay consistent...equal that time constant. The exponential curves in Fig. 2 show how changing the time constant can change the charge and/or discharge rate of the...see Fig. 1), at a sampling rate of 1 MHz. Shielded wire and a common ground were used throughout the DAQ system to avoid capacitive issues in the
Evo-SETI: A Mathematical Tool for Cladistics, Evolution, and SETI.
Maccone, Claudio
2017-04-06
The discovery of new exoplanets makes us wonder where each new exoplanet stands along its way to developing life as we know it on Earth. Our Evo-SETI Theory is a mathematical way to face this problem. We describe cladistics and evolution by virtue of a few statistical equations based on lognormal probability density functions (pdf) in time. We call b-lognormal a lognormal pdf starting at instant b (birth). Then, the lifetime of any living being becomes a suitable b-lognormal in time. Next, our "Peak-Locus Theorem" translates cladistics: each species created by evolution is a b-lognormal whose peak lies on the exponentially growing number of living species. This exponential is the mean value of a stochastic process called "Geometric Brownian Motion" (GBM). Past mass extinctions correspond to lows of this GBM. In addition, the Shannon Entropy (with a reversed sign) of each b-lognormal is the measure of how evolved that species is, and we call it EvoEntropy. The "molecular clock" is re-interpreted as the EvoEntropy straight line in time whenever the mean value is exactly the GBM exponential. We were also able to extend the Peak-Locus Theorem to any mean value other than the exponential. For example, we derive in this paper for the first time the EvoEntropy corresponding to the Markov-Korotayev (2007) "cubic" evolution: a curve of logarithmic increase.
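The statement that the GBM mean value is an exponential can be checked by direct simulation; a small sketch with illustrative parameters (not the theory's fitted values):

```python
import numpy as np

# Geometric Brownian Motion: N(t) = N0 * exp((mu - s^2/2) t + s W(t)),
# whose mean value is the exponential N0 * exp(mu t).
rng = np.random.default_rng(0)
N0, mu, s, t = 1.0, 0.1, 0.3, 2.0

W = rng.normal(0.0, np.sqrt(t), size=200_000)        # W(t) ~ N(0, t)
samples = N0 * np.exp((mu - 0.5 * s**2) * t + s * W)

empirical_mean = samples.mean()
exact_mean = N0 * np.exp(mu * t)                     # the GBM exponential
```

The Itō correction term -s²/2 in the exponent is what makes the sample mean land on N0·exp(μt) rather than on the median path.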
NASA Astrophysics Data System (ADS)
Figueroa-Morales, N.; Rivera, A.; Altshuler, E.; Darnige, T.; Douarche, C.; Soto, R.; Lindner, A.; Clément, E.
The motility of E. coli bacteria is described as a run-and-tumble process. Changes of direction correspond to a switch in the flagellar motor rotation. The run-time distribution is usually described as an exponential decay with a characteristic time close to 1 s. Remarkably, it has been demonstrated that the generic response for the distribution of run times is not exponential but a heavy-tailed power-law decay, which is at odds with the motility findings. We investigate the consequences of the motor statistics for macroscopic bacterial transport. During upstream contamination processes in very confined channels, we have identified very long contamination tongues. Using a stochastic model considering bacterial dwelling times on the surfaces related to the run times, we are able to reproduce qualitatively and quantitatively the evolution of the contamination profiles when considering the power-law run-time distribution. However, the model fails to reproduce the qualitative dynamics when the classical exponential run-and-tumble distribution is considered. Moreover, we have corroborated the existence of a power-law run-time distribution by means of 3D Lagrangian tracking. We then argue that the macroscopic transport of bacteria is essentially determined by the motor rotation statistics.
Separating OR, SUM, and XOR Circuits.
Find, Magnus; Göös, Mika; Järvisalo, Matti; Kaski, Petteri; Koivisto, Mikko; Korhonen, Janne H
2016-08-01
Given a boolean n × n matrix A we consider arithmetic circuits for computing the transformation x ↦ Ax over different semirings. Namely, we study three circuit models: monotone OR-circuits, monotone SUM-circuits (addition of non-negative integers), and non-monotone XOR-circuits (addition modulo 2). Our focus is on separating OR-circuits from the two other models in terms of circuit complexity: We show how to obtain matrices that admit OR-circuits of size O(n), but require SUM-circuits of size Ω(n^{3/2}/log^2 n). We consider the task of rewriting a given OR-circuit as a XOR-circuit and prove that any subquadratic-time algorithm for this task violates the strong exponential time hypothesis.
Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin
2018-07-01
To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T magnetic resonance. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (D_t), pseudo-diffusion coefficient (D_p) and perfusion fraction (f) from a biexponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristic (ROC) analysis. The measurement variability of DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measures, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), D_t (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and D_p showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than the other parameters. However, D_p showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than D_p from the bi-exponential DWI model. • Acquisition of six b values is sufficient to obtain accurate DDC and α.
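The three signal models compared in the study have standard forms; a hedged sketch fitting the stretched exponential model to synthetic biexponential data (all parameter values are invented for illustration, not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard DWI signal models (S0 = signal at b = 0):
def mono(b, S0, adc):
    return S0 * np.exp(-b * adc)

def biexp(b, S0, f, Dp, Dt):
    return S0 * (f * np.exp(-b * Dp) + (1 - f) * np.exp(-b * Dt))

def stretched(b, S0, DDC, alpha):
    return S0 * np.exp(-(b * DDC) ** alpha)

# Synthetic nine-b-value acquisition (s/mm^2) with illustrative tissue values
b = np.array([0, 10, 30, 50, 100, 200, 400, 600, 800], float)
signal = biexp(b, S0=1.0, f=0.2, Dp=0.02, Dt=0.0012)

# Fit the stretched model to the biexponential signal; alpha < 1 absorbs
# the two diffusion scales into a single heterogeneity index.
popt, _ = curve_fit(stretched, b, signal, p0=[1.0, 0.002, 0.8],
                    bounds=([0.5, 1e-5, 0.3], [1.5, 0.1, 1.0]))
S0_fit, DDC_fit, alpha_fit = popt
```

This mirrors why the stretched model can be more stable: it needs one fewer free parameter than the biexponential, avoiding the poorly conditioned D_p estimate.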
Smoothing Forecasting Methods for Academic Library Circulations: An Evaluation and Recommendation.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Forys, John W., Jr.
1986-01-01
Circulation time-series data from 50 midwest academic libraries were used to test 110 variants of 8 smoothing forecasting methods. Data and methodologies and illustrations of two recommended methods--the single exponential smoothing method and Brown's one-parameter linear exponential smoothing method--are given. Eight references are cited. (EJS)
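The two recommended methods follow standard recursions; a minimal sketch with synthetic circulation counts (the study's data and smoothing-constant choices are not reproduced):

```python
def single_exponential_smoothing(y, alpha):
    """Single exponential smoothing: s_t = alpha*y_t + (1-alpha)*s_{t-1}.

    Returns the one-step-ahead forecast (the final smoothed level).
    """
    s = y[0]
    for obs in y[1:]:
        s = alpha * obs + (1 - alpha) * s
    return s

def brown_linear_smoothing(y, alpha):
    """Brown's one-parameter linear exponential smoothing (double smoothing).

    Smooths the series twice with the same constant, then combines the
    two smoothed values into a level and a trend for the forecast.
    """
    s1 = s2 = y[0]
    for obs in y:
        s1 = alpha * obs + (1 - alpha) * s1   # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2    # second smoothing
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + trend                      # one-step-ahead forecast

circ = [120, 125, 123, 130, 128, 135, 140, 138, 145, 150]  # synthetic monthly counts
f_single = single_exponential_smoothing(circ, alpha=0.3)
f_brown = brown_linear_smoothing(circ, alpha=0.3)
```

On trending data like this, Brown's method forecasts higher than single smoothing because the trend term corrects the lag inherent in a pure level estimate.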
Linear prediction and single-channel recording.
Carter, A A; Oswald, R E
1995-08-01
The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
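The core idea, that a uniformly sampled sum of p decaying exponentials yields a rank-p linear-prediction (Hankel) matrix, can be sketched as follows; the singular-value count then estimates the number of exponential components (rates and amplitudes below are illustrative, and the subsequent parameter-estimation step of the method is omitted):

```python
import numpy as np

# Noise-free dwell-time density on a uniform grid: two exponentials
t = np.arange(200) * 0.01
y = 0.7 * np.exp(-3.0 * t) + 0.3 * np.exp(-0.5 * t)

# A sum of p exponentials satisfies a p-term linear prediction recursion,
# so the Hankel data matrix has rank exactly p; SVD reveals that rank.
L = 50
H = np.array([y[i:i + L] for i in range(len(y) - L)])
sv = np.linalg.svd(H, compute_uv=False)

# Count singular values above numerical noise
n_components = int(np.sum(sv > 1e-8 * sv[0]))   # 2 for this signal
```

With noisy histograms the singular-value spectrum no longer drops to machine precision, and choosing the cutoff becomes the practical crux, which is where comparison against maximum-likelihood fits is informative.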
Ihlen, Espen A. F.; van Schooten, Kimberley S.; Bruijn, Sjoerd M.; Pijnappels, Mirjam; van Dieën, Jaap H.
2017-01-01
Over the last decades, various measures have been introduced to assess stability during walking. All of these measures assume that gait stability may be equated with exponential stability, where dynamic stability is quantified by a Floquet multiplier or Lyapunov exponent. These specific constructs of dynamic stability assume that the gait dynamics are time-independent and without phase transitions. In this case the temporal change in distance, d(t), between neighboring trajectories in state space is assumed to be an exponential function of time. However, results from walking models and empirical studies show that the assumptions of exponential stability break down in the vicinity of the phase transitions that are present in each step cycle. Here we apply a general non-exponential construct of gait stability, called fractional stability, which can define dynamic stability in the presence of phase transitions. Fractional stability employs the fractional indices, α and β, of the differential operator, which allow modeling of singularities in d(t) that cannot be captured by exponential stability. Fractional stability provided an improved fit of d(t) compared to exponential stability when applied to trunk accelerations during daily-life walking in community-dwelling older adults. Moreover, using multivariate empirical mode decomposition surrogates, we found that the singularities in d(t), which were well modeled by fractional stability, are created by phase-dependent modulation of gait. The new construct of fractional stability may represent a physiologically more valid concept of stability in the vicinity of phase transitions and may thus pave the way for a more unified concept of gait stability. PMID:28900400
NASA Astrophysics Data System (ADS)
Oberlack, Martin; Nold, Andreas; Sanjon, Cedric Wilfried; Wang, Yongqi; Hau, Jan
2016-11-01
Classical hydrodynamic stability theory for laminar shear flows, whether considering long-term stability or transient growth, is based on the normal-mode ansatz, or, in other words, on an exponential function in space (stream-wise direction) and time. Recently, it became clear that the normal-mode ansatz and the resulting Orr-Sommerfeld equation are based on essentially three fundamental symmetries of the linearized Euler and Navier-Stokes equations: translation in space and time and scaling of the dependent variable. Further, the Kelvin mode of linear shear flows seemed to be an exception in this context, as it admits a fourth symmetry resulting in the classical Kelvin mode, which is rather different from a normal mode. However, very recently it was discovered that most of the classical canonical shear flows such as linear shear, Couette, plane and round Poiseuille, Taylor-Couette, Lamb-Oseen vortex or the asymptotic suction boundary layer admit more symmetries. This, in turn, led to new problem-specific non-modal ansatz functions. In contrast to the exponential growth rate in time of the modal ansatz, the new non-modal ansatz functions usually lead to an algebraic growth or decay rate, while for the asymptotic suction boundary layer a double-exponential growth or decay is observed.
Transient photoresponse in amorphous In-Ga-Zn-O thin films under stretched exponential analysis
NASA Astrophysics Data System (ADS)
Luo, Jiajun; Adler, Alexander U.; Mason, Thomas O.; Bruce Buchholz, D.; Chang, R. P. H.; Grayson, M.
2013-04-01
We investigated transient photoresponse and Hall effect in amorphous In-Ga-Zn-O thin films and observed a stretched exponential response which allows characterization of the activation energy spectrum with only three fit parameters. Measurements of as-grown films and 350 K annealed films were conducted at room temperature by recording conductivity, carrier density, and mobility over day-long time scales, both under illumination and in the dark. Hall measurements verify approximately constant mobility, even as the photoinduced carrier density changes by orders of magnitude. The transient photoconductivity data fit well to a stretched exponential during both illumination and dark relaxation, but with slower response in the dark. The inverse Laplace transforms of these stretched exponentials yield the density of activation energies responsible for transient photoconductivity. An empirical equation is introduced, which determines the linewidth of the activation energy band from the stretched exponential parameter β. Dry annealing at 350 K is observed to slow the transient photoresponse.
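The three-parameter fit described here can be sketched with a standard stretched-exponential rise; a minimal example on synthetic data (the functional form is the usual Kohlrausch shape and all parameter values are invented, not the measured In-Ga-Zn-O values):

```python
import numpy as np
from scipy.optimize import curve_fit

# Stretched-exponential rise of conductivity under illumination:
#   sigma(t) = sigma_inf - (sigma_inf - sigma_0) * exp(-(t/tau)**beta)
# Three fit parameters: saturation value sigma_inf, time constant tau,
# and stretching exponent beta (sigma_0 is the measured dark value).
def stretched_rise(t, sigma_inf, tau, beta, sigma_0=1.0):
    return sigma_inf - (sigma_inf - sigma_0) * np.exp(-(t / tau) ** beta)

t = np.linspace(0.01, 100, 300)   # arbitrary time units
data = stretched_rise(t, sigma_inf=10.0, tau=8.0, beta=0.6)

popt, _ = curve_fit(stretched_rise, t, data, p0=[5.0, 5.0, 0.5],
                    bounds=(0, np.inf))
sigma_inf_fit, tau_fit, beta_fit = popt
```

The fitted β is the parameter from which the paper's empirical linewidth relation for the activation-energy band is then evaluated; smaller β corresponds to a broader energy distribution.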
Zhao, Kaihong
2018-12-01
In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of positive periodic solution is proved by employing the fixed point theorem on cones. By constructing appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.
Maji, Kaushik; Kouri, Donald J
2011-03-28
We have developed a new method for solving quantum dynamical scattering problems, using the time-independent Schrödinger equation (TISE), based on a novel method to generalize a "one-way" quantum mechanical wave equation, impose correct boundary conditions, and eliminate exponentially growing closed channel solutions. The approach is readily parallelized to achieve approximately N^2 scaling, where N is the number of coupled equations. The full two-way nature of the TISE is included while propagating the wave function in the scattering variable and the full S-matrix is obtained. The new algorithm is based on a "Modified Cayley" operator splitting approach, generalizing earlier work where the method was applied to the time-dependent Schrödinger equation. All scattering variable propagation approaches to solving the TISE involve solving a Helmholtz-type equation, and for more than one degree of freedom, these are notoriously ill-behaved, due to the unavoidable presence of exponentially growing contributions to the numerical solution. Traditionally, the method used to eliminate exponential growth has posed a major obstacle to the full parallelization of such propagation algorithms. We stabilize by using the Feshbach projection operator technique to remove all the nonphysical exponentially growing closed channels, while retaining all of the propagating open channel components, as well as exponentially decaying closed channel components.
Exponential Methods for the Time Integration of Schroedinger Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cano, B.; Gonzalez-Pachon, A.
2010-09-30
We consider exponential methods of second order in time to integrate the cubic nonlinear Schroedinger equation. We are interested in exploiting the special structure of this equation, and we therefore examine the symmetry, symplecticity, and invariant-preservation properties of the proposed methods, which allow integration to long times with reasonable accuracy. Computational efficiency is also our aim; we therefore carry out numerical computations to compare the methods considered, and conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool for integrating this equation.
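A minimal numpy sketch of an explicit Lawson scheme of this type (explicit midpoint applied in the Lawson variable, with each step projected back onto the sphere of constant L2 norm, an invariant of NLS); the grid, step size, and plane-wave test are illustrative assumptions, not the authors' experiments:

```python
import numpy as np

def nls_lawson_rk2(u0, dt, nsteps, length):
    """Second-order explicit Lawson scheme for the focusing cubic NLS
    i u_t + u_xx + |u|^2 u = 0 on a periodic domain, with norm projection."""
    n = u0.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)
    lam = -1j * k**2                        # linear part in Fourier space
    E, Eh = np.exp(lam * dt), np.exp(lam * dt / 2)
    nonlin = lambda v: 1j * np.abs(v) ** 2 * v
    norm0 = np.linalg.norm(u0)
    u = u0.copy()
    for _ in range(nsteps):
        uh = np.fft.fft(u)
        k1 = np.fft.fft(nonlin(u))
        umid = np.fft.ifft(Eh * (uh + 0.5 * dt * k1))   # midpoint stage
        k2 = np.fft.fft(nonlin(umid))
        u = np.fft.ifft(E * uh + dt * Eh * k2)
        u *= norm0 / np.linalg.norm(u)      # projection on the norm sphere
    return u

# Plane wave u = exp(ix) is a stationary solution (omega = A^2 - k^2 = 0)
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u0 = np.exp(1j * x)
u = nls_lawson_rk2(u0, dt=0.01, nsteps=100, length=2 * np.pi)
err = np.max(np.abs(u - u0))
```

The projection makes norm conservation exact by construction, while the Lawson-RK2 step keeps the phase error at second order.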
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as: progression for >15% volume increase, regression for ≥15% decrease, and stabilization for ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% in the 36-month (range 18–87 months) follow-up period (mean volume change of −43.3%). Volume regression (mean decrease of −50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of −3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients, and it was not predictive of eventual volume regression or progression.
A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled. PMID:28121913
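The three-point idea can be made concrete: for volumes V0, V1, V2 measured at equally spaced times, the decay model V(t) = V∞ + (V0 − V∞)·exp(−kt) is determined in closed form. The numbers below are invented for illustration, not patient data:

```python
import math

def three_point_exponential(v0, v1, v2, dt):
    """Recover plateau volume and decay rate of V(t) = Vinf + C*exp(-k*t)
    from three measurements at times 0, dt, 2*dt.  With r = exp(-k*dt):
    v1-v2 = r*(v0-v1), and Vinf = (v0*v2 - v1^2)/(v0 + v2 - 2*v1)."""
    r = (v1 - v2) / (v0 - v1)
    vinf = (v0 * v2 - v1 * v1) / (v0 + v2 - 2.0 * v1)
    k = -math.log(r) / dt
    return vinf, k

# Invented example: Vinf = 4, C = 6, halving every 6 months
vinf, k = three_point_exponential(10.0, 7.0, 5.5, 6.0)
```

Noise in the measurements propagates strongly through the ratio r, so in practice one would fit all available time points rather than exactly three.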
A New Labor Theory of Value for Rational Planning Through Use of the Bourgeois Profit Rate
Weizsäcker, C. C. Von; Samuelson, Paul A.
1971-01-01
To maximize steady-state per capita consumption, goods should be valued at their “synchronized labor requirement costs”, which are shown to deviate from Marx's schemata of “values” but to coincide with bourgeois prices calculated at dated labor requirements, marked up by compound interest at a profit or interest rate equal to the system's rate of exponential growth. With capitalists saving all their incomes for future profits, workers get all there is to get. Departures from such an exogenous, or endogenous, golden-rule state are the rule in history rather than the exception. In the case of exponential labor-augmenting change, it is shown that competitive prices will equal historically embodied labor content. PMID:16591926
Fundamental Flux Equations for Fracture-Matrix Interactions with Linear Diffusion
NASA Astrophysics Data System (ADS)
Oldenburg, C. M.; Zhou, Q.; Rutqvist, J.; Birkholzer, J. T.
2017-12-01
The conventional dual-continuum models are only applicable for late-time behavior of pressure propagation in fractured rock, while discrete-fracture-network models may explicitly deal with matrix blocks at high computational expense. To address these issues, we developed a unified-form diffusive flux equation for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular matrix blocks (squares, cubes, rectangles, and rectangular parallelepipeds) by partitioning the entire dimensionless-time domain (Zhou et al., 2017a, b). For each matrix block, this flux equation consists of the early-time solution up until a switch-over time after which the late-time solution is applied to create continuity from early to late time. The early-time solutions are based on three-term polynomial functions in terms of square root of dimensionless time, with the coefficients dependent on dimensionless area-to-volume ratio and aspect ratios for rectangular blocks. For the late-time solutions, one exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic blocks. The time-partitioning method was also used for calculating pressure/concentration/temperature distribution within a matrix block. The approximate solution contains an error-function solution for early times and an exponential solution for late times, with relative errors less than 0.003. These solutions form the kernel of multirate and multidimensional hydraulic, solute and thermal diffusion in fractured reservoirs.
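The time-partitioning idea can be illustrated for the simplest geometry, a 1D slab with a constant boundary condition: the early-time square-root solution and a one-term late-time exponential overlap near a dimensionless switch-over time. The expressions below are the standard slab-diffusion results (e.g., Crank), used as a sketch rather than the authors' fitted polynomial coefficients:

```python
import numpy as np

def uptake_series(td, nterms=50):
    """Exact fractional uptake for a slab (half-thickness 1, D = 1)."""
    n = np.arange(nterms)
    a = (2 * n + 1) ** 2 * np.pi ** 2
    return 1.0 - np.sum(8.0 / a * np.exp(-a * td / 4.0))

early = lambda td: 2.0 * np.sqrt(td / np.pi)   # early-time sqrt solution
late = lambda td: 1.0 - (8.0 / np.pi**2) * np.exp(-np.pi**2 * td / 4.0)

td_switch = 0.2   # assumed switch-over time, for illustration only
```

Near td ≈ 0.2 both approximations agree with the exact series to a few parts in a thousand, so switching between them preserves continuity of the flux, which is the essence of the partitioning approach.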
Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel
2012-06-01
We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model in simplest form with exponential dwell times has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.
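The exponential-dwell-time special case (the classic telegraph model) can be simulated with a minimal Gillespie-type algorithm; the rate constants below are illustrative assumptions, and the time-averaged copy number is checked against the known steady-state mean (kon/(kon+koff))·ktx/γ:

```python
import numpy as np

def telegraph_ssa(kon, koff, ktx, gamma, t_end, rng):
    """Stochastic simulation of gene switching (off<->on), transcription
    while on, and first-order mRNA degradation; returns the
    time-averaged mRNA copy number."""
    t, on, m = 0.0, 0, 0
    acc = 0.0                                # integral of m dt
    while t < t_end:
        rates = np.array([kon * (1 - on), koff * on, ktx * on, gamma * m])
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        if t + dt > t_end:
            acc += m * (t_end - t)
            break
        acc += m * dt
        t += dt
        r = rng.choice(4, p=rates / total)
        if r == 0: on = 1
        elif r == 1: on = 0
        elif r == 2: m += 1
        else: m -= 1
    return acc / t_end

rng = np.random.default_rng(1)
mean_m = telegraph_ssa(kon=1.0, koff=1.0, ktx=20.0, gamma=1.0,
                       t_end=1000.0, rng=rng)
# Theory: steady-state mean = (kon/(kon+koff)) * ktx/gamma = 10
```

Replacing `rng.exponential` draws for the switching events by draws from arbitrary dwell-time distributions gives the generalization the paper analyzes.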
Wang, Fen; Chen, Yuanlong; Liu, Meichun
2018-02-01
Stochastic memristor-based bidirectional associative memory (BAM) neural networks with time delays play an increasingly important role in the design and implementation of neural network systems. Under the framework of Filippov solutions, the issues of the pth moment exponential stability of stochastic memristor-based BAM neural networks are investigated. By using the stochastic stability theory, Itô's differential formula and Young inequality, the criteria are derived. Meanwhile, with Lyapunov approach and Cauchy-Schwarz inequality, we derive some sufficient conditions for the mean square exponential stability of the above systems. The obtained results improve and extend previous works on memristor-based or usual neural networks dynamical systems. Four numerical examples are provided to illustrate the effectiveness of the proposed results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Exact simulation of integrate-and-fire models with exponential currents.
Brette, Romain
2007-10-01
Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. It applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
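For the special case of one synaptic current with τs = τm/2, the membrane potential between spikes is a polynomial in x = exp(−t/τm), so the next threshold crossing reduces to polynomial root finding, in the spirit of the note. The parameter values are illustrative assumptions:

```python
import numpy as np

tau_m, tau_s = 1.0, 0.5          # membrane and synaptic time constants
v0, i0, theta = 0.0, 5.0, 1.0    # initial V, initial current, threshold

# dV/dt = -V/tau_m + i0*exp(-t/tau_s); with a = 1/tau_m, b = 1/tau_s:
a, b = 1.0 / tau_m, 1.0 / tau_s
c = i0 / (a - b)                 # particular-solution coefficient
# V(t) = (v0 - c)*e^{-a t} + c*e^{-b t}; since b = 2a, set x = e^{-a t}:
# threshold crossing solves c*x^2 + (v0 - c)*x - theta = 0
roots = np.roots([c, v0 - c, -theta])
# earliest crossing = largest admissible root x in (0, 1]
x = max(r.real for r in roots if abs(r.imag) < 1e-12 and 0 < r.real <= 1)
t_spike = -np.log(x) / a

def v(t):
    return (v0 - c) * np.exp(-a * t) + c * np.exp(-b * t)
```

With several synaptic time constants that are rational multiples of τm, the same substitution yields a higher-degree polynomial, which is exactly where a general root finder earns its keep.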
Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong
2018-08-01
This paper focuses on stochastic exponential synchronization of delayed memristive neural networks (MNNs) with the aid of systems with interval parameters, which are established by using the concept of the Filippov solution. A new intermittent controller and an adaptive controller with logarithmic quantization are constructed to deal with the difficulties induced by time-varying delays, interval parameters, and stochastic perturbations simultaneously. Moreover, these controllers not only reduce the control cost but also save communication channels and bandwidth. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established to realize the exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
Li, Xiaofan; Fang, Jian-An; Li, Huiyuan
2017-09-01
This paper investigates master-slave exponential synchronization for a class of complex-valued memristor-based neural networks with time-varying delays via discontinuous impulsive control. Firstly, the master and slave complex-valued memristor-based neural networks with time-varying delays are translated to two real-valued memristor-based neural networks. Secondly, an impulsive control law is constructed and utilized to guarantee master-slave exponential synchronization of the neural networks. Thirdly, the master-slave synchronization problems are transformed into the stability problems of the master-slave error system. By employing linear matrix inequality (LMI) technique and constructing an appropriate Lyapunov-Krasovskii functional, some sufficient synchronization criteria are derived. Finally, a numerical simulation is provided to illustrate the effectiveness of the obtained theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zhang, Wei; Huang, Tingwen; He, Xing; Li, Chuandong
2017-11-01
In this study, we investigate the global exponential stability of inertial memristor-based neural networks with impulses and time-varying delays. We construct inertial memristor-based neural networks based on the characteristics of the inertial neural networks and memristor. Impulses with and without delays are considered when modeling the inertial neural networks simultaneously, which are of great practical significance in the current study. Some sufficient conditions are derived under the framework of the Lyapunov stability method, as well as an extended Halanay differential inequality and a new delay impulsive differential inequality, which depend on impulses with and without delays, in order to guarantee the global exponential stability of the inertial memristor-based neural networks. Finally, two numerical examples are provided to illustrate the efficiency of the proposed methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.
1986-01-01
A computer implementation of Prony's curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capability due to the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) to obtain the equal time increments. The resulting information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least squares solution can be applied to obtain the final form of the equation.
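The classical Prony procedure on equally spaced samples can be sketched compactly: a linear-prediction solve, polynomial roots for the exponents, then a least-squares solve for the amplitudes. The two-mode synthetic signal is an illustrative assumption, with no claim to reproduce the I.G.D.S. program:

```python
import numpy as np

def prony(y, p, dt):
    """Fit y[n] = sum_i amp_i * exp(s_i * n * dt) to equally spaced samples
    via Prony's method: linear prediction, root finding, then a linear
    least-squares solve for the amplitudes."""
    n = len(y)
    # Linear prediction: y[m] = sum_{k=1..p} a_k * y[m-k]
    A = np.column_stack([y[p - 1 - k : n - 1 - k] for k in range(p)])
    a = np.linalg.lstsq(A, y[p:], rcond=None)[0]
    mu = np.roots(np.concatenate(([1.0], -a)))   # z^p - a_1 z^{p-1} - ...
    s = np.log(mu.astype(complex)) / dt          # exponential constants
    V = np.power.outer(mu.astype(complex), np.arange(n)).T  # V[m,i] = mu_i^m
    amp = np.linalg.lstsq(V, y.astype(complex), rcond=None)[0]
    return s, amp

dt = 0.1
t = dt * np.arange(30)
y = 2.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-1.5 * t)  # two-mode test signal
s, amp = prony(y, p=2, dt=dt)
```

On noise-free data with the correct model order p the recovery is essentially exact; with noise, the linear-prediction step becomes ill-conditioned and the least-squares solves do the regularizing work.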
Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation
NASA Astrophysics Data System (ADS)
Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien
2018-04-01
We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.
NASA Astrophysics Data System (ADS)
Wilde, M. V.; Sergeeva, N. V.
2018-05-01
An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Fractional exponential Rabotnov integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying the Laplace and Fourier transforms. The simplified equations for the originals are written by using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation allows the explicit model to be formulated using a fractional exponential Rabotnov integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all possible time domains.
Humans Can Adopt Optimal Discounting Strategy under Real-Time Constraints
Schweighofer, N; Shishida, K; Han, C. E; Okamoto, Y; Tanaka, S. C; Yamawaki, S; Doya, K
2006-01-01
Critical to our many daily choices between larger delayed rewards, and smaller more immediate rewards, are the shape and the steepness of the function that discounts rewards with time. Although research in artificial intelligence favors exponential discounting in uncertain environments, studies with humans and animals have consistently shown hyperbolic discounting. We investigated how humans perform in a reward decision task with temporal constraints, in which each choice affects the time remaining for later trials, and in which the delays vary at each trial. We demonstrated that most of our subjects adopted exponential discounting in this experiment. Further, we confirmed analytically that exponential discounting, with a decay rate comparable to that used by our subjects, maximized the total reward gain in our task. Our results suggest that the particular shape and steepness of temporal discounting is determined by the task that the subject is facing, and question the notion of hyperbolic reward discounting as a universal principle. PMID:17096592
Voter model with non-Poissonian interevent intervals
NASA Astrophysics Data System (ADS)
Takaguchi, Taro; Masuda, Naoki
2011-09-01
Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.
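The memory effect invoked above is visible directly in the interevent-interval statistics: for a power-law distribution, the expected remaining waiting time grows with the time already elapsed, while for the exponential it is constant. A small Monte Carlo sketch (the distribution parameters are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# Interevent intervals: exponential (memoryless) vs Pareto (power law)
expo = rng.exponential(1.0, n)
pareto = 1.0 + rng.pareto(3.0, n)      # Pareto with x_min = 1, alpha = 3

def mean_residual(samples, elapsed):
    """Mean remaining waiting time, given the interval already lasted `elapsed`."""
    tail = samples[samples > elapsed]
    return tail.mean() - elapsed

# Memorylessness: residual stays ~1 for the exponential at any elapsed time;
# for the Pareto it grows as elapsed/(alpha - 1), i.e. links idle for a long
# time are expected to stay idle even longer.
```

This growing residual is exactly why power-law interevent intervals slow consensus on the ring in the voter-model simulations.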
Kim, Sangdan; Han, Suhee
2010-01-01
Most of the literature on designing urban non-point-source management systems assumes that precipitation event depths follow the 1-parameter exponential probability density function, to reduce the mathematical complexity of the derivation process. However, how the rainfall is expressed is the most important factor in analyzing stormwater; thus, a better mathematical expression representing the probability distribution of rainfall depths is suggested in this study. Also, the rainfall-runoff calculation procedure required for deriving a stormwater-capture curve is modified to use the U.S. Natural Resources Conservation Service (Washington, D.C.) (NRCS) runoff curve number method, in order to account for the nonlinearity of the rainfall-runoff relation and, at the same time, obtain a more verifiable and representative design curve when applying it to urban drainage areas with complicated land-use characteristics, such as occur in Korea. The result of developing the stormwater-capture curve from rainfall data in Busan, Korea, confirms that the methodology suggested in this study provides a better solution than the pre-existing one.
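The curve number step referred to above is the standard SCS-CN rainfall-runoff relation; a minimal sketch in millimetres, using the conventional initial abstraction Ia = 0.2S (an assumption of this sketch, not a detail taken from the study):

```python
def scs_runoff(p_mm, cn):
    """SCS curve number runoff depth (mm) for rainfall event depth p_mm."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction
    if p_mm <= ia:
        return 0.0                    # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# A fully impervious surface (CN = 100) converts all rainfall to runoff,
# while lower curve numbers give the nonlinear rainfall-runoff relation
# that the exponential rainfall model must be convolved with.
```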
NASA Astrophysics Data System (ADS)
Bostrom, G.; Atkinson, D.; Rice, A.
2015-04-01
Cavity ringdown spectroscopy (CRDS) uses the exponential decay constant of light exiting a high-finesse resonance cavity to determine analyte concentration, typically via absorption. We present a high-throughput data acquisition system that determines the decay constant in near real time using the discrete Fourier transform algorithm on a field programmable gate array (FPGA). A commercially available, high-speed, high-resolution, analog-to-digital converter evaluation board system is used as the platform for the system, after minor hardware and software modifications. The system outputs decay constants at maximum rate of 4.4 kHz using an 8192-point fast Fourier transform by processing the intensity decay signal between ringdown events. We present the details of the system, including the modifications required to adapt the evaluation board to accurately process the exponential waveform. We also demonstrate the performance of the system, both stand-alone and incorporated into our existing CRDS system. Details of FPGA, microcontroller, and circuitry modifications are provided in the Appendix and computer code is available upon request from the authors.
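The underlying relation can be sketched with an offline FFT: for a sampled decay A·exp(−t/τ), the low-frequency DFT bins approximate the continuous transform 1/(1/τ + iω), so τ follows from the ratio of imaginary to real parts of a single bin. This is an illustrative numpy version of the idea only; the paper's system performs the computation on an FPGA in real time:

```python
import numpy as np

tau_true = 1.0
T, npts = 20.0, 20000                 # record length >> tau, fine sampling
t = np.linspace(0.0, T, npts, endpoint=False)
y = np.exp(-t / tau_true)

Y = np.fft.rfft(y) * (T / npts)       # approximates the continuous FT
omega1 = 2 * np.pi / T                # angular frequency of bin 1
# FT of exp(-t/tau) is 1/(1/tau + i*omega), so tau = -Im/(omega * Re)
tau_est = -Y[1].imag / (omega1 * Y[1].real)
```

Because only a few bins are needed, the transform length can be fixed in hardware and the division done between ringdown events, which is what makes kHz-rate decay-constant output feasible.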
Kim, Keonwook
2013-01-01
The generic properties of an acoustic signal provide numerous benefits for localization by applying energy-based methods over a deployed wireless sensor network (WSN). However, the signal generated by a stationary target utilizes a significant amount of bandwidth and power in the system without providing further position information. For vehicle localization, this paper proposes a novel proximity velocity vector estimator (PVVE) node architecture in order to capture the energy from a moving vehicle and reject the signal from motionless automobiles around the WSN node. A cascade structure between analog envelope detector and digital exponential smoothing filter presents the velocity vector-sensitive output with low analog circuit and digital computation complexity. The optimal parameters in the exponential smoothing filter are obtained by analytical and mathematical methods for maximum variation over the vehicle speed. For stationary targets, the derived simulation based on the acoustic field parameters demonstrates that the system significantly reduces the communication requirements with low complexity and can be expected to extend the operation time considerably. PMID:23979482
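The digital stage of the cascade is a first-order exponential smoothing filter; a minimal sketch showing how the smoothing parameter trades responsiveness against noise rejection (the step input and parameter values are illustrative, not the optimized values derived in the paper):

```python
def exponential_smoothing(samples, alpha):
    """First-order exponential smoothing: out[n] = alpha*x[n] + (1-alpha)*out[n-1].
    Small alpha -> heavy smoothing, slow response; large alpha -> fast response."""
    out = []
    state = samples[0]
    for x in samples:
        state = alpha * x + (1.0 - alpha) * state
        out.append(state)
    return out

# A step input (envelope of an approaching vehicle) illustrates the tradeoff
step = [0.0] * 5 + [1.0] * 20
slow = exponential_smoothing(step, 0.1)
fast = exponential_smoothing(step, 0.6)
```

Choosing alpha to maximize the output variation over vehicle speed, as the paper does analytically, amounts to placing this single pole against the expected envelope rise time.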
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.
2015-01-01
Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
Evo-SETI: A Mathematical Tool for Cladistics, Evolution, and SETI
Maccone, Claudio
2017-01-01
The discovery of new exoplanets makes us wonder where each new exoplanet stands along its way to develop life as we know it on Earth. Our Evo-SETI Theory is a mathematical way to face this problem. We describe cladistics and evolution by virtue of a few statistical equations based on lognormal probability density functions (pdf) in time. We call b-lognormal a lognormal pdf starting at instant b (birth). Then, the lifetime of any living being becomes a suitable b-lognormal in time. Next, our “Peak-Locus Theorem” translates cladistics: each species created by evolution is a b-lognormal whose peak lies on the exponentially growing number of living species. This exponential is the mean value of a stochastic process called “Geometric Brownian Motion” (GBM). Past mass extinctions were all lows of this GBM. In addition, the Shannon Entropy (with a reversed sign) of each b-lognormal is the measure of how evolved that species is, and we call it EvoEntropy. The “molecular clock” is re-interpreted as the EvoEntropy straight line in time whenever the mean value is exactly the GBM exponential. We were also able to extend the Peak-Locus Theorem to any mean value other than the exponential. For example, we derive in this paper for the first time the EvoEntropy corresponding to the Markov-Korotayev (2007) “cubic” evolution: a curve of logarithmic increase. PMID:28383497
Ammonia levels in the whelping nests of farmed raccoon dogs and polecats.
Korhonen, H; Harri, M
1986-01-01
Ammonia concentrations were measured in the nests of farmed raccoon dogs (Nyctereutes procyonoides Gray, 1834) and polecats (Mustela putorius) at weaning time. Ammonia levels in the nests of raccoon dogs and polecats varied from 1 to 43 ppm and from 0 to 5 ppm, respectively. In the raccoon dog, the ammonia concentration tended to increase exponentially with litter size. In the polecat, no marked relationship between litter size and ammonia levels was found. The results show that no special adaptations are required in farm life, because even the highest ammonia concentrations measured were below the harmful level.
NASA Astrophysics Data System (ADS)
Morishita, Tetsuya
2012-07-01
We report a first-principles molecular-dynamics study of the relaxation dynamics in liquid silicon (l-Si) over a wide temperature range (1000-2200 K). We find that the intermediate scattering function for l-Si exhibits a compressed exponential decay above 1200 K including the supercooled regime, which is in stark contrast to that for normal "dense" liquids which typically show stretched exponential decay in the supercooled regime. The coexistence of particles having ballistic-like motion and those having diffusive-like motion is demonstrated, which accounts for the compressed exponential decay in l-Si. An attempt to elucidate the crossover from the ballistic to the diffusive regime in the "time-dependent" diffusion coefficient is made and the temperature-independent universal feature of the crossover is disclosed.
A nanostructured surface increases friction exponentially at the solid-gas interface.
Phani, Arindam; Putkaradze, Vakhtang; Hawk, John E; Prashanthi, Kovur; Thundat, Thomas
2016-09-06
According to Stokes' law, a moving solid surface experiences viscous drag that is linearly related to its velocity and the viscosity of the medium. The viscous interactions result in dissipation that is known to scale as the square root of the kinematic viscosity times the density of the gas. We observed that when an oscillating surface is modified with nanostructures, the experimentally measured dissipation shows an exponential dependence on kinematic viscosity. The surface nanostructures alter solid-gas interplay greatly, amplifying the dissipation response exponentially for even minute variations in viscosity. Nanostructured resonator thus allows discrimination of otherwise narrow range of gaseous viscosity making dissipation an ideal parameter for analysis of a gaseous media. We attribute the observed exponential enhancement to the stochastic nature of interactions of many coupled nanostructures with the gas media.
NASA Astrophysics Data System (ADS)
Lu, Tiao; Cai, Wei
2008-10-01
In this paper, we propose a high order Fourier spectral-discontinuous Galerkin method for time-dependent Schrödinger-Poisson equations in 3-D spaces. The Fourier spectral Galerkin method is used for the two periodic transverse directions and a high order discontinuous Galerkin method for the longitudinal propagation direction. Such a combination results in a diagonal form for the differential operators along the transverse directions and a flexible method to handle the discontinuous potentials present in quantum heterojunction and superlattice structures. As the derivative matrices are required for various time integration schemes such as the exponential time differencing and Crank-Nicolson methods, explicit derivative matrices of the discontinuous Galerkin method of various orders are derived. Numerical results, using the proposed method with various time integration schemes, are provided to validate the method.
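The exponential-time-differencing machinery mentioned above can be sketched for a symmetric linear operator via an eigendecomposition: a first-order ETD step u_{n+1} = e^{L dt} u_n + L^{-1}(e^{L dt} − I) N(u_n), which is exact when the nonlinear term is constant. The small operator and forcing below are illustrative assumptions, not the paper's DG matrices:

```python
import numpy as np

def etd1_step(u, b, Q, lam, dt):
    """One ETD1 step for u' = L u + b, with L = Q diag(lam) Q^T symmetric:
    u_{n+1} = e^{L dt} u_n + L^{-1}(e^{L dt} - I) b."""
    e = np.exp(lam * dt)
    phi1 = (e - 1.0) / lam               # entrywise (e^{lam dt} - 1)/lam
    return Q @ (e * (Q.T @ u)) + Q @ (phi1 * (Q.T @ b))

L = np.array([[-2.0, 1.0], [1.0, -2.0]])   # symmetric, negative definite
b = np.array([1.0, 0.0])                   # constant "nonlinear" term
lam, Q = np.linalg.eigh(L)
u = np.zeros(2)
for _ in range(10):
    u = etd1_step(u, b, Q, lam, dt=0.1)

# Exact solution at T = 1 from u(0) = 0: u = L^{-1}(e^{LT} - I) b
eT = np.exp(lam * 1.0)
u_exact = Q @ (((eT - 1.0) / lam) * (Q.T @ b))
```

In the spectral-DG setting the same phi-functions are applied to the explicit derivative matrices the paper derives, which is why those matrices are needed in closed form.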
Matrix exponential-based closures for the turbulent subgrid-scale stress tensor.
Li, Yi; Chevillard, Laurent; Eyink, Gregory; Meneveau, Charles
2009-01-01
Two approaches for closing the turbulence subgrid-scale stress tensor in terms of matrix exponentials are introduced and compared. The first approach is based on a formal solution of the stress transport equation in which the production terms can be integrated exactly in terms of matrix exponentials. This formal solution of the subgrid-scale stress transport equation is shown to be useful to explore special cases, such as the response to constant velocity gradient, but neglecting pressure-strain correlations and diffusion effects. The second approach is based on an Eulerian-Lagrangian change of variables, combined with the assumption of isotropy for the conditionally averaged Lagrangian velocity gradient tensor and with the recent fluid deformation approximation. It is shown that both approaches lead to the same basic closure in which the stress tensor is expressed as the matrix exponential of the resolved velocity gradient tensor multiplied by its transpose. Short-time expansions of the matrix exponentials are shown to provide an eddy-viscosity term and particular quadratic terms, and thus allow a reinterpretation of traditional eddy-viscosity and nonlinear stress closures. The basic feasibility of the matrix-exponential closure is illustrated by implementing it successfully in large eddy simulation of forced isotropic turbulence. The matrix-exponential closure employs the drastic approximation of entirely omitting the pressure-strain correlation and other nonlinear scrambling terms. But unlike eddy-viscosity closures, the matrix exponential approach provides a simple and local closure that can be derived directly from the stress transport equation with the production term, and using physically motivated assumptions about Lagrangian decorrelation and upstream isotropy.
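The closure described above reduces, at leading order, to an eddy-viscosity form. A minimal numerical sketch of that check (not the authors' implementation; the tensor values and the decorrelation time gamma are illustrative) is:

```python
import numpy as np

def expm(M, terms=20):
    """Matrix exponential via truncated Taylor series (adequate for small ||M||)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# resolved velocity-gradient tensor A (traceless, as in incompressible flow)
A = np.array([[0.10, 0.30, 0.00],
              [-0.20, 0.05, 0.10],
              [0.10, 0.00, -0.15]])
gamma = 1e-3  # assumed short Lagrangian decorrelation time

# the closure: stress proportional to expm(gamma*A) @ expm(gamma*A.T)
tau = expm(gamma * A) @ expm(gamma * A.T)

# the short-time expansion recovers an eddy-viscosity-like term, I + 2*gamma*S
S = 0.5 * (A + A.T)
assert np.allclose(tau, np.eye(3) + 2 * gamma * S, atol=1e-5)
```

The expansion makes explicit how the matrix-exponential form contains the traditional strain-rate (eddy-viscosity) term as its leading contribution.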
Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.
2018-01-01
Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered. PMID:29780184
Unfolding of Ubiquitin Studied by Picosecond Time-Resolved Fluorescence of the Tyrosine Residue
Noronha, Melinda; Lima, João C.; Bastos, Margarida; Santos, Helena; Maçanita, António L.
2004-01-01
The photophysics of the single tyrosine in bovine ubiquitin (UBQ) was studied by picosecond time-resolved fluorescence spectroscopy, as a function of pH and along thermal and chemical unfolding, with the following results: First, at room temperature (25°C) and below pH 1.5, native UBQ shows single-exponential decays. From pH 2 to 7, triple-exponential decays were observed and the three decay times were attributed to the presence of tyrosine, a tyrosine-carboxylate hydrogen-bonded complex, and excited-state tyrosinate. Second, at pH 1.5, the water-exposed tyrosine of either thermally or chemically unfolded UBQ decays as a sum of two exponentials. The double-exponential decays were interpreted and analyzed in terms of excited-state intramolecular electron transfer from the phenol to the amide moiety, occurring in one of the three rotamers of tyrosine in UBQ. The values of the rate constants indicate the presence of different unfolded states and an increase in the mobility of the tyrosine residue during unfolding. Finally, from the pre-exponential coefficients of the fluorescence decays, the unfolding equilibrium constants (KU) were calculated, as a function of temperature or denaturant concentration. Despite the presence of different unfolded states, both thermal and chemical unfolding data of UBQ could be fitted to a two-state model. The thermodynamic parameters Tm = 54.6°C, ΔH(Tm) = 56.5 kcal/mol, and ΔCp = 890 cal/(mol·K) were determined from the unfolding equilibrium constants calculated accordingly, and compared to values obtained by differential scanning calorimetry also under the assumption of a two-state transition, Tm = 57.0°C, ΔHm = 51.4 kcal/mol, and ΔCp = 730 cal/(mol·K). PMID:15454455
The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds
NASA Astrophysics Data System (ADS)
Li, Zhi; Brissette, Fancois; Chen, Jie
2013-04-01
Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled with a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, due to its simplicity and good performance. However, various probability distributions have been reported to simulate precipitation amount, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amount. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska watersheds) in the province of Quebec (Canada) are used for this assessment. Various indices or statistics, such as the mean, variance, frequency distribution and extreme values are used to quantify the performance in simulating the precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated to the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of using functions with more parameters is not nearly as obvious, the mixed exponential distribution appears nonetheless as the best candidate for hydrological modeling.
The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
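To see why a three-parameter mixed exponential can outperform a one-parameter exponential for daily amounts, a small numpy sketch helps (the mixture parameters are illustrative, not fitted to the Quebec data): matching the mean is all a single exponential can do, so it misses the variance contributed by heavy events.

```python
import numpy as np

rng = np.random.default_rng(42)

# mixed-exponential daily precipitation amounts (illustrative parameters):
# weight w of a light-rain component, plus a heavy component for extremes
w, mean1, mean2 = 0.7, 2.0, 15.0   # mm
n = 100_000
light = rng.random(n) < w
amounts = np.where(light, rng.exponential(mean1, n), rng.exponential(mean2, n))

# a single exponential has one parameter, fixed by the sample mean
exp_mean = amounts.mean()          # approx w*mean1 + (1-w)*mean2 = 5.9 mm
exp_var = exp_mean**2              # variance of an exponential equals mean^2

# the mixture's variance is much larger than the exponential fit allows
mix_var = amounts.var()
assert mix_var > 1.5 * exp_var
```

This mean/variance mismatch is exactly the kind of statistic (and downstream extreme-value behavior) on which the multi-parameter distributions score better.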
Problems Relating Mathematics and Science in the High School.
ERIC Educational Resources Information Center
Morrow, Richard; Beard, Earl
This document contains various science problems which require a mathematical solution. The problems are arranged under two general areas. The first (algebra I) contains biology, chemistry, and physics problems which require solutions related to linear equations, exponentials, and nonlinear equations. The second (algebra II) contains physics…
Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Likewise, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
Markov chains at the interface of combinatorics, computing, and statistical physics
NASA Astrophysics Data System (ADS)
Streib, Amanda Pascoe
The fields of statistical physics, discrete probability, combinatorics, and theoretical computer science have converged around efforts to understand random structures and algorithms. Recent activity in the interface of these fields has enabled tremendous breakthroughs in each domain and has supplied a new set of techniques for researchers approaching related problems. This thesis makes progress on several problems in this interface whose solutions all build on insights from multiple disciplinary perspectives. First, we consider a dynamic growth process arising in the context of DNA-based self-assembly. The assembly process can be modeled as a simple Markov chain. We prove that the chain is rapidly mixing for large enough bias in regions of Z^d. The proof uses a geometric distance function and a variant of path coupling in order to handle distances that can be exponentially large. We also provide the first results in the case of fluctuating bias, where the bias can vary depending on the location of the tile, which arises in the nanotechnology application. Moreover, we use intuition from statistical physics to construct a choice of the biases for which the Markov chain M_mon requires exponential time to converge. Second, we consider a related problem regarding the convergence rate of biased permutations that arises in the context of self-organizing lists. The Markov chain M_nn in this case is a nearest-neighbor chain that allows adjacent transpositions, and the rate of these exchanges is governed by various input parameters. It was conjectured that the chain is always rapidly mixing when the inversion probabilities are positively biased, i.e., we put nearest-neighbor pair x < y in order with bias 1/2 ≤ p_xy ≤ 1 and out of order with bias 1 - p_xy. The Markov chain M_mon was known to have connections to a simplified version of this biased card-shuffling.
We provide new connections between M_nn and M_mon by using simple combinatorial bijections, and we prove that M_nn is always rapidly mixing for two general classes of positively biased {p_xy}. More significantly, we also prove that the general conjecture is false by exhibiting values for the p_xy, with 1/2 ≤ p_xy ≤ 1 for all x < y, but for which the transposition chain will require exponential time to converge. Finally, we consider a model of colloids, which are binary mixtures of molecules with one type of molecule suspended in another. It is believed that at low density typical configurations will be well-mixed throughout, while at high density they will separate into clusters. This clustering has proved elusive to verify, since all local sampling algorithms are known to be inefficient at high density, and in fact a new nonlocal algorithm was recently shown to require exponential time in some cases. We characterize the high and low density phases for a general family of discrete interfering binary mixtures by showing that they exhibit a "clustering property" at high density and not at low density. The clustering property states that there will be a region that has very high area, very small perimeter, and high density of one type of molecule. Special cases of interfering binary mixtures include the Ising model at fixed magnetization and independent sets.
Development and growth of fruit bodies and crops of the button mushroom, Agaricus bisporus.
Straatsma, Gerben; Sonnenberg, Anton S M; van Griensven, Leo J L D
2013-10-01
We studied the appearance of fruit body primordia, the growth of individual fruit bodies and the development of the consecutive flushes of the crop. Relative growth, measured as cap expansion, was not constant. It started extremely rapidly, and slowed down to an exponential rate with diameter doubling of 1.7 d until fruit bodies showed maturation by veil breaking. Initially many outgrowing primordia were arrested, indicating nutritional competition. After reaching 10 mm diameter, no growth arrest occurred; all growing individuals, whether relatively large or small, showed an exponential increase of both cap diameter and biomass, until veil breaking. Biomass doubled in 0.8 d. Exponential growth indicates the absence of competition. Apparently there exist differential nutritional requirements for early growth and for later, continuing growth. Flushing was studied applying different picking sizes. An ordinary flushing pattern occurred at an immature picking size of 8 mm diameter (picking mushrooms once a day with a diameter above 8 mm). The smallest picking size yielded the highest number of mushrooms picked, confirming the competition and arrested growth of outgrowing primordia: competition seems less if outgrowing primordia are removed early. The flush duration (i.e. between the first and last picking moments) was not affected by picking size. At small picking size, the subsequent flushes were not fully separated in time but overlapped. Within 2 d after picking the first individuals of the first flush, primordia for the second flush started outgrowth. Our work supports the view that the acquisition of nutrients by the mycelium is demand rather than supply driven. For formation and early outgrowth of primordia, indications were found for an alternation of local and global control, at least in the casing layer. All these data combined, we postulate that flushing is the consequence of the depletion of some unknown specific nutrition required by outgrowing primordia. 
Copyright © 2013 The British Mycological Society. Published by Elsevier Ltd. All rights reserved.
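A back-of-the-envelope consistency check on the quoted doubling times (our inference, not a computation from the paper): if both cap diameter and biomass grow exponentially until veil breaking, the ratio of their rates implies an allometric exponent relating biomass to diameter.

```python
import numpy as np

T2_DIAM, T2_MASS = 1.7, 0.8        # doubling times in days (from the abstract)

k_diam = np.log(2) / T2_DIAM       # exponential rate of cap diameter growth
k_mass = np.log(2) / T2_MASS       # exponential rate of biomass growth

# if mass ∝ diameter**b during the common exponential phase, then b = k_mass/k_diam
b = k_mass / k_diam
assert abs(b - 2.125) < 1e-9       # i.e., T2_DIAM / T2_MASS = 1.7 / 0.8
```

An exponent near 2 rather than 3 would suggest cap expansion outpaces thickening, but that interpretation goes beyond what the abstract states.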
Separating OR, SUM, and XOR Circuits
Find, Magnus; Göös, Mika; Järvisalo, Matti; Kaski, Petteri; Koivisto, Mikko; Korhonen, Janne H.
2017-01-01
Given a boolean n × n matrix A we consider arithmetic circuits for computing the transformation x ↦ Ax over different semirings. Namely, we study three circuit models: monotone OR-circuits, monotone SUM-circuits (addition of non-negative integers), and non-monotone XOR-circuits (addition modulo 2). Our focus is on separating OR-circuits from the two other models in terms of circuit complexity: We show how to obtain matrices that admit OR-circuits of size O(n), but require SUM-circuits of size Ω(n^(3/2)/log^2 n). We consider the task of rewriting a given OR-circuit as a XOR-circuit and prove that any subquadratic-time algorithm for this task violates the strong exponential time hypothesis. PMID:28529379
A new look at atmospheric carbon dioxide
NASA Astrophysics Data System (ADS)
Hofmann, David J.; Butler, James H.; Tans, Pieter P.
Carbon dioxide is increasing in the atmosphere and is of considerable concern in global climate change because of its greenhouse gas warming potential. The rate of increase has accelerated since measurements began at Mauna Loa Observatory in 1958, where carbon dioxide increased from less than 1 part per million per year (ppm yr^-1) prior to 1970 to more than 2 ppm yr^-1 in recent years. Here we show that the anthropogenic component (atmospheric value reduced by the pre-industrial value of 280 ppm) of atmospheric carbon dioxide has been increasing exponentially with a doubling time of about 30 years since the beginning of the industrial revolution (~1800). Even during the 1970s, when fossil fuel emissions dropped sharply in response to the "oil crisis" of 1973, the anthropogenic atmospheric carbon dioxide level continued increasing exponentially at Mauna Loa Observatory. Since the growth rate (time derivative) of an exponential has the same characteristic lifetime as the function itself, the carbon dioxide growth rate is also doubling at the same rate. This explains the observation that the linear growth rate of carbon dioxide has more than doubled in the past 40 years. The accelerating growth rate is simply the outcome of exponential growth in carbon dioxide with a nearly constant doubling time of about 30 years (about 2%/yr) and appears to have tracked human population since the pre-industrial era.
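The key point — that the growth rate of an exponential doubles on the same timescale as the exponential itself — can be sketched numerically. The 30-year doubling time is from the abstract; the 1800 anomaly amplitude a0 is our assumed normalization, chosen so the modern anomaly lands near observed values.

```python
import numpy as np

T_DOUBLE = 30.0        # years, doubling time of the anthropogenic component
C_PRE = 280.0          # ppm, pre-industrial baseline

def anomaly(year, year0=1800.0, a0=0.86):
    """Anthropogenic CO2 anomaly in ppm (a0 is an assumed 1800 amplitude)."""
    return a0 * 2.0 ** ((year - year0) / T_DOUBLE)

def co2(year):
    return C_PRE + anomaly(year)

r = np.log(2.0) / T_DOUBLE             # ~2.3 %/yr continuous growth rate
rate_1970 = r * anomaly(1970)          # d(anomaly)/dt, ppm/yr
rate_2000 = r * anomaly(2000)

assert 0.8 < rate_1970 < 1.2           # order 1 ppm/yr around 1970
assert abs(rate_2000 / rate_1970 - 2.0) < 1e-9   # rate itself doubles in 30 yr
```

The second assertion is exact by construction: differentiating a0·2^(t/T) multiplies it by a constant, so the derivative inherits the same doubling time.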
AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Sanjib; Bland-Hawthorn, Joss
2013-08-20
An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.
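For context on the target density (our illustration, not the paper's method): radii drawn from a thin disc with exponential surface density Sigma ∝ exp(-R/R_d) follow a Gamma(2, R_d) law, because the mass in an annulus scales as R·exp(-R/R_d).

```python
import numpy as np

rng = np.random.default_rng(3)
R_d = 3.0   # disc scale length in kpc (illustrative value)

# p(R) ∝ R * exp(-R/R_d) is exactly a Gamma(shape=2, scale=R_d) density
R = rng.gamma(2.0, R_d, 200_000)

# sanity check: the mean radius of such a disc is 2*R_d
assert abs(R.mean() - 2 * R_d) < 0.1
```

Matching this radial law while simultaneously getting the velocity distribution right is what the iterative Shu construction (and the empirical shortcut above it) is about.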
H∞ control problem of linear periodic piecewise time-delay systems
NASA Astrophysics Data System (ADS)
Xie, Xiaochen; Lam, James; Li, Panshuo
2018-04-01
This paper investigates the H∞ control problem based on exponential stability and weighted L2-gain analyses for a class of continuous-time linear periodic piecewise systems with time delay. A periodic piecewise Lyapunov-Krasovskii functional is developed by integrating a discontinuous time-varying matrix function with two global terms. By applying the improved constraints to the stability and L2-gain analyses, sufficient delay-dependent exponential stability and weighted L2-gain criteria are proposed for the periodic piecewise time-delay system. Based on these analyses, an H∞ control scheme is designed under the considerations of periodic state feedback control input and iterative optimisation. Finally, numerical examples are presented to illustrate the effectiveness of our proposed conditions.
An improved rainfall disaggregation technique for GCMs
NASA Astrophysics Data System (ADS)
Onof, C.; Mackay, N. G.; Oh, L.; Wheater, H. S.
1998-08-01
Meteorological models represent rainfall as a mean value for a grid square so that when the latter is large, a disaggregation scheme is required to represent the spatial variability of rainfall. In general circulation models (GCMs) this is based on an assumption of exponentiality of rainfall intensities and a fixed value of areal rainfall coverage, dependent on rainfall type. This paper examines these two assumptions on the basis of U.K. and U.S. radar data. Firstly, the coverage of an area is strongly dependent on its size, and this dependence exhibits a scaling law over a range of sizes. Secondly, the coverage is, of course, dependent on the resolution at which it is measured, although this dependence is weak at high resolutions. Thirdly, the time series of rainfall coverages has a long-tailed autocorrelation function which is comparable to that of the mean areal rainfalls. It is therefore possible to reproduce much of the temporal dependence of coverages by using a regression of the log of the mean rainfall on the log of the coverage. The exponential assumption is satisfactory in many cases but not able to reproduce some of the long-tailed dependence of some intensity distributions. Gamma and lognormal distributions provide a better fit in these cases, but they have their shortcomings and require a second parameter. An improved disaggregation scheme for GCMs is proposed which incorporates the previous findings to allow the coverage to be obtained for any area and any mean rainfall intensity. The parameters required are given and some of their seasonal behavior is analyzed.
Simulated quantum computation of molecular energies.
Aspuru-Guzik, Alán; Dutoi, Anthony D; Love, Peter J; Head-Gordon, Martin
2005-09-09
The calculation time for the energy of atoms and molecules scales exponentially with system size on a classical computer but polynomially using quantum algorithms. We demonstrate that such algorithms can be applied to problems of chemical interest using modest numbers of quantum bits. Calculations of the water and lithium hydride molecular ground-state energies have been carried out on a quantum computer simulator using a recursive phase-estimation algorithm. The recursive algorithm reduces the number of quantum bits required for the readout register from about 20 to 4. Mappings of the molecular wave function to the quantum bits are described. An adiabatic method for the preparation of a good approximate ground-state wave function is described and demonstrated for a stretched hydrogen molecule. The number of quantum bits required scales linearly with the number of basis functions, and the number of gates required grows polynomially with the number of quantum bits.
A variational eigenvalue solver on a photonic quantum processor
Peruzzo, Alberto; McClean, Jarrod; Shadbolt, Peter; Yung, Man-Hong; Zhou, Xiao-Qi; Love, Peter J.; Aspuru-Guzik, Alán; O’Brien, Jeremy L.
2014-01-01
Quantum computers promise to efficiently solve important problems that are intractable on a conventional computer. For quantum systems, where the physical dimension grows exponentially, finding the eigenvalues of certain operators is one such intractable problem and remains a fundamental challenge. The quantum phase estimation algorithm efficiently finds the eigenvalue of a given eigenvector but requires fully coherent evolution. Here we present an alternative approach that greatly reduces the requirements for coherent evolution and combine this method with a new approach to state preparation based on ansätze and classical optimization. We implement the algorithm by combining a highly reconfigurable photonic quantum processor with a conventional computer. We experimentally demonstrate the feasibility of this approach with an example from quantum chemistry—calculating the ground-state molecular energy for He–H+. The proposed approach drastically reduces the coherence time requirements, enhancing the potential of quantum resources available today and in the near future. PMID:25055053
Elastically driven intermittent microscopic dynamics in soft solids
NASA Astrophysics Data System (ADS)
Bouzid, Mehdi; Colombo, Jader; Barbosa, Lucas Vieira; Del Gado, Emanuela
2017-06-01
Soft solids with tunable mechanical response are at the core of new material technologies, but a crucial limit for applications is their progressive aging over time, which dramatically affects their functionalities. The generally accepted paradigm is that such aging is gradual and its origin is in slower than exponential microscopic dynamics, akin to the ones in supercooled liquids or glasses. Nevertheless, time- and space-resolved measurements have provided contrasting evidence: dynamics faster than exponential, intermittency and abrupt structural changes. Here we use 3D computer simulations of a microscopic model to reveal that the timescales governing stress relaxation, respectively, through thermal fluctuations and elastic recovery are key for the aging dynamics. When thermal fluctuations are too weak, stress heterogeneities frozen-in upon solidification can still partially relax through elastically driven fluctuations. Such fluctuations are intermittent, because of strong correlations that persist over the timescale of experiments or simulations, leading to faster than exponential dynamics.
Exponential bound in the quest for absolute zero
NASA Astrophysics Data System (ADS)
Stefanatos, Dionisis
2017-10-01
In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.
Phenomenology of stochastic exponential growth
NASA Astrophysics Data System (ADS)
Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya
2017-06-01
Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM, instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
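A minimal Euler-Maruyama sketch of the general model class described above — linear drift with power-law multiplicative noise, dx = mu·x dt + sigma·x^alpha dW — can illustrate one robust feature: the ensemble mean grows as exp(mu·t) regardless of alpha. All parameter values here are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(mu=1.0, sigma=0.3, alpha=0.75, x0=1.0,
             T=1.0, dt=1e-3, n_traj=2000):
    """Euler-Maruyama for dx = mu*x dt + sigma*x**alpha dW.

    alpha = 1 recovers geometric Brownian motion; 1/2 <= alpha < 1 gives the
    positive fractional powers of multiplicative noise discussed in the text.
    """
    n_steps = int(T / dt)
    x = np.full(n_traj, x0)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_traj)
        x = x + mu * x * dt + sigma * x**alpha * dW
        x = np.maximum(x, 1e-12)   # keep trajectories positive (assumed boundary)
    return x

x_final = simulate()
# ensemble mean grows like exp(mu*T) for any alpha, since the noise has zero mean
assert abs(x_final.mean() / np.exp(1.0) - 1.0) < 0.15
```

What distinguishes the models is not the mean but the shape of the mean-rescaled distribution at long times, which is where the dimensionless parameter combination identified in the paper enters.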
Time Correlations in Mode Hopping of Coupled Oscillators
NASA Astrophysics Data System (ADS)
Heltberg, Mathias L.; Krishna, Sandeep; Jensen, Mogens H.
2017-05-01
We study the dynamics in a system of coupled oscillators when Arnold tongues overlap. By varying the initial conditions, the deterministic system can be attracted to different limit cycles. Adding noise, the mode hopping between different states becomes a dominating part of the dynamics. We simplify the system through a Poincaré section, and derive a 1D model to describe the dynamics. We explain that for some parameter values of the external oscillator, the time distribution of occupancy in a state is exponential and thus memoryless. In the general case, on the other hand, it is a sum of exponential distributions, characteristic of a system with time correlations.
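The memorylessness attributed to the exponential occupancy-time distribution can be verified numerically: the probability of surviving a further time t is unchanged by having already survived a time s. Parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 2.0                                   # mean state-occupancy time (illustrative)
t_wait = rng.exponential(tau, 500_000)

s, t = 1.0, 1.5
# memorylessness: P(T > s + t | T > s) equals P(T > t)
p_cond = np.mean(t_wait[t_wait > s] > s + t)
p_plain = np.mean(t_wait > t)
assert abs(p_cond - p_plain) < 0.01
```

A sum of exponentials with different rates fails this test: long survivors are increasingly drawn from the slow component, so the conditional survival probability drifts upward — the time correlation the abstract refers to.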
Flows in a tube structure: Equation on the graph
NASA Astrophysics Data System (ADS)
Panasenko, Grigory; Pileckas, Konstantin
2014-08-01
The steady-state Navier-Stokes equations in thin structures lead to some elliptic second order equation for the macroscopic pressure on a graph. At the nodes of the graph the pressure satisfies Kirchhoff-type junction conditions. In the non-steady case the problem for the macroscopic pressure on the graph becomes nonlocal in time. In the paper we study the existence and uniqueness of a solution to such one-dimensional model on the graph for a pipe-wise network. We also prove the exponential decay of the solution with respect to the time variable in the case when the data decay exponentially with respect to time.
Quantum Simulation of Tunneling in Small Systems
Sornborger, Andrew T.
2012-01-01
A number of quantum algorithms have been performed on small quantum computers; these include Shor's prime factorization algorithm, error correction, Grover's search algorithm and a number of analog and digital quantum simulations. Because of the number of gates and qubits necessary, however, digital quantum particle simulations remain untested. A contributing factor to the system size required is the number of ancillary qubits needed to implement matrix exponentials of the potential operator. Here, we show that a set of tunneling problems may be investigated with no ancillary qubits and a cost of one single-qubit operator per time step for the potential evolution, eliminating at least half of the quantum gates required for the algorithm and more than that in the general case. Such simulations are within reach of current quantum computer architectures. PMID:22916333
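The structural point the algorithm exploits — the potential acting as a simple diagonal phase per time step — also underlies the classical split-operator method. A numpy sketch of 1-D tunneling through a square barrier (all values illustrative, not from the paper):

```python
import numpy as np

# grid and time step (illustrative values)
n, L = 512, 100.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
dt, steps = 0.05, 400

V = np.where(np.abs(x) < 1.0, 1.0, 0.0)     # square barrier, height V0 = 1
k0 = 1.2                                    # mean momentum: E = k0**2/2 = 0.72 < V0
psi = np.exp(-(x + 15.0) ** 2 / 4 + 1j * k0 * x)   # packet left of the barrier
psi /= np.sqrt(np.sum(np.abs(psi) ** 2))

exp_V = np.exp(-1j * V * dt)                # potential step: diagonal phase in x
exp_K = np.exp(-1j * (k ** 2 / 2) * dt)     # kinetic step: diagonal phase in k

for _ in range(steps):
    psi = np.fft.ifft(exp_K * np.fft.fft(exp_V * psi))

transmitted = np.sum(np.abs(psi[x > 1.0]) ** 2)
assert abs(np.sum(np.abs(psi) ** 2) - 1.0) < 1e-9   # evolution stays unitary
assert 0.0 < transmitted < 0.9                      # partial tunneling through barrier
```

On a quantum computer the kinetic phase is handled analogously via the quantum Fourier transform; the paper's saving is that the diagonal potential phase needs no ancillary qubits.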
Low mass SN Ia and the late light curve
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colgate, S.A.; Fryer, C.L.; Hand, K.P.
1995-12-31
The late bolometric light curves of type Ia supernovae, when measured accurately over several years, show an exponential decay with a 56 d half-life over a drop in luminosity of 8 magnitudes (10 half-lives). The late-time light curve is thought to be governed by the decay of Co-56, whose 77 d half-life must then be modified to account for the observed decay time. Two mechanisms, both relying upon the positron fraction of the Co-56 decay, have been proposed to explain this modification. One explanation requires a large amount of emission at infrared wavelengths where it would not be detected. The other explanation has proposed a progressive transparency or leakage of the high-energy positrons (Colgate, Petschek and Kriese, 1980). For the positrons to leak out of the expanding nebula at the rate necessary to produce the modified 56 d exponential, the mass of the ejecta from a one-foe (10^51 erg in kinetic energy) explosion must be small, M_ejec = 0.4 M_sun, with M_ejec proportional to KE^0.5. Thus, in this leakage explanation, any reasonable estimate of the total energy of the explosion requires that the ejected mass be very much less than the Chandrasekhar mass of 1.4 M_sun. This is very difficult to explain with the "canonical" Chandrasekhar-mass thermonuclear explosion that disintegrates the original white dwarf star. This result leads us to pursue alternate mechanisms of type Ia supernovae. These mechanisms include sub-Chandrasekhar thermonuclear explosions and the accretion-induced collapse of Chandrasekhar-mass white dwarfs. We will summarize the advantages and disadvantages of both mechanisms, with considerable detail spent on our new accretion-induced collapse simulations. These mechanisms lead to lower Ni-56 production and hence result in type Ia supernovae with luminosities decreased to approximately 50% of that predicted by the "standard" model.
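As a quick arithmetic check of the figures quoted above: a pure exponential decay over 10 half-lives corresponds to a luminosity ratio of 2^10 ≈ 1024, i.e. about 7.5 magnitudes, consistent with the stated ~8 magnitudes over roughly 560 days at a 56 d half-life. A minimal sketch of that arithmetic:

```python
import math

def magnitudes_dropped(n_half_lives):
    """Magnitude drop after n half-lives of pure exponential decay:
    Delta_m = 2.5 * log10(L0 / L), with L = L0 * 2**(-n)."""
    return 2.5 * math.log10(2.0 ** n_half_lives)

def decay_time(n_half_lives, half_life_days=56.0):
    """Elapsed time in days for the quoted 56 d effective half-life."""
    return n_half_lives * half_life_days

drop = magnitudes_dropped(10)   # magnitude drop over ten half-lives
elapsed = decay_time(10)        # days of observation this spans
```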
Time scale defined by the fractal structure of the price fluctuations in foreign exchange markets
NASA Astrophysics Data System (ADS)
Kumagai, Yoshiaki
2010-04-01
In this contribution, a new time scale named C-fluctuation time is defined by price fluctuations observed at a given resolution. The intraday fractal structures and the relations among three time scales: real time (physical time), tick time and C-fluctuation time, are analyzed for foreign exchange markets. The data set used consists of trading prices of foreign exchange rates: US dollar (USD)/Japanese yen (JPY), USD/Euro (EUR), and EUR/JPY. The resolution of the data is one minute, and data within a minute are recorded in order of transaction. The series of instantaneous velocities at which C-fluctuation time flows are exponentially distributed for small C when measured in real time, and for tiny C when measured in tick time. When the market is volatile, the series of instantaneous velocities are exponentially distributed for larger C as well.
Rapid growth of seed black holes in the early universe by supra-exponential accretion.
Alexander, Tal; Natarajan, Priyamvada
2014-09-12
Mass accretion by black holes (BHs) is typically capped at the Eddington rate, when radiation's push balances gravity's pull. However, even exponential growth at the Eddington-limited e-folding time t(E) ~ few × 0.01 billion years is too slow to grow stellar-mass BH seeds into the supermassive luminous quasars that are observed when the universe is 1 billion years old. We propose a dynamical mechanism that can trigger supra-exponential accretion in the early universe, when a BH seed is bound in a star cluster fed by the ubiquitous dense cold gas flows. The high gas opacity traps the accretion radiation, while the low-mass BH's random motions suppress the formation of a slowly draining accretion disk. Supra-exponential growth can thus explain the puzzling emergence of supermassive BHs that power luminous quasars so soon after the Big Bang. Copyright © 2014, American Association for the Advancement of Science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.
Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. In this paper, we present unified-form approximate solutions in which the early-time and late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, in which case they depend solely on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with the highest relative approximation error less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for the development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.
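A rough illustration of the early/late-time split (not the authors' unified solution): for a slab, the standard heat-conduction series for fractional diffusive release can be compared with its leading late-time exponential term near the quoted switchover window. The slab geometry and series below are my own assumption for the sketch:

```python
import math

def slab_series(tau, n_terms=200):
    """Fractional diffusive release from a slab at dimensionless time tau,
    using the standard infinite exponential series."""
    total = 0.0
    for n in range(n_terms):
        k = (2 * n + 1) ** 2 * math.pi ** 2
        total += (8.0 / k) * math.exp(-k * tau / 4.0)
    return 1.0 - total

def slab_late_time(tau):
    """Leading-term late-time approximation for the same slab."""
    return 1.0 - (8.0 / math.pi ** 2) * math.exp(-math.pi ** 2 * tau / 4.0)

tau_switch = 0.2  # inside the 0.157-0.229 switchover window quoted above
exact = slab_series(tau_switch)
approx = slab_late_time(tau_switch)
rel_err = abs(approx - exact) / exact
```

Near the switchover time the single leading exponential is already within a fraction of a percent of the full series, and the agreement improves rapidly at later times.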
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campione, Salvatore; Warne, Larry K.; Sainath, Kamalesh
In this report we overview the fundamental concepts for a pair of techniques which together greatly hasten computational predictions of electromagnetic pulse (EMP) excitation of finite-length dissipative conductors over a ground plane. In a time-domain, transmission line (TL) model implementation, predictions are computationally bottlenecked, either for late-time predictions (roughly the 100 ns-10,000 ns range) or for predictions concerning EMP excitation of long TLs (on the order of kilometers or more). This is because the method requires a temporal convolution to account for the losses in the ground. To address this and facilitate practical simulation of EMP excitation of TLs, we first apply a technique to extract an (approximate) complex exponential basis-function fit to the ground/Earth's impedance function, and then incorporate this into a recursion-based convolution acceleration technique. Because the recursion-based method only requires the evaluation of the most recent voltage history data (versus the entire history in a "brute-force" convolution evaluation), we achieve the necessary speed-ups across a variety of TL/Earth geometry/material scenarios.
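A minimal sketch of the recursion idea, assuming a single complex-exponential kernel term a·exp(b·t) (the report's actual impedance basis fit and TL coupling are not reproduced here). Because the kernel is exponential, each output sample can be obtained from the previous accumulator alone, instead of re-summing the entire history:

```python
import cmath

def recursive_convolution(x, a, b, dt):
    """y[n] = sum_{m<=n} a*exp(b*(n-m)*dt)*x[m]*dt, evaluated recursively:
    each step touches only the previous accumulator, not the full history."""
    decay = cmath.exp(b * dt)
    acc = 0.0
    out = []
    for xn in x:
        acc = acc * decay + a * xn * dt
        out.append(acc)
    return out

def brute_force_convolution(x, a, b, dt):
    """The same sum evaluated directly, touching the whole history each step."""
    return [sum(a * cmath.exp(b * (n - m) * dt) * x[m] * dt
                for m in range(n + 1))
            for n in range(len(x))]

signal = [1.0, 0.5, -0.25, 0.0, 0.75, 1.0]
fast = recursive_convolution(signal, a=2.0, b=-3.0 + 1.0j, dt=0.1)
slow = brute_force_convolution(signal, a=2.0, b=-3.0 + 1.0j, dt=0.1)
max_diff = max(abs(f - s) for f, s in zip(fast, slow))
```

The recursive form is O(N) per kernel term versus O(N²) for the brute-force sum, which is the source of the speed-up the report describes.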
Delay time correction of the gas analyzer in the calculation of anatomical dead space of the lung.
Okubo, T; Shibata, H; Takishima, T
1983-07-01
By means of a mathematical model, we have studied a way to correct for the delay time of the gas analyzer in order to calculate the anatomical dead space using Fowler's graphical method. The mathematical model was constructed of ten tubes of equal diameter but unequal length, so that the amount of dead space varied from tube to tube; the tubes were emptied sequentially. The gas analyzer responds with a time lag from the input of the gas signal to the beginning of the response, followed by an exponential response output. The single-breath expired volume-concentration relationship was examined with three types of expired flow patterns: constant, exponential, and sinusoidal. The results indicate that time correction by the lag time plus the time constant of the exponential response of the gas analyzer gives an accurate estimation of anatomical dead space. A less inclusive time correction, e.g. lag time only or lag time plus 50% response time, gives an overestimation, and a larger correction results in underestimation. The magnitude of error depends on the flow pattern and flow rate. The time correction in this study is only for the calculation of dead space, as the corrected volume-concentration curve does not coincide with the true curve. Such correction of the output of the gas analyzer is extremely important when one needs to compare the dead spaces of different gas species at rather fast flow rates.
Efficient Quantum Pseudorandomness.
Brandão, Fernando G S L; Harrow, Aram W; Horodecki, Michał
2016-04-29
Randomness is both a useful way to model natural systems and a useful tool for engineered systems, e.g., in computation, communication, and control. Fully random transformations require exponential time for either classical or quantum systems, but in many cases pseudorandom operations can emulate certain properties of truly random ones. Indeed, in the classical realm there is by now a well-developed theory regarding such pseudorandom operations. However, the construction of such objects turns out to be much harder in the quantum case. Here, we show that random quantum unitary time evolutions ("circuits") are a powerful source of quantum pseudorandomness. This gives for the first time a polynomial-time construction of quantum unitary designs, which can replace fully random operations in most applications, and shows that generic quantum dynamics cannot be distinguished from truly random processes. We discuss applications of our result to quantum information science, cryptography, and understanding the self-equilibration of closed quantum dynamics.
NASA Astrophysics Data System (ADS)
Pasari, S.; Kundu, D.; Dikshit, O.
2012-12-01
Earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are well-established probability models for recurrence interval estimation. However, they have certain shortcomings, so it is worth searching for alternative distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate its scope as an alternative to the aforementioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To test the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above events more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
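The closed-form hazard mentioned above can be sketched directly, assuming the common CDF parameterization F(x) = (1 - exp(-(x - mu)/sigma))^alpha for the exponentiated (generalized) exponential distribution; the numerical values below are illustrative, not fitted to the catalogue:

```python
import math

def gen_exp_cdf(x, mu, sigma, alpha):
    """CDF of the exponentiated (generalized) exponential distribution,
    F(x) = (1 - exp(-(x - mu)/sigma))**alpha for x > mu."""
    if x <= mu:
        return 0.0
    return (1.0 - math.exp(-(x - mu) / sigma)) ** alpha

def gen_exp_pdf(x, mu, sigma, alpha):
    """Density obtained by differentiating the CDF."""
    if x <= mu:
        return 0.0
    z = math.exp(-(x - mu) / sigma)
    return (alpha / sigma) * z * (1.0 - z) ** (alpha - 1.0)

def gen_exp_hazard(x, mu, sigma, alpha):
    """Hazard h(x) = f(x) / (1 - F(x)), available in closed form even for
    non-integer shape alpha (unlike the gamma distribution)."""
    return gen_exp_pdf(x, mu, sigma, alpha) / (1.0 - gen_exp_cdf(x, mu, sigma, alpha))

# With alpha = 1 the model reduces to the ordinary exponential,
# whose hazard is the constant 1/sigma.
h = gen_exp_hazard(30.0, mu=0.0, sigma=17.0, alpha=1.0)
```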
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1990-01-01
Descriptive and analytical techniques for NASA trend analysis applications are presented in this standard. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. This document should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend analysis is neither a precise term nor a circumscribed methodology: it generally connotes quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this document. The basic ideas needed for qualitative and quantitative assessment of trends along with relevant examples are presented.
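Of the model families the standard names, the exponential trend can be fitted by ordinary least squares on the logarithm of the data. This is a generic sketch of that procedure, not a method prescribed by the NASA standard, and the trend data are synthetic:

```python
import math

def fit_exponential_trend(ts, ys):
    """Fit y = A * exp(r * t) by ordinary least squares on ln(y).

    Returns (A, r). Requires strictly positive observations."""
    logs = [math.log(y) for y in ys]
    n = len(ts)
    t_mean = sum(ts) / n
    l_mean = sum(logs) / n
    sxy = sum((t - t_mean) * (l - l_mean) for t, l in zip(ts, logs))
    sxx = sum((t - t_mean) ** 2 for t in ts)
    r = sxy / sxx                       # growth/decay rate
    return math.exp(l_mean - r * t_mean), r

# Synthetic trend data: y = 3 * exp(0.25 * t), noiseless for the sketch.
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3.0 * math.exp(0.25 * t) for t in ts]
A, r = fit_exponential_trend(ts, ys)
```

With noisy data the log transform also changes the error weighting, which is one reason a standard would distinguish linear, quadratic, and exponential model fits rather than always log-transforming.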
A Spectral Lyapunov Function for Exponentially Stable LTV Systems
NASA Technical Reports Server (NTRS)
Zhu, J. Jim; Liu, Yong; Hang, Rui
2010-01-01
This paper presents the formulation of a Lyapunov function for an exponentially stable linear time-varying (LTV) system using a well-defined PD-spectrum and the associated PD-eigenvectors. It provides a bridge between the first and second methods of Lyapunov for stability assessment, and will find significant applications in the analysis and control law design for LTV systems and linearizable nonlinear time-varying systems.
Exponential integration algorithms applied to viscoplasticity
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Walker, Kevin P.
1991-01-01
Four linear exponential integration algorithms (two implicit, one explicit, and one predictor/corrector) are applied to a viscoplastic model to assess their capabilities. Viscoplasticity comprises a system of coupled, nonlinear, stiff, first-order ordinary differential equations which are a challenge to integrate by any means. Two of the algorithms (the predictor/corrector and one of the implicit schemes) give outstanding results, even for very large time steps.
Time Asymmetric Quantum Mechanics
NASA Astrophysics Data System (ADS)
Bohm, Arno R.; Gadella, Manuel; Kielanowski, Piotr
2011-09-01
The meaning of time asymmetry in quantum physics is discussed. On the basis of a mathematical theorem, the Stone-von Neumann theorem, the solutions of the dynamical equations, the Schrödinger equation (1) for states or the Heisenberg equation (6a) for observables, are given by a unitary group. Dirac kets require the concept of an RHS (rigged Hilbert space) of Schwartz functions; for this kind of RHS a mathematical theorem also leads to time-symmetric group evolution. Scattering theory suggests distinguishing mathematically between states (defined by a preparation apparatus) and observables (defined by a registration apparatus, i.e. a detector). If one requires that scattering resonances of width Γ and exponentially decaying states of lifetime τ = ħ/Γ should be the same physical entities (for which there is sufficient evidence), one is led to a pair of RHSs of Hardy functions and, connected with it, to a semigroup time evolution t0 ≤ t < ∞, with the puzzling result that there is a quantum mechanical beginning of time, just like the big bang time for the universe, when it was a quantum system. The decay of quasi-stable particles is used to illustrate this quantum mechanical time asymmetry. From the analysis of these processes, we show that the properties of rigged Hilbert spaces of Hardy functions are suitable for a formulation of time asymmetry in quantum mechanics.
Adiabatic approximation with exponential accuracy for many-body systems and quantum computation
NASA Astrophysics Data System (ADS)
Lidar, Daniel A.; Rezakhani, Ali T.; Hamma, Alioscia
2009-10-01
We derive a version of the adiabatic theorem that is especially suited for applications in adiabatic quantum computation, where it is reasonable to assume that the adiabatic interpolation between the initial and final Hamiltonians is controllable. Assuming that the Hamiltonian is analytic in a finite strip around the real-time axis, that some number of its time derivatives vanish at the initial and final times, and that the target adiabatic eigenstate is nondegenerate and separated by a gap from the rest of the spectrum, we show that one can obtain an error between the final adiabatic eigenstate and the actual time-evolved state which is exponentially small in the evolution time, where this time itself scales as the square of the norm of the time derivative of the Hamiltonian divided by the cube of the minimal gap.
Ryde, Ulf
2017-11-14
Combined quantum mechanical and molecular mechanical (QM/MM) calculations are a popular approach to studying enzymatic reactions. They are often based on a set of minimized structures obtained from snapshots of a molecular dynamics simulation, to include some dynamics of the enzyme. How the individual energies should be combined to obtain a final estimate of the energy has been much discussed, but the current consensus seems to be to use an exponential average. Then, the question is how many snapshots are needed to reach a reliable estimate of the energy. In this paper, I show that the question can easily be answered if it is assumed that the energies follow a Gaussian distribution. Then, the outcome can be simulated based on a single parameter, σ, the standard deviation of the QM/MM energies from the various snapshots, and the number of required snapshots can be estimated once the desired accuracy and confidence of the result have been specified. Results for various parameters are presented, and it is shown that many more snapshots are required than is normally assumed. The number can be reduced by employing a cumulant approximation to second order. It is shown that most convergence criteria work poorly, owing to the very bad conditioning of the exponential average when σ is large (more than ~7 kJ/mol), because the energies that contribute most to the exponential average have a very low probability. On the other hand, σ itself serves as an excellent convergence criterion.
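Under the Gaussian assumption used in the paper, the exponential average and its second-order cumulant approximation can be compared on synthetic snapshot energies. The kT and σ values below are illustrative assumptions, not values from the paper:

```python
import math
import random

def exponential_average(energies, kT):
    """Exponential average -kT * ln( mean( exp(-E/kT) ) ), computed with a
    shift by the minimum energy for numerical stability."""
    e_min = min(energies)
    mean_exp = sum(math.exp(-(e - e_min) / kT) for e in energies) / len(energies)
    return e_min - kT * math.log(mean_exp)

def cumulant_second_order(energies, kT):
    """Second-order cumulant approximation: <E> - var(E) / (2 kT)."""
    n = len(energies)
    mean = sum(energies) / n
    var = sum((e - mean) ** 2 for e in energies) / (n - 1)
    return mean - var / (2.0 * kT)

random.seed(2)
kT = 2.5      # roughly kT in kJ/mol at room temperature
sigma = 4.0   # assumed spread of per-snapshot energies, kJ/mol
energies = [random.gauss(10.0, sigma) for _ in range(20000)]

exp_avg = exponential_average(energies, kT)
cum_avg = cumulant_second_order(energies, kT)
mean_e = sum(energies) / len(energies)
```

For exactly Gaussian energies the two estimators agree in the large-sample limit; the exponential average converges far more slowly because it is dominated by rare low-energy samples, which is the conditioning problem the abstract describes.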
Proportional Feedback Control of Energy Intake During Obesity Pharmacotherapy.
Hall, Kevin D; Sanghvi, Arjun; Göbel, Britta
2017-12-01
Obesity pharmacotherapies result in an exponential time course for energy intake whereby large early decreases dissipate over time. This pattern of declining drug efficacy to decrease energy intake results in a weight loss plateau within approximately 1 year. This study aimed to elucidate the physiology underlying the exponential decay of drug effects on energy intake. Placebo-subtracted energy intake time courses were examined during long-term obesity pharmacotherapy trials for 14 different drugs or drug combinations within the theoretical framework of a proportional feedback control system regulating human body weight. Assuming each obesity drug had a relatively constant effect on average energy intake and did not affect other model parameters, our model correctly predicted that long-term placebo-subtracted energy intake was linearly related to early reductions in energy intake according to a prespecified equation with no free parameters. The simple model explained about 70% of the variance between drug studies with respect to the long-term effects on energy intake, although a significant proportional bias was evident. The exponential decay over time of obesity pharmacotherapies to suppress energy intake can be interpreted as a relatively constant effect of each drug superimposed on a physiological feedback control system regulating body weight. © 2017 The Obesity Society.
Rate laws of the self-induced aggregation kinetics of Brownian particles
NASA Astrophysics Data System (ADS)
Mondal, Shrabani; Sen, Monoj Kumar; Baura, Alendu; Bag, Bidhan Chandra
2016-03-01
In this paper we have studied the self-induced aggregation kinetics of Brownian particles in the presence of both multiplicative and additive noises. In addition to the drift due to the self-aggregation process, the environment may induce a drift term in the presence of a multiplicative noise. The interplay between the two drift terms may account qualitatively for the appearance of the different laws of the aggregation process. At low strength of white multiplicative noise, the cluster number decreases as a Gaussian function of time. If the noise strength becomes appreciably large, then the variation of cluster number with time is fitted well by a monoexponentially decaying function of time. For the additive-noise-driven case, the decrease of cluster number can be described by a power law. In the case of the multiplicative colored-noise-driven process, however, the cluster number decays multiexponentially. We have also explored how the rate constant (in the monoexponential decay case) depends on the strength of interference of the noises and their intensity, and how the structure factor at long times depends on the strength of the cross correlation (CC) between the additive and multiplicative noises.
The diffusion of a Ga atom on GaAs(001)β2(2 × 4): Local superbasin kinetic Monte Carlo
NASA Astrophysics Data System (ADS)
Lin, Yangzheng; Fichthorn, Kristen A.
2017-10-01
We use first-principles density-functional theory to characterize the binding sites and diffusion mechanisms for a Ga adatom on the GaAs(001)β 2(2 × 4) surface. Diffusion in this system is a complex process involving eleven unique binding sites and sixteen different hops between neighboring binding sites. Among the binding sites, we can identify four different superbasins such that the motion between binding sites within a superbasin is much faster than hops exiting the superbasin. To describe diffusion, we use a recently developed local superbasin kinetic Monte Carlo (LSKMC) method, which accelerates a conventional kinetic Monte Carlo (KMC) simulation by describing the superbasins as absorbing Markov chains. We find that LSKMC is up to 4300 times faster than KMC for the conditions probed in this study. We characterize the distribution of exit times from the superbasins and find that these are sometimes, but not always, exponential and we characterize the conditions under which the superbasin exit-time distribution should be exponential. We demonstrate that LSKMC simulations assuming an exponential superbasin exit-time distribution yield the same diffusion coefficients as conventional KMC.
An exactly solvable, spatial model of mutation accumulation in cancer
NASA Astrophysics Data System (ADS)
Paterson, Chay; Nowak, Martin A.; Waclaw, Bartlomiej
2016-12-01
One of the hallmarks of cancer is the accumulation of driver mutations which increase the net reproductive rate of cancer cells and allow them to spread. This process has been studied in mathematical models of well mixed populations, and in computer simulations of three-dimensional spatial models. But the computational complexity of these more realistic, spatial models makes it difficult to simulate realistically large and clinically detectable solid tumours. Here we describe an exactly solvable mathematical model of a tumour featuring replication, mutation and local migration of cancer cells. The model predicts a quasi-exponential growth of large tumours, even if different fragments of the tumour grow sub-exponentially due to nutrient and space limitations. The model reproduces clinically observed tumour growth times using biologically plausible rates for cell birth, death, and migration. We also show that the expected number of accumulated driver mutations increases exponentially in time if the average fitness gain per driver is constant, and that it reaches a plateau if the gains decrease over time. We discuss the realism of the underlying assumptions and possible extensions of the model.
Weblog patterns and human dynamics with decreasing interest
NASA Astrophysics Data System (ADS)
Guo, J.-L.; Fan, C.; Guo, Z.-H.
2011-06-01
To describe the phenomenon in which people's interest in an activity is initially high but gradually decreases until reaching a balance, we propose a model that describes the attenuation of interest and reflects the fact that people's interest becomes more stable after a long time. We give a rigorous analysis of this model using non-homogeneous Poisson processes. Our analysis indicates that the interarrival-time distribution is a mixed distribution with exponential and power-law features, i.e., a power law with an exponential cutoff. We then collect blogs from ScienceNet.cn and carry out an empirical study of the interarrival-time distribution. The empirical results agree well with the theoretical analysis, obeying a special power law with an exponential cutoff, that is, a special kind of Gamma distribution. These empirical results verify the model by providing evidence for a new class of phenomena in human dynamics. It can be concluded that besides power-law distributions, there are other distributions in human dynamics. These findings demonstrate the variety of human behavior dynamics.
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.
van Elburg, Ronald A J; van Ooyen, Arjen
2009-07-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
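The core trick behind such event-based schemes can be sketched simply: between events, an exponentially decaying synaptic current has a closed-form solution, so it can be advanced exactly over any interval in one step. This is a generic illustration of that property, not the Carnevale-Hines scheme itself:

```python
import math

def advance_exponential_current(i_syn, dt, tau):
    """Advance an exponentially decaying synaptic current across an
    inter-event interval dt in a single exact step (no sub-stepping)."""
    return i_syn * math.exp(-dt / tau)

def advance_double_exponential(i_rise, i_decay, dt, tau_rise, tau_decay):
    """Advance the two state variables of a double-exponential synapse
    exactly; the observable current is their difference."""
    return (i_rise * math.exp(-dt / tau_rise),
            i_decay * math.exp(-dt / tau_decay))

# One exact 5 ms jump equals five exact 1 ms jumps: the update composes.
one_jump = advance_exponential_current(2.0, 5.0, tau=3.0)
five_jumps = 2.0
for _ in range(5):
    five_jumps = advance_exponential_current(five_jumps, 1.0, tau=3.0)

rise, decay = advance_double_exponential(1.0, 1.0, 3.0, 1.0, 2.0)
```

Because the update composes exactly, the simulator only needs to touch a synapse when an event (spike) actually arrives, which is what makes event-based integration efficient.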
Analysis of two production inventory systems with buffer, retrials and different production rates
NASA Astrophysics Data System (ADS)
Jose, K. P.; Nair, Salini S.
2017-09-01
This paper compares two (s, S) production inventory systems with retrials of unsatisfied customers. The time for producing and adding each item to the inventory is exponentially distributed with rate β. However, a higher production rate αβ (α > 1) is used at the beginning of production; the higher rate reduces customer loss when the inventory level approaches zero. Demand from customers follows a Poisson process, and service times are exponentially distributed. Upon arrival, customers enter a buffer of finite capacity. An arriving customer who finds the buffer full moves to an orbit, from which they can retry; inter-retrial times are exponentially distributed. The two models differ in the capacity of the buffer. The aim is to find the minimum value of the total cost by varying different parameters and to compare the efficiency of the models; the optimum value of α corresponding to the minimum total cost is of particular interest. The matrix-analytic method is used to find an algorithmic solution to the problem. We also provide several numerical and graphical illustrations.
Fast dynamics in glass-forming polymers revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colmenero, J.; Arbe, A.; Mijangos, C.
1997-12-31
The so-called fast dynamics of glass-forming systems, as observed by time-of-flight (TOF) neutron scattering techniques, is revisited. TOF results corresponding to several glass-forming polymers with different chemical microstructures and glass-transition temperatures are presented, together with the theoretical framework proposed by the authors to interpret these results. The main conclusion is that the TOF data can be explained in terms of quasiharmonic vibrations and the particular short-time behavior of the segmental dynamics. The segmental dynamics displays in the very short time range (t ≈ 2 ps) a crossover from a simple exponential behavior towards a non-exponential regime. The first exponential decay, which is controlled by C-C rotational barriers, can be understood as a trace of the behavior of the system in the absence of the effects (correlations, cooperativity, memory effects, ...) which characterize the dense supercooled-liquid-like state as opposed to the normal liquid state. The non-exponential regime at t > 2 ps corresponds to what is usually understood as the α and β relaxations. Some implications of these results are also discussed.
NASA Astrophysics Data System (ADS)
Jolivet, R.; Simons, M.
2016-12-01
InSAR time series analysis allows reconstruction of ground deformation with meter-scale spatial resolution and high temporal sampling. For instance, the ESA Sentinel-1 Constellation is capable of providing 6-day temporal sampling, thereby opening a new window on the spatio-temporal behavior of tectonic processes. However, due to computational limitations, most time series methods rely on a pixel-by-pixel approach. This limitation is a concern because (1) accounting for orbital errors requires referencing all interferograms to a common set of pixels before reconstruction of the time series and (2) spatially correlated atmospheric noise due to tropospheric turbulence is ignored. Decomposing interferograms into statistically independent wavelets will mitigate issues of correlated noise, but prior estimation of orbital uncertainties will still be required. Here, we explore a method that considers all pixels simultaneously when solving for the spatio-temporal evolution of the interferometric phase. Our method is based on a massively parallel implementation of a conjugate direction solver. We consider an interferogram as the sum of the phase difference between two SAR acquisitions and the corresponding orbital errors. In addition, we fit the temporal evolution with a physically parameterized function while accounting for spatially correlated noise in the data covariance. We assume noise is isotropic for any given InSAR pair, with a covariance described by an exponential function that decays with increasing separation distance between pixels. We regularize our solution in space using a similar exponential function as model covariance. Given the problem size, we avoid matrix multiplications of the full covariances by computing convolutions in the Fourier domain. We first solve the unregularized least squares problem using the LSQR algorithm to approach the final solution, then run our conjugate direction solver to account for data and model covariances.
We present synthetic tests showing the efficiency of our method. We then reconstruct a 20-year continuous time series covering Northern Chile. Without input from any additional GNSS data, we recover the secular deformation rate, seasonal oscillations and the deformation fields from the 2005 Mw 7.8 Tarapaca and 2007 Mw 7.7 Tocopilla earthquakes.
NASA Astrophysics Data System (ADS)
Sanford, W. E.
2015-12-01
Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. 
For both systems the two-parameter Weibull function nearly always produced a substantially better fit to the data than the one-parameter exponential function. For the single porosity system it was found that the use of three parameters was often optimal for accurately describing the base-flow age distribution, whereas for the dual porosity system the fourth parameter was often required to fit the more complicated response curves.
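The role of the extra shape parameter can be sketched in a few lines of Python; the shape values and the 20-year median below are illustrative, not the calibrated values from the study:

```python
import math

def weibull_cdf(t, shape, scale):
    """Cumulative fraction of base flow younger than age t (years)."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def weibull_median(shape, scale):
    """Median age: the t at which the CDF reaches 0.5."""
    return scale * math.log(2.0) ** (1.0 / shape)

# shape = 1 recovers the one-parameter exponential distribution
exp_cdf = lambda t, scale: weibull_cdf(t, 1.0, scale)

# Two distributions with the same 20-year median but different slopes
scale_exp = 20.0 / math.log(2.0)                 # exponential, median 20 yr
scale_wb = 20.0 / math.log(2.0) ** (1.0 / 0.6)   # Weibull, shape 0.6, median 20 yr

young_exp = exp_cdf(5.0, scale_exp)
young_wb = weibull_cdf(5.0, 0.6, scale_wb)
# shape < 1 puts more mass at early arrival times for the same median age
```

The two scale values are chosen so both curves share the same median; only the second, slope-controlling parameter differs, which is the extra freedom the abstract describes.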
Firing patterns in the adaptive exponential integrate-and-fire model.
Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram
2008-11-01
For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to experimental recordings of cortical neurons under step-current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
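A minimal forward-Euler sketch of the two-equation AdEx model shows the spike-and-reset dynamics and spike-frequency adaptation; the parameter values are representative ones from the AdEx literature, not the values fitted to the cortical recordings in the paper:

```python
import math

# Adaptive exponential integrate-and-fire (AdEx), forward-Euler sketch.
# Parameter values are representative (SI units), not fitted to data.
C, gL, EL = 281e-12, 30e-9, -70.6e-3        # capacitance, leak conductance, rest
VT, DT = -50.4e-3, 2e-3                     # threshold, slope factor
tau_w, a, b, Vr = 144e-3, 4e-9, 80.5e-12, -70.6e-3
V_peak = 0.0                                # numerical spike cutoff

def simulate(I, T=0.5, dt=1e-4):
    """Return spike times (s) for a constant input current I (amperes)."""
    V, w, t, spikes = EL, 0.0, 0.0, []
    while t < T:
        if V >= V_peak:                     # spike: reset voltage, bump adaptation
            spikes.append(t)
            V, w = Vr, w + b
        dV = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        t += dt
    return spikes

spikes = simulate(1e-9)                     # 1 nA step current
# the adaptation current w grows with each spike, so interspike intervals lengthen
```

Resetting at the top of the loop, before the derivatives are evaluated, keeps the exponential term from being computed at the divergent post-spike voltage.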
Bishai, David; Opuni, Marjorie
2009-01-01
Background Time trends in infant mortality for the 20th century show a curvilinear pattern that most demographers have assumed to be approximately exponential. Virtually all cross-country comparisons and time series analyses of infant mortality have studied the logarithm of infant mortality to account for the curvilinear time trend. However, there is no evidence that the log transform is the best fit for infant mortality time trends. Methods We use maximum likelihood methods to determine the best transformation to fit time trends in infant mortality reduction in the 20th century and to assess the importance of the proper transformation in identifying the relationship between infant mortality and gross domestic product (GDP) per capita. We apply the Box-Cox transform to infant mortality rate (IMR) time series from 18 countries to identify the best-fitting value of λ for each country and for the pooled sample. For each country, we test the value of λ against the null that λ = 0 (logarithmic model) and against the null that λ = 1 (linear model). We then demonstrate the importance of selecting the proper transformation by comparing regressions of ln(IMR) on same-year GDP per capita against Box-Cox transformed models. Results Based on chi-squared test statistics, infant mortality decline is best described as an exponential decline only for the United States. For the remaining 17 countries we study, IMR decline is best modelled neither as a logarithmic nor as a linear process. Imposing a logarithmic transform on IMR can lead to bias in fitting the relationship between IMR and GDP per capita. Conclusion The assumption that IMR declines are exponential is enshrined in the Preston curve and in nearly all cross-country as well as time series analyses of IMR data since Preston's 1975 paper, but this assumption is seldom correct. Statistical analyses of IMR trends should assess the robustness of findings to transformations other than the log transform. PMID:19698144
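The Box-Cox procedure can be sketched as a grid search over λ using the profile log-likelihood; the series below is synthetic and log-linear by construction, so the search should select λ = 0, and the simple linear detrending stands in for the paper's full time-trend regression:

```python
import math

def box_cox(x, lam):
    """Box-Cox transform; lam = 0 is the log model, lam = 1 the linear model."""
    return math.log(x) if lam == 0.0 else (x ** lam - 1.0) / lam

def profile_loglik(xs, lam):
    """Profile log-likelihood of lam: transform, detrend linearly in time,
    and score residual variance plus the Box-Cox Jacobian term."""
    n = len(xs)
    ys = [box_cox(x, lam) for x in xs]
    tbar = (n - 1) / 2.0
    ybar = sum(ys) / n
    stt = sum((t - tbar) ** 2 for t in range(n))
    slope = sum((t - tbar) * (y - ybar) for t, y in enumerate(ys)) / stt
    var = sum((y - (ybar + slope * (t - tbar))) ** 2
              for t, y in enumerate(ys)) / n
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(map(math.log, xs))

# Synthetic IMR-like series: exponential decline plus mild wiggles, so the
# log model (lam = 0) should win on a coarse grid (values are illustrative)
imr = [100.0 * math.exp(-0.05 * t + 0.01 * math.sin(3.0 * t)) for t in range(50)]
best = max((k / 10.0 for k in range((-20), 21)),
           key=lambda lam: profile_loglik(imr, lam))
```

The Jacobian term (λ − 1)Σ ln x is what makes likelihoods comparable across different λ, which is why it cannot be dropped from the grid search.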
Self-replication: Nanostructure evolution
NASA Astrophysics Data System (ADS)
Simmel, Friedrich C.
2017-10-01
DNA origami nanostructures were utilized to replicate a seed pattern that resulted in the growth of populations of nanostructures. Exponential growth could be controlled by environmental conditions depending on the preferential requirements of each population.
Investigation of non-Gaussian effects in the Brazilian option market
NASA Astrophysics Data System (ADS)
Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.
2018-04-01
An empirical study of the Brazilian option market is presented in light of three option pricing models, namely the Black-Scholes model, the exponential model, and a model based on a power-law distribution, the so-called q-Gaussian or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed. Among these cases, however, the exponential model performs better than the q-Gaussian model 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above 90%.
Fine Grained Chaos in AdS2 Gravity
NASA Astrophysics Data System (ADS)
Haehl, Felix M.; Rozali, Moshe
2018-03-01
Quantum chaos can be characterized by an exponential growth of the thermal out-of-time-order four-point function up to a scrambling time u*. We discuss generalizations of this statement for certain higher-point correlation functions. For concreteness, we study the Schwarzian theory of a one-dimensional time reparametrization mode, which describes two-dimensional anti-de Sitter space (AdS2) gravity and the low-energy dynamics of the Sachdev-Ye-Kitaev model. We identify a particular set of 2k-point functions, characterized as being both "maximally braided" and "k-out of time order," which exhibit exponential growth until progressively longer time scales u*(k) ∼ (k − 1)u*. We suggest an interpretation as scrambling of increasingly fine grained measures of quantum information, which correspondingly take progressively longer time to reach their thermal values.
On the origin of non-exponential fluorescence decays in enzyme-ligand complex
NASA Astrophysics Data System (ADS)
Wlodarczyk, Jakub; Kierdaszuk, Borys
2004-05-01
Complex fluorescence decays have usually been analyzed with the aid of a multi-exponential model, but the interpretation of the individual exponential terms has not been adequately characterized. In such cases the intensity decays have also been analyzed in terms of a continuous lifetime distribution, reflecting interactions of the fluorophore with its environment, conformational heterogeneity, or their dynamical nature. We show that the non-exponential fluorescence decay of enzyme-ligand complexes may result from time-dependent energy transport, which, in our opinion, may be accounted for by electron transport from the protein tyrosines to their neighboring residues. We introduce a time-dependent hopping rate of the form v(t) ∼ (a + bt)^(-1). This in turn leads to a luminescence decay function of the form I(t) = I0 exp(-t/τ1)(1 + t/(γτ2))^(-γ). Such a decay function provides good fits to highly complex fluorescence decays. The power-like tail implies a time hierarchy in the energy-migration process due to the hierarchical energy-level structure. Moreover, such a power-like term is a manifestation of the so-called Tsallis nonextensive statistics and is suitable for describing systems with long-range interactions, memory effects, and fluctuations of the characteristic fluorescence lifetime. The proposed decay function was applied to the analysis of fluorescence decays of a tyrosine protein, the enzyme purine nucleoside phosphorylase from E. coli, in complex with formycin A (an inhibitor) and orthophosphate (a co-substrate).
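A short numerical check (with illustrative time constants, not the fitted ones) confirms that the power-like factor reduces to an exponential as γ → ∞, so the proposed law contains the plain bi-exponential product as a limiting case:

```python
import math

# Decay law I(t) = I0 exp(-t/tau1) (1 + t/(gamma*tau2))^(-gamma).
# For gamma -> infinity the power-like factor tends to exp(-t/tau2); finite
# gamma produces the heavy, power-law-like tail described in the abstract.
def decay(t, i0, tau1, tau2, gamma):
    return i0 * math.exp(-t / tau1) * (1.0 + t / (gamma * tau2)) ** (-gamma)

i0, tau1, tau2 = 1.0, 4.0, 1.5       # illustrative values (e.g. ns)
t = 6.0
tsallis = decay(t, i0, tau1, tau2, 2.0)    # heavy tail at small gamma
limit = decay(t, i0, tau1, tau2, 1e7)      # effectively exponential
biexp = i0 * math.exp(-t / tau1 - t / tau2)
# at long times the finite-gamma curve sits above the exponential limit
```

This is the standard Tsallis q → 1 limit: the q-exponential factor degenerates to an ordinary exponential, which is why γ measures the departure from multi-exponential behavior.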
Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong
2015-01-23
In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and have proven the error estimates for the semi-discrete methods applied to linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
Hypersurface Homogeneous Cosmological Model in Modified Theory of Gravitation
NASA Astrophysics Data System (ADS)
Katore, S. D.; Hatkar, S. P.; Baxi, R. J.
2016-12-01
We study a hypersurface homogeneous space-time in the framework of the f(R, T) theory of gravitation in the presence of a perfect fluid. Exact solutions of the field equations are obtained for exponential and power-law volumetric expansions. We also solve the field equations by assuming a proportionality relation between the shear scalar (σ) and the expansion scalar (θ). It is observed that in the exponential model the universe approaches isotropy at large time (late universe). The investigated model is notably accelerating and expanding. The physical and geometrical properties of the investigated model are also discussed.
NASA Astrophysics Data System (ADS)
Liao, Feng; Zhang, Luming; Wang, Shanshan
2018-02-01
In this article, we formulate an efficient and accurate numerical method for approximation of the coupled Schrödinger-Boussinesq (SBq) system. The main features of our method are: (i) the application of a time-splitting Fourier spectral method for the Schrödinger-like equation in the SBq system, and (ii) the utilization of an exponential wave integrator Fourier pseudospectral method for the spatial derivatives in the Boussinesq-like equation. The scheme is fully explicit and efficient due to the fast Fourier transform. Numerical examples are presented to show the efficiency and accuracy of our method.
Optical solver of combinatorial problems: nanotechnological approach.
Cohen, Eyal; Dolev, Shlomi; Frenkel, Sergey; Kryzhanovsky, Boris; Palagushkin, Alexandr; Rosenblit, Michael; Zakharov, Victor
2013-09-01
We present an optical computing system to solve NP-hard problems. As nano-optical computing is a promising venue for the next generation of computers performing parallel computations, we investigate the application of submicron, or even subwavelength, computing device designs. The system utilizes a setup of exponential-sized masks with exponential space complexity, produced in polynomial-time preprocessing. The masks are later used to solve the problem in polynomial time. The size of the masks is reduced to nanoscale density. Simulations were done to choose a proper design, and actual implementations show the feasibility of such a system.
T7 phage factor required for managing RpoS in Escherichia coli.
Tabib-Salazar, Aline; Liu, Bing; Barker, Declan; Burchell, Lynn; Qimron, Udi; Matthews, Steve J; Wigneshweraraj, Sivaramesh
2018-06-05
T7 development in Escherichia coli requires the inhibition of the housekeeping form of the bacterial RNA polymerase (RNAP), Eσ70, by two T7 proteins: Gp2 and Gp5.7. Although the biological role of Gp2 is well understood, that of Gp5.7 remains to be fully deciphered. Here, we present results from functional and structural analyses to reveal that Gp5.7 primarily serves to inhibit EσS, the predominant form of the RNAP in the stationary phase of growth, which accumulates in exponentially growing E. coli as a consequence of the buildup of guanosine pentaphosphate [(p)ppGpp] during T7 development. We further demonstrate a requirement of Gp5.7 for T7 development in E. coli cells in the stationary phase of growth. Our finding represents a paradigm for how some lytic phages have evolved distinct mechanisms to inhibit the bacterial transcription machinery to facilitate phage development in bacteria in the exponential and stationary phases of growth.
Gaussian process regression for geometry optimization
NASA Astrophysics Data System (ADS)
Denzel, Alexander; Kästner, Johannes
2018-03-01
We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a twice-differentiable form of the Matérn kernel and the squared exponential kernel, and found that the Matérn kernel performs much better. We give a detailed description of the optimization procedures, which include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation versus extrapolation. In a benchmark against the limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
Quantifying the effect of 3D spatial resolution on the accuracy of microstructural distributions
NASA Astrophysics Data System (ADS)
Loughnane, Gregory; Groeber, Michael; Uchic, Michael; Riley, Matthew; Shah, Megna; Srinivasan, Raghavan; Grandhi, Ramana
The choice of spatial resolution for experimentally-collected 3D microstructural data is often governed by general rules of thumb. For example, serial section experiments often strive to collect at least ten sections through the average feature-of-interest. However, the desire to collect high resolution data in 3D is greatly tempered by the exponential growth in collection times and data storage requirements. This paper explores the use of systematic down-sampling of synthetically-generated grain microstructures to examine the effect of resolution on the calculated distributions of microstructural descriptors such as grain size, number of nearest neighbors, aspect ratio, and Ω3.
RSA Encryption with the TI-82.
ERIC Educational Resources Information Center
Sigmon, Neil; Yankosky, Bill
2002-01-01
Description of integrating one of the most widely used cryptosystems into a mathematics course for Liberal Arts majors. Application of this cryptosystem requires understanding of the concepts of exponentiation and modular arithmetic only. (MM)
Exponential quantum spreading in a class of kicked rotor systems near high-order resonances
NASA Astrophysics Data System (ADS)
Wang, Hailong; Wang, Jiao; Guarneri, Italo; Casati, Giulio; Gong, Jiangbin
2013-11-01
Long-lasting exponential quantum spreading was recently found in a simple but very rich dynamical model, namely, an on-resonance double-kicked rotor model [J. Wang, I. Guarneri, G. Casati, and J. B. Gong, Phys. Rev. Lett. 107, 234104 (2011)]. The underlying mechanism, unrelated to the chaotic motion in the classical limit but resting on quasi-integrable motion in a pseudoclassical limit, is identified for one special case. By presenting a detailed study of the same model, this work offers a framework to explain long-lasting exponential quantum spreading under much more general conditions. In particular, we adopt the so-called “spinor” representation to treat the kicked-rotor dynamics under high-order resonance conditions and then exploit the Born-Oppenheimer approximation to understand the dynamical evolution. It is found that the existence of a flat band (or an effectively flat band) is one important feature behind why and how the exponential dynamics emerges. It is also found that a quantitative prediction of the exponential spreading rate based on an interesting and simple pseudoclassical map may be inaccurate. In addition to general interests regarding the question of how exponential behavior in quantum systems may persist for a long time scale, our results should motivate further studies toward a better understanding of high-order resonance behavior in δ-kicked quantum systems.
NASA Astrophysics Data System (ADS)
Nath, G.; Pathak, R. P.; Dutta, Mrityunjoy
2018-01-01
Similarity solutions for the flow of a non-ideal gas behind a strong exponential shock driven out by a piston (cylindrical or spherical) moving with time according to an exponential law are obtained, in both the cases when the flow between the shock and the piston is isothermal or adiabatic. Similarity solutions exist only when the surrounding medium is of constant density. The effects of variation of the ambient magnetic field, the non-idealness of the gas, the adiabatic exponent and the gravitational parameter are worked out in detail. It is shown that an increase in the non-idealness of the gas or in the adiabatic exponent, or the presence of a magnetic field, has a decaying effect on the shock wave. Consideration of the isothermal flow and the self-gravitational field increases the shock strength. Also, the consideration of isothermal flow or the presence of a magnetic field removes the singularity in the density distribution which arises in the case of adiabatic flow. The results of our study may be used to interpret measurements carried out by spacecraft in the solar wind and in the neighborhood of the Earth's magnetosphere.
Mathematical Modeling of Extinction of Inhomogeneous Populations
Karev, G.P.; Kareva, I.
2016-01-01
Mathematical models of population extinction have a variety of applications in such areas as ecology, paleontology and conservation biology. Here we propose and investigate two types of sub-exponential models of population extinction. Unlike the more traditional exponential models, sub-exponential models have a finite life duration. In the first model, the population is assumed to be composed of clones that are independent of each other. In the second model, we assume that the size of the population as a whole decreases according to the sub-exponential equation. We then investigate the “unobserved heterogeneity”, i.e. the underlying inhomogeneous population model, and calculate the distribution of frequencies of clones for both models. We show that the dynamics of frequencies in the first model is governed by the principle of minimum of Tsallis information loss. In the second model, the notion of “internal population time” is proposed; with respect to the internal time, the dynamics of frequencies is governed by the principle of minimum of Shannon information loss. The results of this analysis show that the principle of minimum of information loss is the underlying law for the evolution of a broad class of models of population extinction. Finally, we propose a possible application of this modeling framework to mechanisms underlying time perception. PMID:27090117
Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry
Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.
2014-01-01
Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However in shales, substantial hydrogen content is associated with solid and fluid signals and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in material, medical and food sciences.
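The core idea of fitting both decay shapes at once can be sketched by holding the two decay times fixed, which reduces the problem to linear least squares for the component amplitudes; the decay times and the noise-free synthetic data below are illustrative, and the full SGE inversion of course solves for distributions of relaxation times rather than two fixed components:

```python
import math

# Sketch of a simultaneous Gaussian-exponential decomposition: with the two
# decay times fixed, amplitudes follow from 2x2 linear least squares.
T2_GAUSS, T2_EXP = 0.02e-3, 1.0e-3          # seconds, illustrative

def basis(t):
    g = math.exp(-(t / T2_GAUSS) ** 2)      # solid-like, Gaussian decay
    e = math.exp(-t / T2_EXP)               # fluid-like, exponential decay
    return g, e

def fit_amplitudes(ts, ys):
    """Solve min ||A_g*g(t) + A_e*e(t) - y|| via normal equations (Cramer)."""
    gg = ge = ee = gy = ey = 0.0
    for t, y in zip(ts, ys):
        g, e = basis(t)
        gg += g * g; ge += g * e; ee += e * e
        gy += g * y; ey += e * y
    det = gg * ee - ge * ge
    return (gy * ee - ey * ge) / det, (gg * ey - ge * gy) / det

# Synthetic echo train: 30% solid-like signal, 70% fluid-like signal
ts = [i * 5e-6 for i in range(400)]
ys = [0.3 * basis(t)[0] + 0.7 * basis(t)[1] for t in ts]
A_g, A_e = fit_amplitudes(ts, ys)
```

Because the Gaussian component dies out within the first few echoes while the exponential persists, the two basis functions are far from collinear and the amplitude split is well conditioned.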
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sivagnanam, Kumaran; Raghavan, Vijaya G. S.; Shah, Manesh B
2012-01-01
Economically viable production of solvents through acetone butanol ethanol (ABE) fermentation requires a detailed understanding of Clostridium acetobutylicum. This study focuses on the proteomic profiling of C. acetobutylicum ATCC 824 from the stationary phase of ABE fermentation using xylose and compares it with the exponential growth phase by a shotgun proteomics approach. Comparative proteomic analysis revealed 22.9% of the C. acetobutylicum genome, and 18.6% was found to be common to both exponential and stationary phases. The proteomic profile of C. acetobutylicum changed during the ABE fermentation such that 17 proteins were significantly differentially expressed between the two phases. Specifically, the expression of five proteins, namely CAC2873, CAP0164, CAP0165, CAC3298, and CAC1742, involved in the solvent production pathway was found to be significantly lower in the stationary phase compared to the exponential growth phase. Similarly, the expression of fucose isomerase (CAC2610), xylulose kinase (CAC2612), and a putative uncharacterized protein (CAC2611) involved in the xylose utilization pathway was also significantly lower in the stationary phase. These findings provide an insight into the metabolic behavior of C. acetobutylicum between different phases of ABE fermentation using xylose.
NASA Technical Reports Server (NTRS)
Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.
1990-01-01
CO2 comprises 95% of the Martian atmosphere, which nevertheless also has a high aerosol content. Dust particles vary from less than 0.2 to greater than 3.0. CO2 is an active absorber and emitter at near-IR and IR wavelengths; the near-IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15-micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult: aerosol radiative transfer requires a multiple-scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure-atmosphere treatment in most models, which causes inaccuracies, a treatment called the exponential-sum or k-distribution approximation was developed. The chief advantage of the exponential-sum approach is that the integration of f(k) over k space can be computed more quickly than the integration of k_ν over frequency. The exponential-sum approach is superior to the photon-path-distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential-sum approach to Martian conditions.
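The rearrangement at the heart of the k-distribution method can be demonstrated directly: sorting the absorption coefficients leaves the band-mean transmission unchanged, and a few-term exponential sum is then a coarse quadrature over the smooth sorted curve (the synthetic line spectrum below is illustrative, not Martian CO2 data):

```python
import math

# Band-mean transmission T(u) = mean_nu exp(-k_nu * u) is unchanged when the
# absorption spectrum k_nu is sorted into its cumulative (k-) distribution:
# integrating over the rank g replaces integrating over frequency.
k_nu = [0.1 + 5.0 * math.exp(-((i % 40) - 20) ** 2 / 8.0) for i in range(400)]
u = 0.7                                      # absorber amount (illustrative)

t_line_by_line = sum(math.exp(-k * u) for k in k_nu) / len(k_nu)

k_sorted = sorted(k_nu)                      # the k-distribution
t_k_dist = sum(math.exp(-k * u) for k in k_sorted) / len(k_sorted)

# A few-term exponential sum is a coarse midpoint quadrature over the sorted
# curve, which is smooth even though the spectrum itself is full of lines.
n_terms = 8
step = len(k_sorted) // n_terms
t_exp_sum = sum(math.exp(-k_sorted[j * step + step // 2] * u)
                for j in range(n_terms)) / n_terms
```

Eight exponential terms stand in for a 400-point frequency integration here, which is the speed advantage the abstract describes.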
Polar exponential sensor arrays unify iconic and Hough space representation
NASA Technical Reports Server (NTRS)
Weiman, Carl F. R.
1990-01-01
The log-polar coordinate system, inherent in both polar exponential sensor arrays and log-polar remapped video imagery, is identical to the coordinate system of its corresponding Hough transform parameter space. The resulting unification of iconic and Hough domains simplifies computation for line recognition and eliminates the slope quantization problems inherent in the classical Cartesian Hough transform. The geometric organization of the algorithm is more amenable to massively parallel architectures than that of the Cartesian version. The neural architecture of the human visual cortex meets the geometric requirements to execute 'in-place' log-Hough algorithms of the kind described here.
A discrete classical space-time could require 6 extra-dimensions
NASA Astrophysics Data System (ADS)
Guillemant, Philippe; Medale, Marc; Abid, Cherifa
2018-01-01
We consider a discrete space-time in which conservation laws are computed in such a way that the density of information is kept bounded. We use a 2D billiard as a toy model to compute the uncertainty propagation in ball positions after every shock and the corresponding loss of phase information. Our main result is the computation of a critical time step above which billiard calculations are no longer deterministic, meaning that a multiverse of distinct billiard histories begins to appear, caused by the lack of information. We then highlight unexpected properties of this critical time step and the subsequent exponential growth of the number of histories with time, observing that after a certain duration all billiard states could become possible final states, independent of initial conditions. We conclude that if our space-time is really discrete, one would need to introduce extra dimensions in order to provide supplementary constraints that specify which history should be played.
Non-Poissonian Distribution of Tsunami Waiting Times
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2007-12-01
Analysis of the global tsunami catalog indicates that tsunami waiting times deviate from an exponential distribution one would expect from a Poisson process. Empirical density distributions of tsunami waiting times were determined using both global tsunami origin times and tsunami arrival times at a particular site with a sufficient catalog: Hilo, Hawai'i. Most sources for the tsunamis in the catalog are earthquakes; other sources include landslides and volcanogenic processes. Both datasets indicate an over-abundance of short waiting times in comparison to an exponential distribution. Two types of probability models are investigated to explain this observation. Model (1) is a universal scaling law that describes long-term clustering of sources with a gamma distribution. The shape parameter (γ) for the global tsunami distribution is similar to that of the global earthquake catalog γ=0.63-0.67 [Corral, 2004]. For the Hilo catalog, γ is slightly greater (0.75-0.82) and closer to an exponential distribution. This is explained by the fact that tsunamis from smaller triggered earthquakes or landslides are less likely to be recorded at a far-field station such as Hilo in comparison to the global catalog, which includes a greater proportion of local tsunamis. Model (2) is based on two distributions derived from Omori's law for the temporal decay of triggered sources (aftershocks). The first is the ETAS distribution derived by Saichev and Sornette [2007], which is shown to fit the distribution of observed tsunami waiting times. The second is a simpler two-parameter distribution that is the exponential distribution augmented by a linear decay in aftershocks multiplied by a time constant Ta. Examination of the sources associated with short tsunami waiting times indicate that triggered events include both earthquake and landslide tsunamis that begin in the vicinity of the primary source. 
Triggered seismogenic tsunamis do not necessarily originate from the same fault zone, however. For example, subduction-thrust and outer-rise earthquake pairs are evident, such as the November 2006 and January 2007 Kuril Islands tsunamigenic pair. Because of variations in tsunami source parameters, such as water depth above the source, triggered tsunami events with short waiting times are not systematically smaller than the primary tsunami.
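The over-abundance of short waiting times under a gamma law with shape γ < 1 can be checked directly; the shape value below is illustrative of the quoted range, and the incomplete-gamma power series is a standard expansion:

```python
import math

def lower_reg_gamma(k, x, terms=200):
    """Regularized lower incomplete gamma P(k, x) via its power series."""
    total, term = 0.0, 1.0 / k
    for n in range(1, terms):
        total += term
        term *= x / (k + n)
    return total * x ** k * math.exp(-x) / math.gamma(k)

shape = 0.65                   # clustering exponent, illustrative of 0.63-0.82
mean_wait = 1.0                # work in units of the mean waiting time
scale = mean_wait / shape      # gamma scale giving that mean

t_short = 0.1 * mean_wait
p_gamma = lower_reg_gamma(shape, t_short / scale)     # clustered model
p_exp = 1.0 - math.exp(-t_short / mean_wait)          # Poisson model
# p_gamma > p_exp: clustering over-produces short waiting times
```

For equal means, the shape < 1 gamma density diverges at the origin while the exponential density stays finite, which is exactly the excess of short intervals seen in the catalogs.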
Electron spin dynamics and optical orientation of Mn2+ ions in GaAs
NASA Astrophysics Data System (ADS)
Akimov, I. A.; Dzhioev, R. I.; Korenev, V. L.; Kusrayev, Yu. G.; Sapega, V. F.; Yakovlev, D. R.; Bayer, M.
2013-04-01
We present an overview of spin-related phenomena in GaAs doped with a low concentration of Mn acceptors (below 10^18 cm^-3). We use a combination of different experimental techniques, such as spin-flip Raman scattering and time-resolved photoluminescence, which allows us to evaluate the time evolution of both electron and Mn spins. We show that optical orientation of Mn ions is possible under application of a weak magnetic field, which is required to suppress the manganese spin relaxation. The optically oriented Mn2+ ions maintain the spin and return part of the polarization back to the electron spin system, providing a long-lived electron spin memory. This leads to a number of striking effects, such as non-exponential electron spin decay and spin precession in the effective exchange fields.
Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.; ...
2017-01-05
Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. Here we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, in which case they depend solely on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with a maximum relative approximation error of less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for the development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.
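The structure of such switchover approximations can be illustrated with the classic slab uptake solution (the textbook series, not the paper's fitted three-term polynomials or its switchover constants):

```python
import math

# Fractional diffusive uptake of a slab (half-thickness L, t_D = D*t/L^2):
# exact series vs. an early-time sqrt(t) form and the leading late-time
# exponential, joined at a switchover time. Illustrative textbook case.
def uptake_series(tD, terms=50):
    s = 0.0
    for n in range(terms):
        m = (2 * n + 1) * math.pi / 2.0
        s += (2.0 / m ** 2) * math.exp(-m ** 2 * tD)
    return 1.0 - s

def uptake_early(tD):                    # small-time approximation
    return 2.0 * math.sqrt(tD / math.pi)

def uptake_late(tD):                     # leading exponential term only
    m = math.pi / 2.0
    return 1.0 - (2.0 / m ** 2) * math.exp(-m ** 2 * tD)

t_switch = 0.2                           # illustrative switchover time

def uptake_approx(tD):
    return uptake_early(tD) if tD < t_switch else uptake_late(tD)
```

Near the switchover both branches agree with the full series to a fraction of a percent, which is the property the unified-form solutions exploit.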
Feature Representations for Neuromorphic Audio Spike Streams.
Anumula, Jithendar; Neil, Daniel; Delbruck, Tobi; Liu, Shih-Chii
2018-01-01
Event-driven neuromorphic spiking sensors such as the silicon retina and the silicon cochlea encode the external sensory stimuli as asynchronous streams of spikes across different channels or pixels. Combining state-of-the-art deep neural networks with the asynchronous outputs of these sensors has produced encouraging results on some datasets but remains challenging. While the lack of effective spiking networks to process the spike streams is one reason, the other reason is that the pre-processing methods required to convert the spike streams to the frame-based features needed for the deep networks still require further investigation. This work investigates the effectiveness of synchronous and asynchronous frame-based features generated using spike count and constant event binning, in combination with the use of a recurrent neural network, for solving a classification task on the N-TIDIGITS18 dataset. This spike-based dataset consists of recordings from the Dynamic Audio Sensor, a spiking silicon cochlea sensor, in response to the TIDIGITS audio dataset. We also propose a new pre-processing method which applies an exponential kernel on the output cochlea spikes so that the interspike timing information is better preserved. The results from the N-TIDIGITS18 dataset show that the exponential features perform better than the spike count features, with over 91% accuracy on the digit classification task. This accuracy corresponds to an improvement of at least 2.5% over the use of spike count features, establishing a new state of the art for this dataset.
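A minimal sketch of an exponential-kernel feature on a spike train follows; the spike times, frame length and time constant are illustrative, not the paper's exact binning pipeline:

```python
import math

# Causal exponential kernel applied to one channel's spike train, so each
# frame feature retains interspike-timing information rather than a bare count.
def exp_feature(spike_times, t, tau):
    """Sum of exp(-(t - t_i)/tau) over spikes with t_i <= t (seconds)."""
    return sum(math.exp(-(t - ti) / tau) for ti in spike_times if ti <= t)

def frame_features(spike_times, frame_dt, n_frames, tau):
    return [exp_feature(spike_times, (i + 1) * frame_dt, tau)
            for i in range(n_frames)]

spikes = [0.010, 0.012, 0.013, 0.060]        # seconds, one cochlea channel
feats = frame_features(spikes, frame_dt=0.005, n_frames=16, tau=0.020)
# a burst of close spikes yields a larger feature than an isolated spike,
# unlike a per-frame spike count that ignores timing within the frame
```

Between spikes the feature decays smoothly, so consecutive frames carry information about how recently the channel fired, which a plain count discards.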
Control of Growth Rate by Initial Substrate Concentration at Values Below Maximum Rate
Gaudy, Anthony F.; Obayashi, Alan; Gaudy, Elizabeth T.
1971-01-01
The hyperbolic relationship between specific growth rate, μ, and substrate concentration, proposed by Monod and used since as the basis for the theory of steady-state growth in continuous-flow systems, was tested experimentally in batch cultures. Use of a Flavobacterium sp. exhibiting a high saturation constant for growth in glucose minimal medium allowed direct measurement of growth rate and substrate concentration throughout the growth cycle in medium containing a rate-limiting initial concentration of glucose. Specific growth rates were also measured for a wide range of initial glucose concentrations. A plot of specific growth rate versus initial substrate concentration was found to fit the hyperbolic equation. However, the instantaneous relationship between specific growth rate and substrate concentration during growth, which is stated by the equation, was not observed. Well-defined exponential growth phases developed at initial substrate concentrations below that required to support the maximum exponential growth rate, and a constant doubling time was maintained until 50% of the substrate had been used. It is suggested that the external substrate concentration initially present "sets" the specific growth rate by establishing a steady-state internal concentration of substrate, possibly through control of the number of permeation sites. PMID:5137579
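The Monod hyperbola tested above, in a minimal sketch: μ = μ_max · S / (K_s + S), with the constant doubling time of an exponential phase following as ln(2)/μ. The parameter values in the usage below are illustrative, not the study's fitted constants.

```python
import math

def monod_mu(s0, mu_max, ks):
    """Monod relation: specific growth rate set by substrate concentration s0,
    with maximum rate mu_max and saturation (half-velocity) constant ks."""
    return mu_max * s0 / (ks + s0)

def doubling_time(mu):
    """Constant doubling time during exponential growth at specific rate mu."""
    return math.log(2.0) / mu
```

At s0 = ks the rate is half-maximal, and at saturating substrate it approaches mu_max, which is the hyperbolic shape the batch data were fitted against.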
Transplanckian censorship and global cosmic strings
NASA Astrophysics Data System (ADS)
Dolan, Matthew J.; Draper, Patrick; Kozaczuk, Jonathan; Patel, Hiren
2017-04-01
Large field excursions are required in a number of axion models of inflation. These models also possess global cosmic strings, around which the axion follows a path mirroring the inflationary trajectory. Cosmic strings are thus an interesting theoretical laboratory for the study of transplanckian field excursions. We describe connections between various effective field theory models of axion monodromy and study the classical spacetimes around their supercritical cosmic strings. For small decay constants f < Mp and large winding numbers n > Mp/f, the EFT is under control and the string cores undergo topological inflation, which may be either of exponential or power-law type. We show that the exterior spacetime is nonsingular and equivalent to a decompactifying cigar geometry, with the radion rolling in a potential generated by axion flux. Signals are able to circumnavigate infinite straight strings in finite but exponentially long time, t ~ exp(Δa/Mp). For finite loops of supercritical string in asymptotically flat space, we argue that if topological inflation occurs, then topological censorship implies transplanckian censorship, or that external observers are forbidden from threading the loop and observing the full excursion of the axion.
Technique for Very High Order Nonlinear Simulation and Validation
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.
2001-01-01
Finding the sources of sound in large nonlinear fields via direct simulation currently requires excessive computational cost. This paper describes a simple technique for efficiently solving the multidimensional nonlinear Euler equations that significantly reduces this cost and demonstrates a useful approach for validating high order nonlinear methods. Methods of up to 15th-order accuracy in space and time were compared, and it is shown that an algorithm with a fixed design accuracy approaches its maximal utility and then its usefulness decays exponentially unless higher accuracy is used. It is concluded that at least a 7th order method is required to efficiently propagate a harmonic wave using the nonlinear Euler equations to a distance of 5 wavelengths while maintaining an overall error tolerance that is low enough to capture both the mean flow and the acoustics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, A.; Tsiounis, Y.; Frankel, Y.
Recently, there has been an interest in making electronic cash protocols more practical for electronic commerce by developing e-cash which is divisible (e.g., a coin which can be spent incrementally but total purchases are limited to the monetary value of the coin). In Crypto'95, T. Okamoto presented the first practical divisible, untraceable, off-line e-cash scheme, which requires only O(log N) computations for each of the withdrawal, payment and deposit procedures, where N = (total coin value)/(smallest divisible unit). However, Okamoto's set-up procedure is quite inefficient (on the order of 4,000 multi-exponentiations, depending on the size of the RSA modulus). The authors formalize the notion of range-bounded commitment, originally used in Okamoto's account establishment protocol, and present a very efficient instantiation which allows one to construct the first truly efficient divisible e-cash system. The scheme only requires the equivalent of one (1) exponentiation for set-up, less than 2 exponentiations for withdrawal and around 20 for payment, while the size of the coin remains about 300 bytes. Hence, the withdrawal protocol is 3 orders of magnitude faster than Okamoto's, while the rest of the system remains equally efficient, allowing for implementation in smart cards. Similar to Okamoto's, the scheme is based on proofs whose cryptographic security assumptions are theoretically clarified.
Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.
2016-01-01
We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modelling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
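As a toy illustration of per-event decay analysis (not the authors' Bayesian algorithm): for an idealized, untruncated mono-exponential decay, the maximum-likelihood estimate of the lifetime is simply the sample mean of the individual photon arrival times, so every detected event contributes directly to the estimate.

```python
import random
import statistics

def simulate_decay(tau, n, seed=1):
    """Simulate n photon arrival times from an ideal mono-exponential decay
    with lifetime tau (no instrument response, no truncation window)."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / tau) for _ in range(n)]

def mle_lifetime(times):
    """MLE of tau for an untruncated mono-exponential: the sample mean."""
    return statistics.fmean(times)
```

Real TCSPC data add a finite measurement window, background, and an instrument response function, which is why the abstract's careful system modelling matters; this sketch only shows the idealized limit.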
Theoretical analysis of exponential transversal method of lines for the diffusion equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salazar, A.; Raydan, M.; Campo, A.
1996-12-31
Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. This new method is inspired by the Method of Lines (MOL), with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method exhibits a very small truncation error in time, and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.
Optimal exponential synchronization of general chaotic delayed neural networks: an LMI approach.
Liu, Meiqin
2009-09-01
This paper investigates the optimal exponential synchronization problem of general chaotic neural networks with or without time delays by virtue of Lyapunov-Krasovskii stability theory and the linear matrix inequality (LMI) technique. This general model, which is the interconnection of a linear delayed dynamic system and a bounded static nonlinear operator, covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks (CNNs), bidirectional associative memory (BAM) networks, and recurrent multilayer perceptrons (RMLPs) with or without delays. Using the drive-response concept, time-delay feedback controllers are designed to synchronize two identical chaotic neural networks as quickly as possible. The control design equations are shown to be a generalized eigenvalue problem (GEVP) which can be easily solved by various convex optimization algorithms to determine the optimal control law and the optimal exponential synchronization rate. Detailed comparisons with existing results are made and numerical simulations are carried out to demonstrate the effectiveness of the established synchronization laws.
NASA Astrophysics Data System (ADS)
Cubrovic, Mihailo
2005-02-01
We report on our theoretical and numerical results concerning the transport mechanisms in the asteroid belt. We first derive a simple kinetic model of chaotic diffusion and show how it gives rise to some simple correlations (but not laws) between the removal time (the time for an asteroid to experience a qualitative change of dynamical behavior and enter a wide chaotic zone) and the Lyapunov time. The correlations are shown to arise in two different regimes, characterized by exponential and power-law scalings. We also show how the so-called "stable chaos" (exponential regime) is related to anomalous diffusion. Finally, we check our results numerically and discuss their possible applications in analyzing the motion of particular asteroids.
On the non-exponentiality of the dielectric Debye-like relaxation of monoalcohols
NASA Astrophysics Data System (ADS)
Arrese-Igor, S.; Alegría, A.; Colmenero, J.
2017-03-01
We have investigated the Debye-like relaxation in a series of monoalcohols (MAs) by broadband dielectric spectroscopy and thermally stimulated depolarization current techniques in order to gain further insight into the time dispersion of this intriguing relaxation. Results indicate that the Debye-like relaxation of MAs is not always of exponential type and conforms well to a dispersion of Cole-Davidson type. Apart from the already reported non-exponentiality of the Debye-like relaxation in 2-hexyl-1-decanol and 2-butyl-1-octanol, a detailed analysis of the dielectric permittivity of 5-methyl-3-heptanol shows that this MA also presents some extent of dispersion in its Debye-like relaxation, which strongly depends on the temperature. Results suggest that the non-exponential character of the Debye-like relaxation might be a general characteristic of Debye-like relaxations that are not so intense relative to the α relaxation. Finally, we briefly discuss the T-dependence and possible origin of the observed dispersion.
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.
2002-01-01
The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on advanced structural ceramics tested under constant stress and cyclic stress loading at ambient and elevated temperatures. The data fit to the relation between the time to failure and applied stress (or maximum applied stress in cyclic loading) was very reasonable for most of the materials studied. It was also found that life prediction for cyclic stress loading from data of constant stress loading in the exponential formulation was in good agreement with the experimental data, resulting in a similar degree of accuracy as compared with the power-law formulation. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important slow-crack-growth (SCG) parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.
Krueger, W B; Kolodziej, B J
1976-01-01
Both atomic absorption spectrophotometry (AAS) and neutron activation analysis have been utilized to determine cellular Cu levels in Bacillus megaterium ATCC 19213. Both methods were selected for their sensitivity in detecting nanogram quantities of Cu. Data from both methods demonstrated identical patterns of Cu uptake during exponential growth and sporulation. Late exponential phase cells contained less Cu than postexponential t2 cells, while t5 cells contained amounts equivalent to exponential cells. The t11 phase-bright forespore-containing cells had a higher Cu content than those of earlier time periods, and the free spores had the highest Cu content. Analysis of the culture medium by AAS corroborated these data by showing concomitant Cu uptake during exponential growth and into the t2 postexponential phase of sporulation. From t2 to t4, Cu egressed from the cells, followed by a secondary uptake during the maturation of phase-dark forespores into phase-bright forespores (t6-t9).
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1988-01-01
This Standard presents descriptive and analytical techniques for NASA trend analysis applications. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. Use of this Standard is not mandatory; however, it should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend Analysis is neither a precise term nor a circumscribed methodology, but rather connotes, generally, quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this Standard. The document presents the basic ideas needed for qualitative and quantitative assessment of trends, together with relevant examples. A list of references provides additional sources of information.
Moser, Barry Kurt; Halabi, Susan
2013-01-01
In this paper we develop the methodology for designing clinical trials with any factorial arrangement when the primary outcome is time to event. We provide a matrix formulation for calculating the sample size and study duration necessary to test any effect with a pre-specified type I error rate and power. Assuming that a time to event follows an exponential distribution, we describe the relationships between the effect size, the power, and the sample size. We present examples for illustration purposes. We provide a simulation study to verify the numerical calculations of the expected number of events and the duration of the trial. The change in the power produced by a reduced number of observations or by accruing no patients to certain factorial combinations is also described. PMID:25530661
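Under the exponential assumption, the link between effect size, power, and the required number of events can be sketched for the simplest case of a two-arm 1:1 comparison with a Schoenfeld-type approximation. This is a generic textbook formula, not the paper's matrix formulation for general factorial designs.

```python
import math
from statistics import NormalDist

def required_events(hazard_ratio, alpha=0.05, power=0.80):
    """Approximate number of events needed to detect hazard_ratio with a
    two-sided level-alpha log-rank test under 1:1 allocation:
    d = 4 * (z_{1-alpha/2} + z_{power})^2 / ln(hazard_ratio)^2.
    Under exponential event times, hazard_ratio = lambda_1 / lambda_2."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    return 4.0 * (z_a + z_b) ** 2 / math.log(hazard_ratio) ** 2
```

For a hazard ratio of 0.5 at 80% power this gives roughly 65 events; smaller effects require sharply more, which is the sample-size/effect-size relationship the abstract describes.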
NASA Astrophysics Data System (ADS)
Cao, Jinde; Wang, Yanyan
2010-05-01
In this paper, the bi-periodicity issue is discussed for Cohen-Grossberg-type (CG-type) bidirectional associative memory (BAM) neural networks (NNs) with time-varying delays and standard activation functions. It is shown that the model considered in this paper has two periodic orbits located in saturation regions and they are locally exponentially stable. Meanwhile, some conditions are derived to ensure that, in any designated region, the model has a locally exponentially stable or globally exponentially attractive periodic orbit located in it. As a special case of bi-periodicity, some results are also presented for the system with constant external inputs. Finally, four examples are given to illustrate the effectiveness of the obtained results.
NASA Astrophysics Data System (ADS)
Huang, Juntao; Shu, Chi-Wang
2018-05-01
In this paper, we develop bound-preserving modified exponential Runge-Kutta (RK) discontinuous Galerkin (DG) schemes to solve scalar hyperbolic equations with stiff source terms by extending the idea in Zhang and Shu [43]. Exponential strong stability preserving (SSP) high order time discretizations are constructed and then modified to overcome the stiffness and preserve the bound of the numerical solutions. It is also straightforward to extend the method to two dimensions on rectangular and triangular meshes. Even though we only discuss the bound-preserving limiter for DG schemes, it can also be applied to high order finite volume schemes, such as weighted essentially non-oscillatory (WENO) finite volume schemes as well.
Yang, Wengui; Yu, Wenwu; Cao, Jinde; Alsaadi, Fuad E; Hayat, Tasawar
2018-02-01
This paper investigates the stability and lag synchronization for memristor-based fuzzy Cohen-Grossberg bidirectional associative memory (BAM) neural networks with mixed delays (asynchronous time delays and continuously distributed delays) and impulses. By applying the inequality analysis technique, homeomorphism theory and some suitable Lyapunov-Krasovskii functionals, some new sufficient conditions for the uniqueness and global exponential stability of equilibrium point are established. Furthermore, we obtain several sufficient criteria concerning globally exponential lag synchronization for the proposed system based on the framework of Filippov solution, differential inclusion theory and control theory. In addition, some examples with numerical simulations are given to illustrate the feasibility and validity of obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Timing of repetition suppression of event-related potentials to unattended objects.
Stefanics, Gabor; Heinzle, Jakob; Czigler, István; Valentini, Elia; Stephan, Klaas Enno
2018-05-26
Current theories of object perception emphasize the automatic nature of perceptual inference. Repetition suppression (RS), the successive decrease of brain responses to repeated stimuli, is thought to reflect the optimization of perceptual inference through neural plasticity. While functional imaging studies revealed brain regions that show suppressed responses to the repeated presentation of an object, little is known about the intra-trial time course of repetition effects to everyday objects. Here we used event-related potentials (ERP) to task-irrelevant line-drawn objects, while participants engaged in a distractor task. We quantified changes in ERPs over repetitions using three general linear models (GLM) that modelled RS by an exponential, linear, or categorical "change detection" function in each subject. Our aim was to select the model with highest evidence and determine the within-trial time-course and scalp distribution of repetition effects using that model. Model comparison revealed the superiority of the exponential model indicating that repetition effects are observable for trials beyond the first repetition. Model parameter estimates revealed a sequence of RS effects in three time windows (86-140ms, 322-360ms, and 400-446ms) and with occipital, temporo-parietal, and fronto-temporal distribution, respectively. An interval of repetition enhancement (RE) was also observed (320-340ms) over occipito-temporal sensors. Our results show that automatic processing of task-irrelevant objects involves multiple intervals of RS with distinct scalp topographies. These sequential intervals of RS and RE might reflect the short-term plasticity required for optimization of perceptual inference and the associated changes in prediction errors (PE) and predictions, respectively, over stimulus repetitions during automatic object processing. This article is protected by copyright. All rights reserved. 
© 2018 The Authors European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Bièche, I; Olivi, M; Champème, M H; Vidaud, D; Lidereau, R; Vidaud, M
1998-11-23
Gene amplification is a common event in the progression of human cancers, and amplified oncogenes have been shown to have diagnostic, prognostic and therapeutic relevance. A kinetic quantitative polymerase-chain-reaction (PCR) method, based on fluorescent TaqMan methodology and a new instrument (ABI Prism 7700 Sequence Detection System) capable of measuring fluorescence in real time, was used to quantify gene amplification in tumor DNA. Reactions are characterized by the point during cycling when PCR amplification is still in the exponential phase, rather than by the amount of PCR product accumulated after a fixed number of cycles. None of the reaction components is limiting during the exponential phase, meaning that values are highly reproducible in reactions starting with the same copy number. This greatly improves the precision of DNA quantification. Moreover, real-time PCR does not require post-PCR sample handling, thereby preventing potential PCR-product carry-over contamination; it possesses a wide dynamic range of quantification and results in much faster and higher sample throughput. The real-time PCR method was used to develop and validate a simple and rapid assay for the detection and quantification of the 3 most frequently amplified genes (myc, ccnd1 and erbB2) in breast tumors. Extra copies of myc, ccnd1 and erbB2 were observed in 10, 23 and 15%, respectively, of 108 breast-tumor DNAs; the largest observed numbers of gene copies were 4.6, 18.6 and 15.1, respectively. These results correlated well with those of Southern blotting. The use of this new semi-automated technique will make molecular analysis of human cancers simpler and more reliable, and should find broad applications in clinical and research settings.
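The exponential-phase argument above is what underlies relative quantification from cycle thresholds: with per-cycle amplification efficiency E, product grows as (1+E)^cycles, so starting copy numbers relate as (1+E)^ΔCt. This is the generic ΔCt calculation, sketched here, not this study's exact calibration.

```python
def relative_copies(ct_sample, ct_reference, efficiency=1.0):
    """Ratio of starting template in sample vs. reference, assuming both
    thresholds are crossed during the exponential phase with per-cycle
    efficiency E (E = 1 means perfect doubling each cycle):
    ratio = (1 + E) ** (ct_reference - ct_sample)."""
    return (1.0 + efficiency) ** (ct_reference - ct_sample)
```

A sample crossing threshold 3 cycles earlier than the reference at perfect efficiency therefore started with 8x as many copies.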
Scalable Performance Measurement and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, Todd
2009-01-01
Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
Personality influences temporal discounting preferences: behavioral and brain evidence.
Manning, Joshua; Hedden, Trey; Wickens, Nina; Whitfield-Gabrieli, Susan; Prelec, Drazen; Gabrieli, John D E
2014-09-01
Personality traits are stable predictors of many life outcomes that are associated with important decisions that involve tradeoffs over time. Therefore, a fundamental question is how tradeoffs over time vary from person to person in relation to stable personality traits. We investigated the influence of personality, as measured by the Five-Factor Model, on time preferences and on neural activity engaged by intertemporal choice. During functional magnetic resonance imaging (fMRI), participants made choices between smaller-sooner and larger-later monetary rewards. For each participant, we estimated a constant-sensitivity discount function that dissociates impatience (devaluation of future consequences) from time sensitivity (consistency with rational, exponential discounting). Overall, higher neuroticism was associated with a relatively greater preference for immediate rewards and higher conscientiousness with a relatively greater preference for delayed rewards. Specifically, higher conscientiousness correlated positively with lower short-term impatience and more exponential time preferences, whereas higher neuroticism (lower emotional stability) correlated positively with higher short-term impatience and less exponential time preferences. Cognitive-control and reward brain regions were more activated when higher conscientiousness participants selected a smaller-sooner reward and, conversely, when higher neuroticism participants selected a larger-later reward. The greater activations that occurred when choosing rewards that contradicted personality predispositions may reflect the greater recruitment of mental resources needed to override those predispositions. These findings reveal that stable personality traits fundamentally influence how rewards are chosen over time. Copyright © 2014 Elsevier Inc. All rights reserved.
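The constant-sensitivity discount function referred to above can be sketched as D(t) = exp(-(κt)^α), where κ captures impatience and α captures time sensitivity, with α = 1 recovering rational exponential discounting. Treat this exact parameterization as an assumption; it follows the Ebert-Prelec-style form rather than the study's fitted specification.

```python
import math

def cs_discount(t, kappa, alpha):
    """Constant-sensitivity discounting (assumed form): D(t) = exp(-(kappa*t)**alpha).
    kappa: impatience (overall devaluation of future consequences).
    alpha: time sensitivity; alpha = 1 is rational exponential discounting,
    alpha < 1 gives disproportionately steep discounting at short delays."""
    return math.exp(-((kappa * t) ** alpha))
```

The dissociation the study exploits is visible directly: lowering α below 1 while holding κ fixed depresses the discount factor at short delays, i.e., raises short-term impatience.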
Arima model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making a prediction. The comparison is focused on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, data from The Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and The Price of SMR 20 Rubber Type (cents/kg), three different time series, are used in the comparison process. Then, the forecasting accuracy of each model is measured by examining the prediction error produced, using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the time series for Exchange Rates. On the contrary, the Exponential Smoothing Method can produce better forecasts for the Exchange Rates series, which has a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
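A minimal sketch of simple exponential smoothing with the three error measures used in the comparison (the smoothing constant here is an illustrative choice, not the study's fitted value):

```python
def ses_one_step(series, alpha=0.3):
    """Simple exponential smoothing: level = alpha*y + (1-alpha)*level.
    Returns the in-sample one-step-ahead forecasts (for series[1:]) and
    the forecast for the next, unobserved period."""
    level = series[0]
    preds = []
    for y in series[1:]:
        preds.append(level)                    # forecast made before seeing y
        level = alpha * y + (1 - alpha) * level
    return preds, level

def mse(errors):
    return sum(e * e for e in errors) / len(errors)

def mad(errors):
    return sum(abs(e) for e in errors) / len(errors)

def mape(actual, predicted):
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)
```

Comparing these error measures between the smoothed forecasts and an ARIMA fit over held-out periods is the kind of evaluation the study performs.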
Huang, Haiying; Du, Qiaosheng; Kang, Xibing
2013-11-01
In this paper, a class of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays is investigated. The jumping parameters are modeled as a continuous-time finite-state Markov chain. At first, the existence of equilibrium point for the addressed neural networks is studied. By utilizing the Lyapunov stability theory, stochastic analysis theory and linear matrix inequality (LMI) technique, new delay-dependent stability criteria are presented in terms of linear matrix inequalities to guarantee the neural networks to be globally exponentially stable in the mean square. Numerical simulations are carried out to illustrate the main results. © 2013 ISA. Published by ISA. All rights reserved.
Unusually large Stokes shift for a near-infrared emitting DNA-stabilized silver nanocluster
NASA Astrophysics Data System (ADS)
Ammitzbøll Bogh, Sidsel; Carro-Temboury, Miguel R.; Cerretani, Cecilia; Swasey, Steven M.; Copp, Stacy M.; Gwinn, Elisabeth G.; Vosch, Tom
2018-04-01
In this paper we present a new near-IR emitting silver nanocluster (NIR-DNA-AgNC) with an unusually large Stokes shift between absorption and emission maximum (211 nm or 5600 cm-1). We studied the effect of viscosity and temperature on the steady state and time-resolved emission. The time-resolved results on NIR-DNA-AgNC show that the relaxation dynamics slow down significantly with increasing viscosity of the solvent. In high viscosity solution, the spectral relaxation stretches well into the nanosecond scale. As a result of this slow spectral relaxation in high viscosity solutions, a multi-exponential fluorescence decay time behavior is observed, in contrast to the more mono-exponential decay in low viscosity solution.
NASA Astrophysics Data System (ADS)
Yuan, Manman; Wang, Weiping; Luo, Xiong; Li, Lixiang; Kurths, Jürgen; Wang, Xiao
2018-03-01
This paper is concerned with the exponential lag function projective synchronization of memristive multidirectional associative memory neural networks (MMAMNNs). First, we propose a new model of MMAMNNs with mixed time-varying delays. In the proposed approach, the mixed delays include time-varying discrete delays and distributed time delays. Second, we design two kinds of hybrid controllers. Traditional control methods lack the capability of reflecting variable synaptic weights. In this paper, the controllers are carefully designed to confirm the process of different types of synchronization in the MMAMNNs. Third, sufficient criteria guaranteeing the synchronization of the system are derived based on the drive-response concept. Finally, the effectiveness of the proposed mechanism is validated with numerical experiments.
An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle
NASA Astrophysics Data System (ADS)
Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei
2016-08-01
We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for the distribution form and discussed how the constraints affect the distribution function. We speculate that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0 the distribution cannot be a pure power law but must carry an exponential cutoff, which may have been overlooked in previous studies.
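The distribution form identified above, a power law with an exponential cutoff p(t) ∝ t^(−α) e^(−t/τ), can be sketched numerically. The exponent and cutoff values below are hypothetical illustrations, not parameters from the study:

```python
import numpy as np

def powerlaw_cutoff(t, alpha, tau):
    """Unnormalized power law with exponential cutoff: t^(-alpha) * exp(-t/tau)."""
    t = np.asarray(t, dtype=float)
    return t ** (-alpha) * np.exp(-t / tau)

# Hypothetical entry-time intervals (seconds) with a fitted exponent < 1.0,
# the regime where the abstract argues a pure power law is impossible.
t = np.linspace(1.0, 500.0, 50000)
p = powerlaw_cutoff(t, alpha=0.8, tau=50.0)

# Normalize on the grid with the trapezoid rule.
p /= np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t))

# For comparison: the pure power law with the same exponent, also normalized.
pure = t ** (-0.8)
pure /= np.sum(0.5 * (pure[1:] + pure[:-1]) * np.diff(t))
# The cutoff makes the tail fall far below the pure power-law tail.
```

A quick check that `p` integrates to one and that the cutoff suppresses the tail relative to `pure` confirms the qualitative picture described in the abstract.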
Effective equilibrium picture in the xy model with exponentially correlated noise
NASA Astrophysics Data System (ADS)
Paoluzzi, Matteo; Marconi, Umberto Marini Bettolo; Maggi, Claudio
2018-02-01
We study the effect of exponentially correlated noise on the xy model in the limit of small correlation time, discussing the order-disorder transition in the mean field and the topological transition in two dimensions. We map the steady states of the nonequilibrium dynamics into an effective equilibrium theory. In the mean field, the critical temperature increases with the noise correlation time τ, indicating that memory effects promote ordering. This finding is confirmed by numerical simulations. The topological transition temperature in two dimensions remains unchanged. However, finite-size effects induce a crossover in the vortex proliferation that is confirmed by numerical simulations.
Calvi, Andrea; Ferrari, Alberto; Sbuelz, Luca; Goldoni, Andrea; Modesti, Silvio
2016-05-19
Multi-walled carbon nanotubes (CNTs) have been grown in situ on a SiO2 substrate and used as gas sensors. For this purpose, the voltage response of the CNTs as a function of time has been used to detect H2 and CO2 at various concentrations by supplying a constant current to the system. The analysis of both adsorption and desorption curves has revealed two different exponential behaviours for each curve. The study of the characteristic times, obtained from fitting the data, has allowed us to separately identify chemisorption and physisorption processes on the CNTs.
Time-resolved photoluminescence investigation of (Mg,Zn)O alloy growth on a non-polar plane
NASA Astrophysics Data System (ADS)
Mohammed Ali, Mohammed Jassim; Chauveau, J. M.; Bretagnon, T.
2018-04-01
Exciton recombination dynamics in a ZnMgO alloy have been studied by time-resolved photoluminescence as a function of temperature. At low temperature, localisation effects of the exciton are found to play a significant role. The photoluminescence (PL) decays are bi-exponential. The short lifetime has a constant value, whereas the long lifetime depends on temperature. For temperatures higher than 100 K the decays become mono-exponential. The PL decays are dominated by non-radiative processes at temperatures above 150 K. The temperature dependence of the PL lifetime is analysed using a model that includes localisation effects and non-radiative recombination.
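The bi-exponential decay analysis described in entries like this one can be sketched generically. The model below and the tail-fit trick are standard; the two lifetimes (0.5 ns and 5 ns) and amplitudes are hypothetical, not values from the study:

```python
import numpy as np

def biexp(t, a1, tau1, a2, tau2):
    """Bi-exponential PL decay: a1*exp(-t/tau1) + a2*exp(-t/tau2)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Hypothetical decay with a short (0.5 ns) and a long (5 ns) lifetime component.
t = np.linspace(0.0, 30.0, 3000)   # ns
signal = biexp(t, 0.7, 0.5, 0.3, 5.0)

# At late times only the long component survives, so the decay looks
# mono-exponential there; a straight-line fit to log(signal) on the tail
# recovers the long lifetime.
tail = t > 10.0
slope, _ = np.polyfit(t[tail], np.log(signal[tail]), 1)
tau_long = -1.0 / slope            # close to 5.0 ns
```

This mirrors the qualitative observation in the abstract: once one component dominates (here at late times; in the paper at high temperature), the decay appears mono-exponential.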
In vivo growth of 60 non-screening detected lung cancers: a computed tomography study.
Mets, Onno M; Chung, Kaman; Zanen, Pieter; Scholten, Ernst T; Veldhuis, Wouter B; van Ginneken, Bram; Prokop, Mathias; Schaefer-Prokop, Cornelia M; de Jong, Pim A
2018-04-01
Current pulmonary nodule management guidelines are based on nodule volume doubling time, which assumes exponential growth behaviour. However, this assumption has never been validated in vivo in the routine-care target population. This study evaluates growth patterns of untreated solid and subsolid lung cancers of various histologies in a non-screening setting. Growth behaviour of pathology-proven lung cancers from two academic centres that were imaged at least three times before diagnosis (n=60) was analysed using dedicated software. Random-intercept random-slope mixed-models analysis was applied to test which growth pattern most accurately described lung cancer growth. Individual growth curves were plotted per pathology subgroup and nodule type. We confirmed that growth in both subsolid and solid lung cancers is best explained by an exponential model. However, subsolid lesions generally progress more slowly than solid ones. Baseline lesion volume was not related to growth, indicating that smaller lesions do not grow more slowly than larger ones. By showing that lung cancer conforms to exponential growth, we provide the first experimental basis in the routine-care setting for the assumption made in volume doubling time analysis.
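The exponential-growth assumption that this study validates is exactly what makes volume doubling time (VDT) a one-line computation from two scans. The nodule volumes and interval below are hypothetical numbers for illustration:

```python
import math

def volume_doubling_time(v1, v2, dt_days):
    """Doubling time implied by two volume measurements under exponential growth:
    V(t) = V1 * 2**(t / VDT)  =>  VDT = dt * ln(2) / ln(V2 / V1)."""
    return dt_days * math.log(2.0) / math.log(v2 / v1)

# Hypothetical solid nodule growing from 100 to 200 mm^3 over 90 days.
vdt = volume_doubling_time(100.0, 200.0, 90.0)   # exactly 90 days
```

If growth were not exponential, the VDT computed from different scan pairs of the same nodule would drift systematically, which is what the mixed-models analysis in the study tests for.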
Banushkina, Polina V; Krivov, Sergei V
2013-12-10
The free-energy landscape can provide a quantitative description of folding dynamics, if determined as a function of an optimally chosen reaction coordinate. Here, we construct the optimal coordinate and the associated free-energy profile for all-helical proteins HP35 and its norleucine (Nle/Nle) double mutant, based on realistic equilibrium folding simulations [Piana et al. Proc. Natl. Acad. Sci. U.S.A. 2012, 109, 17845]. From the obtained profiles, we directly determine such basic properties of folding dynamics as the configurations of the minima and transition states (TS), the formation of secondary structure and hydrophobic core during the folding process, the value of the pre-exponential factor and its relation to the transition path times, the relation between the autocorrelation times in TS and minima. We also present an investigation of the accuracy of the pre-exponential factor estimation based on the transition-path times. Four different estimations of the pre-exponential factor for both proteins give k0^(-1) values of approximately a few tens of nanoseconds. Our analysis gives detailed information about folding of the proteins and can serve as a rigorous common language for extensive comparison between experiment and simulation.
NASA Astrophysics Data System (ADS)
Behroozmand, Ahmad A.; Auken, Esben; Fiandaca, Gianluca; Christiansen, Anders Vest; Christensen, Niels B.
2012-08-01
We present a new, efficient and accurate forward modelling and inversion scheme for magnetic resonance sounding (MRS) data. MRS, also called surface-nuclear magnetic resonance (surface-NMR), is the only non-invasive geophysical technique that directly detects free water in the subsurface. Based on the physical principle of NMR, protons of the water molecules in the subsurface are excited at a specific frequency, and the superposition of signals from all protons within the excited earth volume is measured to estimate the subsurface water content and other hydrological parameters. In this paper, a new inversion scheme is presented in which the entire data set is used, and multi-exponential behaviour of the NMR signal is approximated by the simple stretched-exponential approach. Compared to the mono-exponential interpretation of the decaying NMR signal, we introduce a single extra parameter, the stretching exponent, which helps describe the porosity in terms of a single relaxation time parameter, and helps to determine correct initial amplitude and relaxation time of the signal. Moreover, compared to a multi-exponential interpretation of the MRS data, the decay behaviour is approximated with considerably fewer parameters. The forward response is calculated in an efficient numerical manner in terms of magnetic field calculation, discretization and integration schemes, which allows fast computation while maintaining accuracy. A piecewise linear transmitter loop is considered for electromagnetic modelling of conductivities in the layered half-space providing electromagnetic modelling of arbitrary loop shapes. The decaying signal is integrated over time windows, called gates, which increases the signal-to-noise ratio, particularly at late times, and the data vector is described with a minimum number of samples, that is, gates. 
The accuracy of the forward response is investigated by comparing an MRS forward response with responses from three other approaches, outlining significant differences among them. Altogether, a full MRS forward response is calculated in about 20 s, and the computation scales so that on 10 processors the calculation time is reduced to about 3-4 s. The proposed approach is examined on synthetic data and on a field example, which demonstrate the capability of the scheme. The results of the field example agree well with information from an on-site borehole.
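The stretched-exponential decay used in this inversion differs from the mono-exponential case by a single extra parameter, the stretching exponent. A minimal sketch (all parameter values hypothetical):

```python
import numpy as np

def stretched_exp(t, v0, t_star, c):
    """Stretched-exponential NMR decay V(t) = V0 * exp(-(t/T*)^c).
    c = 1 recovers a mono-exponential decay with relaxation time T*."""
    return v0 * np.exp(-((t / t_star) ** c))

t = np.linspace(0.0, 1.0, 1001)           # s, hypothetical gate centres
mono = stretched_exp(t, 1.0, 0.2, 1.0)    # single relaxation time
stretched = stretched_exp(t, 1.0, 0.2, 0.6)

# Both curves pass through V0/e at t = T*, but the stretched decay has a
# heavier tail, mimicking a superposition of many relaxation times with
# only one extra fitted parameter.
```

This is why the approach can approximate multi-exponential behaviour with far fewer parameters than fitting several discrete relaxation times.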
The Complexity of Folding Self-Folding Origami
NASA Astrophysics Data System (ADS)
Stern, Menachem; Pinson, Matthew B.; Murugan, Arvind
2017-10-01
Why is it difficult to refold a previously folded sheet of paper? We show that even crease patterns with only one designed folding motion inevitably contain an exponential number of "distractor" folding branches accessible from a bifurcation at the flat state. Consequently, refolding a sheet requires finding the ground state in a glassy energy landscape with an exponential number of other attractors of higher energy, much like in models of protein folding (Levinthal's paradox) and other NP-hard satisfiability (SAT) problems. As in these problems, we find that refolding a sheet requires actuation at multiple carefully chosen creases. We show that seeding successful folding in this way can be understood in terms of subpatterns that fold when cut out ("folding islands"). Besides providing guidelines for the placement of active hinges in origami applications, our results point to fundamental limits on the programmability of energy landscapes in sheets.
Detecting metrologically useful asymmetry and entanglement by a few local measurements
NASA Astrophysics Data System (ADS)
Zhang, Chao; Yadin, Benjamin; Hou, Zhi-Bo; Cao, Huan; Liu, Bi-Heng; Huang, Yun-Feng; Maity, Reevu; Vedral, Vlatko; Li, Chuan-Feng; Guo, Guang-Can; Girolami, Davide
2017-10-01
Important properties of a quantum system are not directly measurable, but they can be disclosed by how fast the system changes under controlled perturbations. In particular, asymmetry and entanglement can be verified by reconstructing the state of a quantum system. Yet, this usually requires experimental and computational resources which increase exponentially with the system size. Here we show how to detect metrologically useful asymmetry and entanglement by a limited number of measurements. This is achieved by studying how they affect the speed of evolution of a system under a unitary transformation. We show that the speed of multiqubit systems can be evaluated by measuring a set of local observables, providing exponential advantage with respect to state tomography. Indeed, the presented method requires neither the knowledge of the state and the parameter-encoding Hamiltonian nor global measurements performed on all the constituent subsystems. We implement the detection scheme in an all-optical experiment.
Heterogeneous characters modeling of instant message services users' online behavior.
Cui, Hongyan; Li, Ruibing; Fang, Yajun; Horn, Berthold; Welsch, Roy E
2018-01-01
Research on the temporal characteristics of human dynamics has attracted much attention for its contribution to various areas such as communication, medical treatment, and finance. Existing studies show that the time intervals between two consecutive events present different non-Poisson characteristics, such as power-law, Pareto, bimodal power-law, exponential, and piecewise power-law distributions. With the occurrence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to the QQ and WeChat services, the two most popular instant messaging services in China, and present a new finding: when the value of the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law forms, indicating the heterogeneous character of IM service users' online behavior on different time scales. We infer that this heterogeneous character is related to the communication mechanism of IM and the habits of users. We then develop a combination of an exponential model and an interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service differs between two cities, which is correlated with the popularity of the services. Our research is useful for applications in information diffusion, prediction of the economic development of cities, and so on.
Fourier Transforms of Pulses Containing Exponential Leading and Trailing Profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warshaw, S I
2001-07-15
In this monograph we discuss a class of pulse shapes that have exponential rise and fall profiles, and evaluate their Fourier transforms. Such pulses can be used as models for time-varying processes that produce an initial exponential rise and end with the exponential decay of a specified physical quantity. Unipolar examples of such processes include the voltage record of an increasingly rapid charge followed by a damped discharge of a capacitor bank, and the amplitude of an electromagnetic pulse produced by a nuclear explosion. Bipolar examples include acoustic N waves propagating for long distances in the atmosphere that have resulted from explosions in the air, and sonic booms generated by supersonic aircraft. These bipolar pulses have leading and trailing edges that appear to be exponential in character. To the author's knowledge the Fourier transforms of such pulses are not generally well-known or tabulated in Fourier transform compendia, and it is the purpose of this monograph to derive and present these transforms. These Fourier transforms are related to a definite integral of a ratio of exponential functions, whose evaluation we carry out in considerable detail. From this result we derive the Fourier transforms of other related functions. In all Figures showing plots of calculated curves, the actual numbers used for the function parameter values and dependent variables are arbitrary and non-dimensional, and are not identified with any particular physical phenomenon or model.
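The simplest member of this pulse class, an exponential rise for t < 0 joined to an exponential decay for t > 0, has a compact closed-form transform that is easy to verify numerically. The rate parameters a and b and the sign convention below are illustrative assumptions, not the monograph's own notation:

```python
import numpy as np

def pulse(t, a, b):
    """Unipolar pulse: exponential rise exp(b*t) for t < 0, decay exp(-a*t) for t >= 0."""
    return np.where(t < 0, np.exp(b * t), np.exp(-a * t))

def ft_numeric(a, b, omega, t_max=60.0, n=400001):
    """Direct trapezoid-rule evaluation of F(w) = integral f(t) exp(-i*w*t) dt."""
    t = np.linspace(-t_max, t_max, n)
    f = pulse(t, a, b) * np.exp(-1j * omega * t)
    return np.sum(0.5 * (f[1:] + f[:-1])) * (t[1] - t[0])

def ft_analytic(a, b, omega):
    """Closed form from integrating each exponential branch separately:
    F(w) = 1/(b - i*w) + 1/(a + i*w)."""
    return 1.0 / (b - 1j * omega) + 1.0 / (a + 1j * omega)
```

At omega = 0 this reduces to the pulse area 1/b + 1/a, a quick sanity check on both routines.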
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orellana, Roberto; Chaput, Gina; Markillie, Lye Meng; Mitchell, Hugh; Gaffrey, Matt; Orr, Galya; DeAngelis, Kristen M
The production of lignocellulosic-derived biofuels is a highly promising source of alternative energy, but it has been constrained by the lack of a microbial platform capable of efficiently degrading this recalcitrant material and coping with by-products that can be toxic to cells. Species that naturally grow in environments where carbon is mainly available as lignin are promising for finding new ways of removing the lignin that protects cellulose, for improved conversion of lignin to fuel precursors. Enterobacter lignolyticus SCF1 is a facultative anaerobic gammaproteobacterium isolated from tropical rain forest soil collected in El Yunque forest, Puerto Rico, under anoxic growth conditions with lignin as the sole carbon source. Whole-transcriptome analysis of SCF1 during lignin degradation was conducted on cells grown in the presence (0.1%, w/w) and absence of lignin, with samples taken at three different times during growth: the beginning of exponential phase, mid-exponential phase, and the beginning of stationary phase. Lignin-amended cultures achieved twice the cell biomass of unamended cultures over three days, and in this time degraded 60% of the lignin. Transcripts in early exponential phase reflected this accelerated growth. A complement of laccases, aryl-alcohol dehydrogenases, and peroxidases were most up-regulated in lignin-amended conditions in mid-exponential and early stationary phases compared to unamended growth. The association of hydrogen production by way of the formate hydrogenlyase complex with lignin degradation suggests a possible added value to lignin degradation in the future.
Performance and state-space analyses of systems using Petri nets
NASA Technical Reports Server (NTRS)
Watson, James Francis, III
1992-01-01
The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PNs), a highly graphical modeling methodology for Discrete Event Dynamic Systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PNs is the tendency for the state space to grow rapidly (exponential complexity) compared to increases in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PNs is introduced. The problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PNs is discussed. An algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PNs remains applicable. Comparison to results from entropy theory shows the transition performance is close to the theoretical optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size. The state-space size estimation theory provides insight and algorithms for evaluating this trade-off.
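The exponential-delay restriction that makes Markov chain analysis possible can be illustrated as a race between enabled transitions: each samples an exponential delay, the smallest fires first, and the winning time is itself exponential with the summed rate (the memoryless property). The transition names and rates below are hypothetical:

```python
import random

def next_firing(rates, rng):
    """One step of a stochastic Petri net with exponential delays: every enabled
    transition samples a delay, and the smallest one fires first."""
    delays = {name: rng.expovariate(rate) for name, rate in rates.items()}
    winner = min(delays, key=delays.get)
    return winner, delays[winner]

rng = random.Random(42)
rates = {"t1": 1.0, "t2": 3.0}  # hypothetical enabled transitions

# The time to the first firing is exponential with rate 1.0 + 3.0 = 4.0,
# so the reachability graph becomes a continuous-time Markov chain.
times = [next_firing(rates, rng)[1] for _ in range(200000)]
mean_time = sum(times) / len(times)   # close to 1/4
```

A non-exponential delay would break this memorylessness, which is exactly why approximating such transitions (as the dissertation does) requires extra constructs and enlarges the state space.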
Limitations of Reliability for Long-Endurance Human Spaceflight
NASA Technical Reports Server (NTRS)
Owens, Andrew C.; de Weck, Olivier L.
2016-01-01
Long-endurance human spaceflight - such as missions to Mars or its moons - will present a never-before-seen maintenance logistics challenge. Crews will be in space for longer and be farther away from Earth than ever before. Resupply and abort options will be heavily constrained, and will have timescales much longer than current and past experience. Spare parts and/or redundant systems will have to be included to reduce risk. However, the high cost of transportation means that this risk reduction must be achieved while also minimizing mass. The concept of increasing system and component reliability is commonly discussed as a means to reduce risk and mass by reducing the probability that components will fail during a mission. While increased reliability can reduce maintenance logistics mass requirements, the rate of mass reduction decreases over time. In addition, reliability growth requires increased test time and cost. This paper assesses trends in test time requirements, cost, and maintenance logistics mass savings as a function of increase in Mean Time Between Failures (MTBF) for some or all of the components in a system. In general, reliability growth results in superlinear growth in test time requirements, exponential growth in cost, and sublinear benefits (in terms of logistics mass saved). These trends indicate that it is unlikely that reliability growth alone will be a cost-effective approach to maintenance logistics mass reduction and risk mitigation for long-endurance missions. This paper discusses these trends as well as other options to reduce logistics mass, such as direct reduction of part mass, commonality, or In-Space Manufacturing (ISM). Overall, it is likely that some combination of all available options - including reliability growth - will be required to reduce mass and mitigate risk for future deep space missions.
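The diminishing returns of reliability growth can be made concrete with the standard exponential-failure model, in which the number of failures over a mission is Poisson with mean t/MTBF. The component MTBF, mission duration, and spares count below are hypothetical illustrations, not figures from the paper:

```python
import math

def p_shortage(mtbf_hours, mission_hours, spares):
    """Probability that failures exceed available spares, assuming exponential
    inter-failure times so the failure count is Poisson with mean t / MTBF."""
    lam = mission_hours / mtbf_hours
    p_covered = sum(math.exp(-lam) * lam ** k / math.factorial(k)
                    for k in range(spares + 1))
    return 1.0 - p_covered

# Hypothetical component on a ~26000 h Mars mission with 3 spares:
# each doubling of MTBF cuts the shortage risk, but by progressively less.
risk = [p_shortage(m, 26000.0, 3) for m in (10000.0, 20000.0, 40000.0)]
```

The absolute risk reduction shrinks with each doubling, while (per the paper's trend analysis) the test cost of each doubling grows exponentially, which is the core of the argument against relying on reliability growth alone.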
Brackney, Ryan J; Cheung, Timothy H. C; Neisewander, Janet L; Sanabria, Federico
2011-01-01
Dissociating motoric and motivational effects of pharmacological manipulations on operant behavior is a substantial challenge. To address this problem, we applied a response-bout analysis to data from rats trained to lever press for sucrose on variable-interval (VI) schedules of reinforcement. Motoric, motivational, and schedule factors (effort requirement, deprivation level, and schedule requirements, respectively) were manipulated. Bout analysis found that interresponse times (IRTs) were described by a mixture of two exponential distributions, one characterizing IRTs within response bouts, another characterizing intervals between bouts. Increasing effort requirement lengthened the shortest IRT (the refractory period between responses). Adding a ratio requirement increased the length and density of response bouts. Both manipulations also decreased the bout-initiation rate. In contrast, food deprivation only increased the bout-initiation rate. Changes in the distribution of IRTs over time showed that responses during extinction were also emitted in bouts, and that the decrease in response rate was primarily due to progressively longer intervals between bouts. Taken together, these results suggest that changes in the refractory period indicate motoric effects, whereas selective alterations in bout initiation rate indicate incentive-motivational effects. These findings support the use of response-bout analyses to identify the influence of pharmacological manipulations on processes underlying operant performance. PMID:21765544
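The two-exponential IRT mixture at the heart of this bout analysis can be sketched as a generative model. The within-bout rate, bout-initiation rate, and mixing proportion below are hypothetical values chosen for illustration:

```python
import random

def sample_irt(p_within, within_rate, initiation_rate, rng):
    """One interresponse time from a two-exponential mixture: a fast
    within-bout IRT with probability p_within, else a slow between-bout interval."""
    if rng.random() < p_within:
        return rng.expovariate(within_rate)
    return rng.expovariate(initiation_rate)

rng = random.Random(1)
# Hypothetical: 80% of IRTs inside bouts (5 responses/s), bouts started at 0.2/s.
irts = [sample_irt(0.8, 5.0, 0.2, rng) for _ in range(100000)]

# Mixture mean = 0.8/5.0 + 0.2/0.2 = 1.16 s. In the paper's logic, motoric
# manipulations shift the fast within-bout component, while motivational
# manipulations selectively shift the slow bout-initiation component.
mean_irt = sum(irts) / len(irts)
```

Fitting such a mixture to observed IRTs (e.g. by maximum likelihood) separates the two components, which is how the analysis dissociates motoric from motivational effects.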
Research on the exponential growth effect on network topology: Theoretical and empirical analysis
NASA Astrophysics Data System (ADS)
Li, Shouwei; You, Zongjun
An integrated circuit (IC) industry network has been built in the Yangtze River Delta with the constant expansion of the IC industry. The IC industry network grows exponentially with the establishment of new companies and of contacts with old firms. Based on preferential attachment and exponential growth, the paper presents analytical results in which the vertex degrees of the scale-free network follow the power-law distribution p(k) ~ k^(−γ) with γ = 2β + 1, where the parameter β satisfies 0.5 ≤ β ≤ 1. At the same time, we find that preferential attachment takes place in a dynamic local world whose size is in direct proportion to the size of the whole network. The paper also gives analytical results for non-preferential attachment with exponential growth on random networks. Computer simulations of the model illustrate these analytical results. Through investigations of the enterprises, the paper first presents the distribution of the IC industry and the composition of its industrial chain and service chain. Then, the correlative networks of the industrial chain and service chain are presented and analysed, along with a correlative analysis of the whole IC industry. Based on the theory of complex networks, an analysis and comparison of the industrial chain network and service chain network in the Yangtze River Delta are provided.
In vivo chlorine and sodium MRI of rat brain at 21.1 T.
Schepkin, Victor D; Elumalai, Malathy; Kitchen, Jason A; Qian, Chunqi; Gor'kov, Peter L; Brey, William W
2014-02-01
MR imaging of low-gamma nuclei at the ultrahigh magnetic field of 21.1 T provides a new opportunity for understanding a variety of biological processes. Among these, chlorine and sodium are attracting attention for their involvement in brain function and cancer development. MRI of (35)Cl and (23)Na was performed and relaxation times were measured in vivo in normal rats (n = 3) and in rats with glioma (n = 3) at 21.1 T. The concentrations of both nuclei were evaluated using the center-out back-projection method. The T1 relaxation curve of chlorine in the normal rat head was fitted by a bi-exponential function (T1a = 4.8 ms (0.7), T1b = 24.4 ± 7 ms (0.3)) and compared with sodium (T1 = 41.4 ms). Free induction decays (FIDs) of chlorine and sodium in vivo were bi-exponential, with similar rapidly decaying components of [Formula: see text] ms and [Formula: see text] ms, respectively. The effects of a small acquisition matrix and bi-exponential FIDs were assessed for quantification of chlorine (33.2 mM) and sodium (44.4 mM) in rat brain. The study modeled the dramatic effect of the bi-exponential decay on MRI results. The revealed increase in chlorine concentration in glioma (~1.5 times relative to normal brain) correlates with the hypothesis asserting the importance of chlorine for tumor progression.
Inactivation of A currents and A channels on rat nodose neurons in culture
1989-01-01
Cultured sensory neurons from nodose ganglia were investigated with whole-cell patch-clamp techniques and single-channel recordings to characterize the A current. Membrane depolarization from a -40 mV holding potential activated the delayed rectifier current (IK) at potentials positive to -30 mV; this current had a sigmoidal time course and showed little or no inactivation. In most neurons, the A current was completely inactivated at the -40 mV holding potential and required hyperpolarization to remove the inactivation; the A current was isolated by subtracting the IK evoked by depolarizations from -40 mV from the total outward current evoked by depolarizations from -90 mV. The decay of the A current in several neurons had complex kinetics and was fit by the sum of three exponentials whose time constants were 10-40 ms, 100-350 ms, and 1-3 s. At the single-channel level we found that one class of channel underlies the A current. The conductance of A channels varied with the square root of the external K concentration: it was 22 pS in 5.4 mM external K and increased to 40 pS in 140 mM external K. A channels activated rapidly upon depolarization, and the latency to first opening decreased with depolarization. The open-time distributions followed a single exponential, and the mean open time increased with depolarization. A channels inactivated in three different modes: some A channels inactivated with little reopening and gave rise to ensemble averages that decayed in 10-40 ms; other A channels opened and closed three to four times before inactivating and gave rise to ensemble averages that decayed in 100-350 ms; still other A channels opened and closed several hundred times and required seconds to inactivate. Channels gating in all three modes contributed to the macroscopic A current from the whole cell, but their relative contribution differed among neurons.
In addition, A channels could go directly from the closed, or resting, state to the inactivated state without opening, and the probability for channels inactivating in this way was greater at less depolarized voltages. In addition, a few A channels appeared to go reversibly from a mode where inactivation occurred rapidly to a slow mode of inactivation. PMID:2592953
Treatment of late time instabilities in finite-difference EMP scattering codes
NASA Astrophysics Data System (ADS)
Simpson, L. T.; Holland, R.; Arman, S.
1982-12-01
Constraints applicable to a finite-difference mesh for the solution of Maxwell's equations are defined. The equations are applied in the time domain to compute electromagnetic coupling to complex structures, e.g., rectangular, cylindrical, or spherical ones. In a spatially varying grid, high-frequency waves grow exponentially in amplitude at late times through multiple reflections from the outer boundary, and this exponentially growing numerical noise eventually exceeds the real signal. The correction technique employs an absorbing surface and a radiating boundary, along with tailored selection of the grid mesh size. High-frequency noise is removed with a low-pass digital filter, a linear least-squares fit is made to the low-frequency filtered response, and the original, filtered, and fitted data are merged to preserve the high-frequency early-time response.
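The correction pipeline (filter, fit the filtered late-time response, merge) can be sketched as follows. The moving-average filter and the synthetic signal below are stand-ins chosen for illustration, not the authors' actual implementation or data.

```python
import math

def lowpass(x, w=9):
    """Moving-average low-pass filter with window w (edges clamped)."""
    h = w // 2
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - h):i + h + 1]
        out.append(sum(seg) / len(seg))
    return out

def linear_fit(t, y):
    """Ordinary least squares for y ~ a + b*t."""
    n = len(t)
    st, sy = sum(t), sum(y)
    stt = sum(ti * ti for ti in t)
    sty = sum(ti * yi for ti, yi in zip(t, y))
    b = (n * sty - st * sy) / (n * stt - st * st)
    a = (sy - b * st) / n
    return a, b

# Synthetic record: slow linear "real" response plus exponentially
# growing high-frequency noise, mimicking the late-time instability
t = [0.01 * i for i in range(1000)]
signal = [2.0 - 0.1 * ti for ti in t]
noisy = [s + 1e-4 * math.exp(0.8 * ti) * math.cos(60.0 * ti)
         for s, ti in zip(signal, t)]

filtered = lowpass(noisy)
a, b = linear_fit(t[500:], filtered[500:])             # fit the late-time tail
merged = noisy[:500] + [a + b * ti for ti in t[500:]]  # keep early response
```

The merge keeps the raw early-time record (with its high-frequency content) and replaces only the unstable late-time portion with the fitted low-frequency trend.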
NASA Astrophysics Data System (ADS)
Vacik, J.; Hnatowicz, V.; Attar, F. M. D.; Mathakari, N. L.; Dahiwale, S. S.; Dhole, S. D.; Bhoraskar, V. N.
2014-10-01
Diffusion of lithium from a LiCl aqueous solution into polyether ether ketone (PEEK) and polyimide (PI), assisted by in situ irradiation with 6.5 MeV electrons, was studied by the neutron depth profiling method. The number of Li atoms was found to be roughly proportional to the diffusion time. Regardless of the diffusion time, the measured depth profiles in PEEK exhibit a nearly exponential form, indicating achievement of a steady-state phase of the diffusion-reaction process specified in the text. The form of the profiles in PI is more complex and depends strongly on the diffusion time. For longer diffusion times, the profile consists of a near-surface bell-shaped part due to Fickian-like diffusion and a deeper exponential part.
Velocity and stress autocorrelation decay in isothermal dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Chaudhri, Anuj; Lukes, Jennifer R.
2010-02-01
The velocity and stress autocorrelation decay in a dissipative particle dynamics ideal fluid model is analyzed in this paper. The autocorrelation functions are calculated at three different friction parameters and three different time steps using the well-known Groot/Warren algorithm and newer algorithms including self-consistent leap-frog, self-consistent velocity Verlet, and Shardlow first- and second-order integrators. At low friction values, the velocity autocorrelation function decays exponentially at short times, shows slower-than-exponential decay at intermediate times, and approaches zero at long times for all five integrators. As the friction value increases, the deviation from exponential behavior occurs earlier and is more pronounced. At small time steps, all the integrators give identical decay profiles. As the time step increases, there are qualitative and quantitative differences between the integrators. The stress correlation behavior is markedly different for the algorithms. The self-consistent velocity Verlet and the Shardlow algorithms show very similar stress autocorrelation decay with change in friction parameter, whereas the Groot/Warren and leap-frog schemes show variations at higher friction factors. Diffusion coefficients and shear viscosities are calculated using Green-Kubo integration of the velocity and stress autocorrelation functions. The diffusion coefficients match well-known theoretical results in the low-friction limit. Although the stress autocorrelation function is different for each integrator, fluctuates rapidly, and gives poor statistics in most cases, the calculated shear viscosities still fall within the range of theoretical predictions and nonequilibrium studies.
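The Green-Kubo route from the velocity autocorrelation function (VACF) to the diffusion coefficient can be checked on an idealized, purely exponential VACF. This is an assumption made for illustration; as the abstract notes, real DPD VACFs deviate from exponential except at low friction and short times.

```python
import math

# Green-Kubo: D = (1/3) * integral of <v(0).v(t)> dt.
# For an exponentially decaying VACF, <v(0).v(t)> = (3*kT/m)*exp(-gamma*t),
# the integral gives D = kT/(m*gamma) exactly; quadrature should match it.
kT_over_m = 1.0
gamma = 4.5                                  # friction parameter (illustrative)
dt = 1e-4
ts = [i * dt for i in range(int(10 / gamma / dt))]   # integrate to ~10/gamma
vacf = [3.0 * kT_over_m * math.exp(-gamma * t) for t in ts]

# Trapezoidal-rule integration of the VACF
integral = sum(0.5 * (vacf[i] + vacf[i + 1]) * dt for i in range(len(vacf) - 1))
D = integral / 3.0
D_exact = kT_over_m / gamma
```

In an actual simulation, `vacf` would come from ensemble-averaged particle velocities, and the truncation time of the integral becomes a source of statistical error, which is why the rapidly fluctuating stress autocorrelation gives noisier viscosities.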
The Intuitive Principal: A Guide to Leadership.
ERIC Educational Resources Information Center
Dyer, Karen M.; Carothers, Jacqueline
Professional demands on school administrators continue to multiply exponentially. Effective administrators require solid preparation programs, continuing professional development, extensive experience, mentoring, and the support of supervisor and school colleagues. Chapter 1, "Intuitive Ways of Knowing," references research on intuition,…
NASA Astrophysics Data System (ADS)
Astraatmadja, Tri L.; Bailer-Jones, Coryn A. L.
2016-12-01
Estimating a distance by inverting a parallax is only valid in the absence of noise. As most stars in the Gaia catalog will have non-negligible fractional parallax errors, we must treat distance estimation as a constrained inference problem. Here we investigate the performance of various priors for estimating distances, using a simulated Gaia catalog of one billion stars. We use three minimalist, isotropic priors, as well as an anisotropic prior derived from the observability of stars in a Milky Way model. The two priors that assume a uniform distribution of stars—either in distance or in space density—give poor results: the root mean square fractional distance error, f_rms, grows far in excess of 100% once the fractional parallax error, f_true, is larger than 0.1. A prior assuming an exponentially decreasing space density with increasing distance performs well once its single parameter—the scale length—has been set to an appropriate value: f_rms is roughly equal to f_true for f_true < 0.4, yet does not increase further as f_true increases up to 1.0. The Milky Way prior performs well except toward the Galactic center, due to a mismatch with the (simulated) data. Such mismatches will be inevitable (and remain unknown) in real applications, and can produce large errors. We therefore suggest adopting the simpler exponentially decreasing space density prior, which is also less time-consuming to compute. Including Gaia photometry improves the distance estimation significantly for both the Milky Way and exponentially decreasing space density priors, yet doing so requires additional assumptions about the physical nature of stars.
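A small numerical sketch of the exponentially decreasing space density prior: the posterior over distance r is proportional to r^2 exp(-r/L) times a Gaussian parallax likelihood, and its mode can be found by brute-force grid search. The parallax, error, and scale-length values below are invented for illustration, not taken from the paper.

```python
import math

def mode_distance(parallax, sigma, L, r_max=5000.0, n=200000):
    """Grid-search the posterior mode over distance r (pc), assuming the
    prior P(r) ~ r^2 exp(-r/L) and a Gaussian likelihood for the parallax
    (in arcsec) with standard deviation sigma."""
    best_r, best_lp = None, -math.inf
    for i in range(1, n):
        r = r_max * i / n
        # log posterior up to a constant
        lp = 2.0 * math.log(r) - r / L - (parallax - 1.0 / r) ** 2 / (2.0 * sigma ** 2)
        if lp > best_lp:
            best_r, best_lp = r, lp
    return best_r

# 1% fractional parallax error: the mode should sit near 1/parallax = 100 pc
r_hat = mode_distance(parallax=0.010, sigma=0.0001, L=1000.0)
```

At small fractional errors the prior barely matters and the mode approaches the naive 1/parallax estimate; as sigma grows, the exp(-r/L) factor pulls the mode toward the scale length, which is exactly the regularizing behavior described above.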
Analysis and IbM simulation of the stages in bacterial lag phase: basis for an updated definition.
Prats, Clara; Giró, Antoni; Ferrer, Jordi; López, Daniel; Vives-Rego, Josep
2008-05-07
The lag phase is the initial phase of a culture that precedes exponential growth and occurs when the conditions of the culture medium differ from the pre-inoculation conditions. It is usually defined by means of cell density because the number of individuals remains approximately constant or slowly increases, and it is quantified with the lag parameter lambda. The lag phase has been studied through mathematical modelling and by means of specific experiments. In recent years, Individual-based Modelling (IbM) has provided helpful insights into lag phase studies. In this paper, the definition of lag phase is thoroughly examined. Evolution of the total biomass and the total number of bacteria during lag phase is tackled separately. The lag phase lasts until the culture reaches a maximum growth rate both in biomass and cell density. Once in the exponential phase, both rates are constant over time and equal to each other. Both evolutions are split into an initial phase and a transition phase, according to their growth rates. A population-level mathematical model is presented to describe the transitional phase in cell density. INDividual DIScrete SIMulation (INDISIM) is used to check the outcomes of this analysis. Simulations allow the separate study of the evolution of cell density and total biomass in a batch culture, they provide a depiction of different observed cases in lag evolution at the individual-cell level, and are used to test the population-level model. The results show that the geometrical lag parameter lambda is not appropriate as a universal definition for the lag phase. Moreover, the lag phase cannot be characterized by a single parameter. For the studied cases, the lag phases of both the total biomass and the population are required to fully characterize the evolution of bacterial cultures. The results presented prove once more that the lag phase is a complex process that requires a more complete definition. 
This will be possible only after the phenomena governing the population dynamics at an individual level of description, and occurring during the lag and exponential growth phases, are well understood.
Trajectory prediction of saccadic eye movements using a compressed exponential model
Han, Peng; Saunders, Daniel R.; Woods, Russell L.; Luo, Gang
2013-01-01
Gaze-contingent display paradigms play an important role in vision research. The time delay due to data transmission from eye tracker to monitor may lead to a misalignment between the gaze direction and image manipulation during eye movements, and therefore compromise the contingency. We present a method to reduce this misalignment by using a compressed exponential function to model the trajectories of saccadic eye movements. Our algorithm was evaluated using experimental data from 1,212 saccades ranging from 3° to 30°, which were collected with an EyeLink 1000 and a Dual-Purkinje Image (DPI) eye tracker. The model fits eye displacement with a high agreement (R2 > 0.96). When assuming a 10-millisecond time delay, prediction of 2D saccade trajectories using our model could reduce the misalignment by 30% to 60% with the EyeLink tracker and 20% to 40% with the DPI tracker for saccades larger than 8°. Because a certain number of samples are required for model fitting, the prediction did not offer improvement for most small saccades and the early stages of large saccades. Evaluation was also performed for a simulated 100-Hz gaze-contingent display using the prerecorded saccade data. With prediction, the percentage of misalignment larger than 2° dropped from 45% to 20% for EyeLink and 42% to 26% for DPI data. These results suggest that the saccade-prediction algorithm may help create more accurate gaze-contingent displays. PMID:23902753
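A hedged sketch of the trajectory model: assuming the compressed exponential form x(t) = A(1 - exp(-(t/tau)^kappa)) with kappa > 1 (the authors' exact parameterization may differ), one can fit partial saccade samples and extrapolate ahead of the current gaze position. All parameter values and grids below are invented for illustration.

```python
import math

def saccade(t, A, tau, kappa):
    """Compressed exponential trajectory: amplitude A, time constant tau;
    kappa > 1 gives the compressed (slow start, then fast) shape."""
    return A * (1.0 - math.exp(-(t / tau) ** kappa))

def fit(ts, xs):
    """Coarse grid search for (A, tau, kappa) minimizing squared error.
    A real gaze-contingent system would use a fast nonlinear least-squares
    routine instead of this illustrative grid."""
    best = None
    for A in [xs[-1] * s for s in (1.0, 1.1, 1.2, 1.3)]:
        for tau in (0.01, 0.02, 0.03, 0.04, 0.05):
            for kappa in (1.5, 2.0, 2.5, 3.0):
                err = sum((saccade(t, A, tau, kappa) - x) ** 2
                          for t, x in zip(ts, xs))
                if best is None or err < best[0]:
                    best = (err, A, tau, kappa)
    return best[1:]

# Synthetic 10-degree saccade sampled at 1 kHz over its first 40 ms
true_params = (10.0, 0.03, 2.0)
ts = [i / 1000.0 for i in range(1, 41)]
xs = [saccade(t, *true_params) for t in ts]
A, tau, kappa = fit(ts, xs)
x_pred = saccade(0.050, A, tau, kappa)   # extrapolate 10 ms ahead
```

The extrapolated position is what would be used to compensate the display latency; as the abstract notes, the approach needs enough samples for a stable fit, so it helps little for small saccades or the earliest samples of large ones.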
Shear-induced conformational ordering, relaxation, and crystallization of isotactic polypropylene.
An, Haining; Li, Xiangyang; Geng, Yong; Wang, Yunlong; Wang, Xiao; Li, Liangbin; Li, Zhongming; Yang, Chuanlu
2008-10-02
The shear-induced coil-helix transition of isotactic polypropylene (iPP) has been studied with time-resolved Fourier transform infrared spectroscopy at various temperatures. The effects of temperature, shear rate, and strain on the coil-helix transition were studied systematically. The induced conformational order increases with shear rate and strain, and a threshold shear strain is required to induce conformational ordering. High temperature reduces the effect of shear on the conformational order, though a simple correlation was not found. Following the shear-induced conformational ordering, the helices relax; this relaxation follows a first-order exponential decay at temperatures well above the normal melting point of iPP. The relaxation time versus temperature is fitted with an Arrhenius law, which yields an activation energy of 135 kJ/mol for the helix-coil transition of iPP. At temperatures around the normal melting point, two exponential decays are needed to fit the relaxation kinetics of the helices well. This suggests that shear induces two different states of helices: (i) isolated single helices, far from each other and without interactions, which relax quickly; and (ii) aggregations of helices, or helical bundles, with strong mutual interactions, which relax much more slowly. The helical bundles are assumed to be the precursors of nuclei for crystallization. The different helix concentrations and distributions are the origin of the three different crystallization processes observed after shear. The correlation between the shear-induced conformational order and crystallization is discussed.
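The reported activation energy comes from an Arrhenius fit of relaxation time against temperature; linearized as ln(tau) versus 1/T, this is a one-line least-squares problem. The sketch below recovers a known Ea from synthetic data; the prefactor and temperature values are invented, only the 135 kJ/mol target matches the text.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_Ea(temps_K, taus):
    """Estimate the activation energy from relaxation times via a linear
    least-squares fit of ln(tau) against 1/T (Arrhenius law:
    tau = tau0 * exp(Ea / (R*T)), so the slope of the line is Ea/R)."""
    x = [1.0 / T for T in temps_K]
    y = [math.log(tau) for tau in taus]
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope * R

# Synthetic relaxation times generated with Ea = 135 kJ/mol
Ea_true = 135e3
temps = [440.0, 450.0, 460.0, 470.0]         # K (illustrative range)
taus = [1e-10 * math.exp(Ea_true / (R * T)) for T in temps]
Ea_fit = arrhenius_Ea(temps, taus)
```

With noiseless synthetic data the fit is exact; with measured relaxation times the same fit also yields the uncertainty on Ea from the residuals.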
Evidence for Two Neutrino Bursts from SN1987A
NASA Astrophysics Data System (ADS)
Valentim, Rodolfo; Horvath, Jorge E.; Rangel, Eraldo M.
SN1987A in the Large Magellanic Cloud was an extraordinary event because it was detected in real time by different neutrino (ν) experiments around the world. Approximately 25 events were observed in three experiments: ~12 at Kamiokande II (KII), ~8 at Irvine-Michigan-Brookhaven (IMB), and ~5 at Baksan, plus a contrived burst at Mont Blanc (Liquid Scintillator Detector - LSD) later dismissed because of energetic requirements (Aglietta et al. 1988). Neutrinos play an important role in the newborn neutron star: when the supernova explodes, the compact remnant is cooled by neutrinos (~99% of the energy is lost in the first few seconds of the explosion). This work is motivated by the neutrino arrival times, which show a temporal gap (~6 s) between two sets of events. The first part of the dataset is consistent with the ordinary cooling mechanism, while the second part suggests a different mechanism of neutrino production. We tested two cooling models for the SN1987A neutrinos: (1) an ordinary single-exponential cooling model, and (2) a two-step temperature model with two bursts separated by a temporal gap. Our analysis was done with Bayesian tools (Bayesian Information Criterion - BIC). The result showed strong evidence in favor of the two-step model against a single exponential cooling (ln Bij > 5.0), suggesting the existence of two neutrino bursts at the moment the neutron star was born.
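Model comparison with BIC, as used above, reduces to simple arithmetic once each model's maximized log-likelihood is known; the log Bayes factor is then approximated by half the BIC difference. The log-likelihoods and parameter counts below are illustrative placeholders, not the paper's actual fit results.

```python
import math

def bic(loglik, k, n):
    """Bayesian Information Criterion: k free parameters, n data points.
    Lower BIC indicates the preferred model."""
    return k * math.log(n) - 2.0 * loglik

# Hypothetical fit summaries for ~25 neutrino arrival times
n = 25
bic_single = bic(loglik=-60.0, k=2, n=n)    # single exponential cooling
bic_twostep = bic(loglik=-50.0, k=4, n=n)   # two-step temperature model

# ln B_21 ~ (BIC_1 - BIC_2) / 2 under the usual large-n approximation;
# values above ~5 are conventionally read as strong evidence for model 2.
ln_bayes_factor = (bic_single - bic_twostep) / 2.0
```

Note how the k*ln(n) term penalizes the extra parameters of the two-step model, so it wins only if its likelihood gain outweighs the added complexity.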
Research of the key technology in satellite communication networks
NASA Astrophysics Data System (ADS)
Zeng, Yuan
2018-02-01
According to predictions, wireless data traffic will increase 500-1000 times over the next 10 years. Not only will wireless data traffic grow exponentially, but demand for diversified traffic will also increase. These higher requirements for future mobile wireless communication systems have created a huge market for satellite communication systems. At the same time, space information networks have developed greatly with deepening human space exploration, the growth of space applications, and expanding military and civilian uses. Satellite communication is the core of space information networks. This dissertation presents the communication system architecture, communication protocol, routing strategy, switch-scheduling algorithm, and handoff strategy for the satellite communication system. We built a simulation platform for LEO satellite networks and simulated the key technologies using OPNET.
Bayesian exponential random graph modelling of interhospital patient referral networks.
Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro
2017-08-15
Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.
A fractal process of hydrogen diffusion in a-Si:H with exponential energy distribution
NASA Astrophysics Data System (ADS)
Hikita, Harumi; Ishikawa, Hirohisa; Morigaki, Kazuo
2017-04-01
Hydrogen diffusion in a-Si:H with an exponential distribution of states in energy exhibits a fractal structure. It is shown that the probability P(t) of pausing time t has the form t^α (α: fractal dimension), and that the fractal dimension α = T_r/T_0 (T_r: hydrogen temperature; T_0: the temperature corresponding to the width of the exponential distribution of states in energy) is in agreement with the Hausdorff dimension. A fractal graph for the case α ≤ 1 is like the Cantor set; for α > 1 it is like the Koch curve. At α = ∞, hydrogen migration exhibits Brownian motion. Hydrogen diffusion in a-Si:H should therefore be a fractal process.
Photoluminescence study of MBE grown InGaN with intentional indium segregation
NASA Astrophysics Data System (ADS)
Cheung, Maurice C.; Namkoong, Gon; Chen, Fei; Furis, Madalina; Pudavar, Haridas E.; Cartwright, Alexander N.; Doolittle, W. Alan
2005-05-01
Proper control of MBE growth conditions has yielded an In0.13Ga0.87N thin-film sample with emission consistent with In segregation. The photoluminescence (PL) from this epilayer showed multiple emission components. Moreover, temperature- and power-dependent studies of the PL demonstrated that two of the components were excitonic in nature and consistent with indium phase separation. At 15 K, time-resolved PL showed a non-exponential PL decay that was well fitted with the stretched exponential solution expected for disordered systems. Consistent with the assumed carrier-hopping mechanism of this model, the effective lifetime and the stretched exponential parameter decrease with increasing emission energy. Finally, room-temperature micro-PL using a confocal microscope showed spatial clustering of the low-energy emission.
Scalar-fluid interacting dark energy: Cosmological dynamics beyond the exponential potential
NASA Astrophysics Data System (ADS)
Dutta, Jibitesh; Khyllep, Wompherdeiki; Tamanini, Nicola
2017-01-01
We extend the dynamical systems analysis of scalar-fluid interacting dark energy models performed in C. G. Boehmer et al., Phys. Rev. D 91, 123002 (2015), 10.1103/PhysRevD.91.123002 by considering scalar field potentials beyond the exponential type. The properties and stability of critical points are examined using a combination of linear analysis, computational methods and advanced mathematical techniques, such as center manifold theory. We show that the interesting results obtained with an exponential potential can generally be recovered also for more complicated scalar field potentials. In particular, employing power law and hyperbolic potentials as examples, we find late time accelerated attractors, transitions from dark matter to dark energy domination with specific distinguishing features, and accelerated scaling solutions capable of solving the cosmic coincidence problem.
Computational complexity of ecological and evolutionary spatial dynamics
Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A.
2015-01-01
There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult to obtain. The famous question of whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We thereby show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP). PMID:26644569
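For the non-spatial (well-mixed) baseline of the basic question posed here, the fixation probability of a single invader with relative fitness r in a Moran process of size N has a closed form. The spatial versions studied in the paper are precisely the cases where no such simple equation is expected; the sketch below covers only the classical well-mixed case.

```python
def fixation_probability(r, N):
    """Probability that a single invader with relative fitness r takes
    over a well-mixed resident population of size N (Moran process):
    rho = (1 - 1/r) / (1 - 1/r^N), with the neutral limit rho = 1/N."""
    if r == 1.0:
        return 1.0 / N          # neutral invader
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r ** N)

p_neutral = fixation_probability(1.0, 10)       # neutral: exactly 1/N
p_advantageous = fixation_probability(2.0, 10)  # fitter invader: just over 1/2
```

For large N and r > 1 the formula approaches 1 - 1/r, so even a twofold fitness advantage leaves the invader with roughly an even chance of extinction, a useful sanity check before tackling the spatial structures where the complexity results apply.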
Energy Considerations of Hypothetical Space Drives
NASA Technical Reports Server (NTRS)
Millis, Marc G.
2007-01-01
The energy requirements of hypothetical, propellant-less space drives are compared to rockets. This serves to provide introductory estimates for potential benefits and to suggest analytical approaches for further study. A "space drive" is defined as an idealized form of propulsion that converts stored potential energy directly into kinetic energy using only the interactions between the spacecraft and its surrounding space. For Earth-to-orbit, the space drive uses 3.7 times less energy. For deep space travel, space drive energy is proportional to the square of delta-v, whereas rocket energy scales exponentially with delta-v. This has the effect of rendering a space drive 150 orders of magnitude better than a 17,000-s specific impulse rocket for sending a modest 5000 kg probe to traverse 5 ly in 50 years. Indefinite levitation, which is impossible for a rocket, could conceivably require 62 MJ/kg for a space drive. Assumption sensitivities and further analysis options are offered to guide further inquiries.
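The rocket-versus-space-drive gap can be reproduced to order of magnitude with the Tsiolkovsky rocket equation. The mission profile below (accelerate to 0.1c, then decelerate, with exhaust kinetic energy as the rocket's cost) is an assumed simplification of the paper's scenario, not its exact accounting.

```python
import math

# Space drive: energy scales with delta-v squared (kinetic energy only)
m = 5000.0                         # kg, probe mass
c = 2.998e8                        # m/s
dv = 0.1 * c                       # cruise speed to cover 5 ly in 50 yr
drive_energy = 2 * 0.5 * m * dv ** 2          # accelerate + decelerate

# Ideal rocket with Isp = 17,000 s: propellant mass from Tsiolkovsky,
# energy taken as the kinetic energy imparted to the exhaust
ve = 17000.0 * 9.81                           # exhaust velocity, m/s
mass_ratio = math.exp(2 * dv / ve)            # two burns: accel + decel
propellant = m * (mass_ratio - 1.0)
rocket_energy = 0.5 * propellant * ve ** 2

orders_of_magnitude = math.log10(rocket_energy / drive_energy)
```

The exp(delta-v / ve) factor is what drives the result: with delta-v around 360 times the exhaust velocity, the mass ratio alone spans roughly 150 orders of magnitude, consistent with the comparison quoted above.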
Minimal conditions for the existence of a Hawking-like flux
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barcelo, Carlos; Liberati, Stefano; Sonego, Sebastiano
2011-02-15
We investigate the minimal conditions that an asymptotically flat general relativistic spacetime must satisfy in order for a Hawking-like Planckian flux of particles to arrive at future null infinity. We demonstrate that there is no requirement that any sort of horizon form anywhere in the spacetime. We find that the irreducible core requirement is encoded in an approximately exponential 'peeling' relationship between affine coordinates on past and future null infinity. As long as a suitable adiabaticity condition holds, then a Planck-distributed Hawking-like flux will arrive at future null infinity with temperature determined by the e-folding properties of the outgoing null geodesics. The temperature of the Hawking-like flux can slowly evolve as a function of time. We also show that the notion of peeling of null geodesics is distinct from the usual notion of 'inaffinity' used in Hawking's definition of surface gravity.
Stability of Nonlinear Systems with Unknown Time-varying Feedback Delay
NASA Astrophysics Data System (ADS)
Chunodkar, Apurva A.; Akella, Maruthi R.
2013-12-01
This paper considers the problem of stabilizing a class of nonlinear systems with unknown bounded delayed feedback wherein the time-varying delay is (1) piecewise constant or (2) continuous with a bounded rate. We also consider application of these results to the stabilization of rigid-body attitude dynamics. In the first case, the time delay in feedback is modeled as a switch among an arbitrarily large set of unknown constant values with a known strict upper bound. The feedback is a linear function of the delayed states. In the case of linear systems with switched delay feedback, a new sufficiency condition for an average dwell-time result is presented using a complete-type Lyapunov-Krasovskii (L-K) functional approach. Further, the corresponding switched system with nonlinear perturbations is proven to be exponentially stable inside a well-characterized region of attraction for an appropriately chosen average dwell time. In the second case, the concept of the complete-type L-K functional is extended to a class of nonlinear time-delay systems with an unknown time-varying delay. This extension ensures stability robustness to time delay in the control design for all values of time delay less than the known upper bound. A model transformation is used to partition the nonlinear system into a nominal linear part that is exponentially stable with a bounded perturbation. We obtain sufficient conditions which ensure exponential stability inside a region-of-attraction estimate. A constructive method to evaluate the sufficient conditions is presented, together with a comparison with the corresponding constant and piecewise-constant delay cases. Numerical simulations are performed to illustrate the theoretical results of this paper.
NASA Astrophysics Data System (ADS)
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.
2017-11-01
There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
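The early-time/late-time partitioning for the 1-D slab can be illustrated with Crank's two classical series for fractional uptake. These are alternative exact representations of the same solution, so they agree to machine precision near a switchover time; the value t_d0 = 0.2 below is illustrative, whereas the paper optimizes it against the truncation error of the leading terms.

```python
import math

def uptake_late(td, terms=20):
    """Late-time (exponential-series) fractional uptake for a slab of
    half-thickness 1, dimensionless time td = D*t/l^2 (Crank's solution)."""
    s = 0.0
    for n in range(terms):
        k = 2 * n + 1
        s += 8.0 / (k * k * math.pi ** 2) * math.exp(-k * k * math.pi ** 2 * td / 4.0)
    return 1.0 - s

def ierfc(x):
    """Integral of the complementary error function."""
    return math.exp(-x * x) / math.sqrt(math.pi) - x * math.erfc(x)

def uptake_early(td, terms=20):
    """Early-time (error-function-series) fractional uptake for the slab."""
    s = 1.0 / math.sqrt(math.pi)
    for n in range(1, terms):
        s += 2.0 * (-1.0) ** n * ierfc(n / math.sqrt(td))
    return 2.0 * math.sqrt(td) * s

def uptake(td, td0=0.2):
    """Combined solution: use the early-time series before the switchover
    time td0 and the late-time series after it, as in the time-partitioning
    idea described above (td0 here chosen for illustration)."""
    return uptake_early(td) if td < td0 else uptake_late(td)

diff = abs(uptake_early(0.2) - uptake_late(0.2))   # the two series agree
```

The point of the switchover is efficiency: before t_d0 the error-function series needs only a term or two, after it the exponential series does, which is what makes the truncated combined solution fast and uniformly accurate.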
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny
2017-10-24
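The early/late-time splitting described above can be illustrated with the classical leading-term series for a plane slab (Crank's solution for fractional uptake, with td = D·t/L²). This is a minimal sketch, not the paper's optimized formulas; the switchover value td0 = 0.2 is chosen here for illustration only:

```python
import math

def slab_uptake(td, td0=0.2):
    """Leading-term early/late-time approximation of fractional uptake
    M(t)/M_inf for a plane slab, with td = D*t/L^2 (Crank's series).
    Below the switchover td0 the error-function (sqrt-time) leading term
    is used; above it, the leading exponential-series term."""
    if td <= td0:
        # early time: the series in erfc reduces to 2*sqrt(td/pi) at leading order
        return 2.0 * math.sqrt(td / math.pi)
    # late time: leading term of the exponential series
    return 1.0 - (8.0 / math.pi ** 2) * math.exp(-math.pi ** 2 * td / 4.0)
```

At td0 = 0.2 the two leading-term branches agree to about 5e-4, illustrating the complementary convergence of the two series that the abstract exploits.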
Observing in space and time the ephemeral nucleation of liquid-to-crystal phase transitions.
Yoo, Byung-Kuk; Kwon, Oh-Hoon; Liu, Haihua; Tang, Jau; Zewail, Ahmed H
2015-10-19
The phase transition of crystalline ordering is a general phenomenon, but its evolution in space and time requires microscopic probes for visualization. Here we report direct imaging of the transformation of amorphous titanium dioxide nanofilm, from the liquid state, passing through the nucleation step and finally to the ordered crystal phase. Single-pulse transient diffraction profiles at different times provide the structural transformation and the specific degree of crystallinity (η) in the evolution process. It is found that the temporal behaviour of η exhibits unique 'two-step' dynamics, with a robust 'plateau' that extends over a microsecond; the rate constants vary by two orders of magnitude. Such behaviour reflects the presence of intermediate structure(s) that are the precursor of the ordered crystal state. Theoretically, we extend the well-known Johnson-Mehl-Avrami-Kolmogorov equation, which describes the isothermal process with a stretched-exponential function, but here over the range of times covering the melt-to-crystal transformation.
Novel Bioluminescent Quantitative Detection of Nucleic Acid Amplification in Real-Time
Gandelman, Olga A.; Church, Vicki L.; Moore, Cathy A.; Kiddle, Guy; Carne, Christopher A.; Parmar, Surendra; Jalal, Hamid; Tisi, Laurence C.; Murray, James A. H.
2010-01-01
Background The real-time monitoring of polynucleotide amplification is at the core of most molecular assays. This conventionally relies on fluorescent detection of the amplicon produced, requiring complex and costly hardware, often restricting it to specialised laboratories. Principal Findings Here we report the first real-time, closed-tube luminescent reporter system for nucleic acid amplification technologies (NAATs) enabling the progress of amplification to be continuously monitored using simple light measuring equipment. The Bioluminescent Assay in Real-Time (BART) continuously reports through bioluminescent output the exponential increase of inorganic pyrophosphate (PPi) produced during the isothermal amplification of a specific nucleic acid target. BART relies on the coupled conversion of inorganic pyrophosphate (PPi) produced stoichiometrically during nucleic acid synthesis to ATP by the enzyme ATP sulfurylase, and can therefore be coupled to a wide range of isothermal NAATs. During nucleic acid amplification, enzymatic conversion of PPi released during DNA synthesis into ATP is continuously monitored through the bioluminescence generated by thermostable firefly luciferase. The assay shows a unique kinetic signature for nucleic acid amplifications with a readily identifiable light output peak, whose timing is proportional to the concentration of original target nucleic acid. This allows qualitative and quantitative analysis of specific targets, and readily differentiates between negative and positive samples. Since quantitation in BART is based on determination of time-to-peak rather than absolute intensity of light emission, complex or highly sensitive light detectors are not required. Conclusions The combined chemistries of the BART reporter and amplification require only a constant temperature maintained by a heating block and are shown to be robust in the analysis of clinical samples. 
Since monitoring the BART reaction requires only a simple light detector, the iNAAT-BART combination is ideal for molecular diagnostic assays in both laboratory and low resource settings. PMID:21152399
Ferrario, Mariana I; Guerrero, Sandra N
The purpose of this study was to analyze the response of different initial contamination levels of Alicyclobacillus acidoterrestris ATCC 49025 spores in apple juice as affected by pulsed light treatment (PL, batch mode, xenon lamp, 3 pulses/s, 0-71.6 J/cm²). Biphasic and Weibull frequency distribution models were used to characterize the relationship between inoculum size and treatment time with the reductions achieved after PL exposure. Additionally, a second-order polynomial model was computed to relate required PL processing time to inoculum size and requested log reductions. PL treatment caused up to 3.0-3.5 log reductions, depending on the initial inoculum size. Inactivation curves corresponding to PL-treated samples were adequately characterized by both Weibull and biphasic models (adjusted R² 94-96%), and revealed that lower initial inoculum sizes were associated with higher inactivation rates. According to the polynomial model, the predicted time for PL treatment increased exponentially with inoculum size. Copyright © 2017 Asociación Argentina de Microbiología. Publicado por Elsevier España, S.L.U. All rights reserved.
Level crossings and excess times due to a superposition of uncorrelated exponential pulses
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.
2018-01-01
A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
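The stochastic model in this abstract (a superposition of uncorrelated one-sided exponential pulses) and its excess-time statistics are easy to probe numerically. A minimal sketch, with arbitrary illustration parameters (pulse duration tau = 1, arrival rate 5, so the intermittency parameter gamma = rate·tau = 5), not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
tau, rate = 1.0, 5.0          # pulse duration and Poisson arrival rate
dt, T = 0.01, 200.0
t = np.arange(0.0, T, dt)

# superpose Poisson-timed pulses with exponentially distributed amplitudes
n_pulses = rng.poisson(rate * T)
arrivals = rng.uniform(0.0, T, n_pulses)
amps = rng.exponential(1.0, n_pulses)
x = np.zeros_like(t)
for t0, a in zip(arrivals, amps):
    m = t >= t0
    x[m] += a * np.exp(-(t[m] - t0) / tau)

mean = x.mean()                     # expected value: rate * tau * <A> = 5
frac_above = (x > mean).mean()      # average fraction of time spent above the mean
crossings = int(np.sum((x[:-1] <= mean) & (x[1:] > mean)))  # upward level crossings
```

The sample mean should sit near rate·tau·⟨A⟩ = 5, and the fraction of time above the mean and the up-crossing count are the discrete analogues of the excess-time statistics derived in the paper.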
ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations
NASA Astrophysics Data System (ADS)
Merkel, M.; Niyonzima, I.; Schöps, S.
2017-12-01
Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with Leapfrog to electromagnetic wave problems in time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.
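The decomposition ParaExp exploits can be sketched for a small linear ODE x' = Ax + f(t): on each subinterval a zero-initial-condition particular solution is integrated (here serially with RK4, in ParaExp these run in parallel), and the results are propagated to the final time by the matrix exponential. A diagonal A is used below so the exponential is elementwise and no expm routine is needed; this is an illustration of the splitting, not the paper's electromagnetic setup:

```python
import numpy as np

def rk4(f, x0, t0, t1, n=2000):
    """Classical fourth-order Runge-Kutta integrator for x' = f(t, x)."""
    x, t, h = np.array(x0, float), t0, (t1 - t0) / n
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

a = np.array([-1.0, -2.0])                  # diagonal A: expm is elementwise exp
def rhs(t, x):
    return a * x + np.array([np.sin(t), 1.0])

x0, T = np.array([1.0, 0.0]), 2.0
ts = [0.0, 1.0, 2.0]                        # two subintervals

# ParaExp-style assembly: homogeneous part plus propagated particular parts
xT = np.exp(a * T) * x0                     # exp(A*T) applied to the initial condition
for tk, tk1 in zip(ts[:-1], ts[1:]):
    vk = rk4(rhs, np.zeros(2), tk, tk1)     # zero-IC particular solution on [tk, tk1]
    xT = xT + np.exp(a * (T - tk1)) * vk    # propagate its endpoint to the final time

x_direct = rk4(rhs, x0, 0.0, T)             # serial reference solve
```

By linearity the assembled xT matches the serial solve; the method's efficiency in practice hinges on fast matrix-exponential approximations, as the abstract notes.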
Wang, Lianwen; Li, Jiangong; Fecht, Hans-Jörg
2010-11-17
The reported relaxation time for several typical glass-forming liquids was analyzed by using a kinetic model for liquids which invoked a new kind of atomic cooperativity--thermodynamic cooperativity. The broadly studied 'cooperative length' was recognized as the kinetic cooperativity. Both cooperativities were conveniently quantified from the measured relaxation data. A single-exponential activation behavior was uncovered behind the super-Arrhenius relaxations for the liquids investigated. Hence the mesostructure of these liquids and the atomic mechanism of the glass transition became clearer.
A mechanical model of bacteriophage DNA ejection
NASA Astrophysics Data System (ADS)
Arun, Rahul; Ghosal, Sandip
2017-08-01
Single molecule experiments on bacteriophages show an exponential scaling for the dependence of mobility on the length of DNA within the capsid. It has been suggested that this could be due to the 'capstan mechanism': the exponential amplification of friction forces that results when a rope is wound around a cylinder, as in a ship's capstan. Here we describe a desktop experiment that illustrates the effect. Though our model phage is a million times larger, it exhibits the same scaling observed in single molecule experiments.
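The capstan (belt-friction) equation behind this exponential amplification is T_hold = T_load · exp(-μθ), where μ is the friction coefficient and θ the total wrap angle in radians. A short sketch of the scaling (parameter values are illustrative):

```python
import math

def capstan_hold_force(load, mu, theta):
    """Belt-friction (capstan) equation: the holding force needed to
    restrain a load wrapped through angle theta (radians) around a
    cylinder with friction coefficient mu: T_hold = T_load * exp(-mu*theta)."""
    return load * math.exp(-mu * theta)

# three full turns at mu = 0.3 reduce the required holding force
# by a factor of exp(0.3 * 6*pi), nearly 300-fold
ratio = capstan_hold_force(1.0, 0.3, 3 * 2 * math.pi)
```

Each additional turn multiplies the friction by the same factor, which is the exponential length dependence the desktop experiment reproduces.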
Malachowski, George C; Clegg, Robert M; Redford, Glen I
2007-12-01
A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
NASA Astrophysics Data System (ADS)
Sumi, Ayako; Olsen, Lars Folke; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi
2003-02-01
We have carried out spectral analysis of measles notifications in several communities in Denmark, the UK and the USA. The results confirm that each power spectral density (PSD) shows exponential characteristics, which are universally observed in PSDs for time series generated from nonlinear dynamical systems. The exponential gradient increases with the population size. For almost all communities, many spectral lines observed in each PSD can be fully assigned to linear combinations of several fundamental periods, suggesting that the measles data are substantially noise-free. The optimum least squares fitting curve calculated using these fundamental periods essentially reproduces an underlying variation of the measles data, and an extension of the curve can be used to predict measles epidemics. For the communities with large population sizes, some PSD patterns obtained from segment time series analysis show a close resemblance to the PSD patterns at the initial stages of a period-doubling bifurcation process for the so-called susceptible/exposed/infectious/recovered (SEIR) model with seasonal forcing. The meaning of the relationship between the exponential gradient and the population size is discussed.
a Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation
NASA Astrophysics Data System (ADS)
Hu, J.; Lu, L.; Xu, J.; Zhang, J.
2017-09-01
For island coastline segmentation, a fast segmentation algorithm for the C-V model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" are solved when extracting the coastline, through small-scale shrinkage, low-pass filtering, and area-based sorting of regions; 2) the initial values of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR reflection between land and sea, so that they lie close to the coastline; 3) the computational complexity of the continuous transition between different scales is reduced by SDF and level set inheritance. Experimental results show that the method accelerates the formation of the initial level set, shortens the time needed to extract the coastline, removes non-coastline bodies, and improves the identification precision of the main coastline, automating the process of coastline segmentation.
A mechanism producing power law etc. distributions
NASA Astrophysics Data System (ADS)
Li, Heling; Shen, Hongjun; Yang, Bin
2017-07-01
Power-law distributions play an increasingly important role in the study of complex systems. Motivated by the intractability of complex systems, the idea of incomplete statistics is adopted and extended: three different exponential factors are introduced into the equations for the normalization condition, the statistical average, and the Shannon entropy. Probability distribution functions of exponential form, of power-law form, and of the product form between a power function and an exponential function are then derived from the Shannon entropy and the maximum entropy principle. This shows that the maximum entropy principle can fully replace the equal-probability hypothesis. Because the power-law distribution and the product-form distribution cannot be derived from the equal-probability hypothesis but can be derived with the aid of the maximum entropy principle, we conclude that the maximum entropy principle is the more basic principle: it embodies concepts more broadly and reveals the fundamental laws governing the motion of objects more deeply. At the same time, this principle reveals the intrinsic link between Nature and different objects in human society and the principles they all obey.
Jia, Xianbo; Lin, Xinjian; Chen, Jichen
2017-11-02
Current genome walking methods are very time consuming, and many produce non-specific amplification products. To amplify the flanking sequences that are adjacent to Tn5 transposon insertion sites in Serratia marcescens FZSF02, we developed a genome walking method based on TAIL-PCR. This PCR method added a 20-cycle linear amplification step before the exponential amplification step to increase the concentration of the target sequences. Products of the linear amplification and the exponential amplification were diluted 100-fold to decrease the concentration of the templates that cause non-specific amplification. Fast DNA polymerase with a high extension speed was used in this method, and an amplification program was used to rapidly amplify long specific sequences. With this linear and exponential TAIL-PCR (LETAIL-PCR), we successfully obtained products larger than 2 kb from Tn5 transposon insertion mutant strains within 3 h. This method can be widely used in genome walking studies to amplify unknown sequences that are adjacent to known sequences.
Spontaneous emergence of catalytic cycles with colloidal spheres
NASA Astrophysics Data System (ADS)
Zeravcic, Zorana; Brenner, Michael P.
2017-04-01
Colloidal particles endowed with specific time-dependent interactions are a promising route for realizing artificial materials that have the properties of living ones. Previous work has demonstrated how this system can give rise to self-replication. Here, we introduce the process of colloidal catalysis, in which clusters of particles catalyze the creation of other clusters through templating reactions. Surprisingly, we find that simple templating rules generically lead to the production of huge numbers of clusters. The templating reactions among this sea of clusters give rise to an exponentially growing catalytic cycle, a specific realization of Dyson’s notion of an exponentially growing metabolism. We demonstrate this behavior with a fixed set of interactions between particles chosen to allow a catalysis of a specific six-particle cluster from a specific seven-particle cluster, yet giving rise to the catalytic production of a sea of clusters of sizes between 2 and 11 particles. The fact that an exponentially growing cycle emerges naturally from such a simple scheme demonstrates that the emergence of exponentially growing metabolisms could be simpler than previously imagined.
Abstract-Reasoning Software for Coordinating Multiple Agents
NASA Technical Reports Server (NTRS)
Clement, Bradley; Barrett, Anthony; Rabideau, Gregg; Knight, Russell
2003-01-01
A computer program for scheduling the activities of multiple agents that share limited resources has been incorporated into the Automated Scheduling and Planning Environment (ASPEN) software system, aspects of which have been reported in several previous NASA Tech Briefs articles. In the original intended application, the agents would be multiple spacecraft and/or robotic vehicles engaged in scientific exploration of distant planets. The program could also be used on Earth in such diverse settings as production lines and military maneuvers. This program includes a planning/scheduling subprogram of the iterative repair type that reasons about the activities of multiple agents at abstract levels in order to greatly improve the scheduling of their use of shared resources. The program summarizes the information about the constraints on, and resource requirements of, abstract activities on the basis of the constraints and requirements that pertain to their potential refinements (decomposition into less-abstract and ultimately to primitive activities). The advantage of reasoning about summary information is that time needed to find consistent schedules is exponentially smaller than the time that would be needed for reasoning about the same tasks at the primitive level.
Baker, John [Walnut Creek, CA; Archer, Daniel E [Knoxville, TN; Luke, Stanley John [Pleasanton, CA; Decman, Daniel J [Livermore, CA; White, Gregory K [Livermore, CA
2009-06-23
A tailpulse signal generating/simulating apparatus, system, and method designed to produce electronic pulses which simulate the tailpulses produced by a gamma radiation detector, including the pileup effect caused by the characteristic exponential decay of the detector pulses and the random Poisson-distributed pulse timing of radioactive materials. A digital signal processor (DSP) is programmed and configured to produce digital values corresponding to pseudo-randomly selected pulse amplitudes and pseudo-randomly selected Poisson timing intervals of the tailpulses. Pulse amplitude values are exponentially decayed while outputting the digital values to a digital-to-analog converter (DAC), and the amplitudes of new pulses are added to decaying pulses to simulate the pileup effect for enhanced realism in the simulation.
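The ingredients of such a simulator — exponential inter-arrival times for Poisson event timing, an exponentially decaying pulse template, and additive pileup — can be sketched on a sampled waveform. All parameter values below are illustrative, not from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, tau = 1.0e6, 50e-6        # sample rate (Hz) and exponential decay constant (s)
rate, T = 2.0e3, 0.01         # mean event rate (Hz) and simulated duration (s)
n = int(fs * T)
wave = np.zeros(n)

# Poisson event timing: exponential inter-arrival intervals, cumulated and clipped
gaps = rng.exponential(1.0 / rate, size=int(5 * rate * T))
t_event = np.cumsum(gaps)
t_event = t_event[t_event < T]

template = np.exp(-np.arange(n) / (fs * tau))   # unit tailpulse with exponential decay
for t0 in t_event:
    i = int(t0 * fs)
    amp = rng.uniform(0.5, 1.5)                 # pseudo-random pulse amplitude
    wave[i:] += amp * template[:n - i]          # pileup: overlapping tails simply add
```

Because each new pulse is added on top of the still-decaying tails of earlier ones, closely spaced events pile up exactly as in a real detector signal chain.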
Practical pulse engineering: Gradient ascent without matrix exponentiation
NASA Astrophysics Data System (ADS)
Bhole, Gaurav; Jones, Jonathan A.
2018-06-01
Since 2005, there has been a huge growth in the use of engineered control pulses to perform desired quantum operations in systems such as nuclear magnetic resonance quantum information processors. These approaches, which build on the original gradient ascent pulse engineering algorithm, remain computationally intensive because of the need to calculate matrix exponentials for each time step in the control pulse. In this study, we discuss how the propagators for each time step can be approximated using the Trotter-Suzuki formula, and a further speedup achieved by avoiding unnecessary operations. The resulting procedure can provide substantial speed gain with negligible costs in the propagator error, providing a more practical approach to pulse engineering.
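The core idea — replacing the exact exponential of a sum by a product of cheap per-term exponentials — can be demonstrated on small random Hermitian matrices. The sketch below uses first-order Trotter splitting exp(-i(A+B)dt) ≈ exp(-iA·dt)·exp(-iB·dt) and reuses the two precomputed factors for every time step; it illustrates the splitting itself, not the specific GRAPE variant of the paper:

```python
import numpy as np

def expm_herm(H):
    """Exact unitary exp(-i H) of a Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w)) @ V.conj().T

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); A = (A + A.T) / 2      # random Hermitian "terms"
B = rng.normal(size=(4, 4)); B = (B + B.T) / 2

dt, n = 0.01, 100                                   # total evolution time t = 1.0
exact = expm_herm((A + B) * dt * n)

# First-order Trotter: each of the n time steps reuses two cheap factors,
# so no matrix exponential of the full generator is ever formed per step.
Ua, Ub = expm_herm(A * dt), expm_herm(B * dt)
U = np.linalg.matrix_power(Ua @ Ub, n)

err = np.linalg.norm(U - exact, 2)                  # O(dt) splitting error overall
```

The approximation stays exactly unitary, and the error shrinks linearly with the step size for this first-order formula (the paper's symmetric Trotter-Suzuki forms do better still).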
Gong, Shuqing; Yang, Shaofu; Guo, Zhenyuan; Huang, Tingwen
2018-06-01
The paper is concerned with the synchronization problem of inertial memristive neural networks with time-varying delay. First, by choosing a proper variable substitution, inertial memristive neural networks described by second-order differential equations can be transformed into first-order differential equations. Then, a novel controller with a linear diffusive term and a discontinuous sign term is designed. Using the controller, sufficient conditions for ensuring the global exponential synchronization of the drive and response neural networks are derived based on Lyapunov stability theory and some inequality techniques. Finally, several numerical simulations are provided to substantiate the effectiveness of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
Parametric resonant triad interactions in a free shear layer
NASA Technical Reports Server (NTRS)
Mallier, R.; Maslowe, S. A.
1993-01-01
We investigate the weakly nonlinear evolution of a triad of nearly-neutral modes superimposed on a mixing layer with velocity profile ū = U_m + tanh y. The perturbation consists of a plane wave and a pair of oblique waves, each inclined at approximately 60 degrees to the mean flow direction. Because the evolution occurs on a relatively fast time scale, the critical layer dynamics dominate the process and the amplitude evolution of the oblique waves is governed by an integro-differential equation. The long-time solution of this equation predicts very rapid (exponential-of-an-exponential) amplification, and we discuss the pertinence of this result to vortex pairing phenomena in mixing layers.
A monitoring tool for performance improvement in plastic surgery at the individual level.
Maruthappu, Mahiben; Duclos, Antoine; Orgill, Dennis; Carty, Matthew J
2013-05-01
The assessment of performance in surgery is expanding significantly. Application of relevant frameworks to plastic surgery, however, has been limited. In this article, the authors present two robust graphic tools commonly used in other industries that may serve to monitor individual surgeon operative time while factoring in patient- and surgeon-specific elements. The authors reviewed performance data from all bilateral reduction mammaplasties performed at their institution by eight surgeons between 1995 and 2010. Operative time was used as a proxy for performance. Cumulative sum charts and exponentially weighted moving average charts were generated using a train-test analytic approach, and used to monitor surgical performance. Charts mapped crude, patient case-mix-adjusted, and case-mix and surgical-experience-adjusted performance. Operative time was found to decline from 182 minutes to 118 minutes with surgical experience (p < 0.001). Cumulative sum and exponentially weighted moving average charts were generated using 1995 to 2007 data (1053 procedures) and tested on 2008 to 2010 data (246 procedures). The sensitivity and accuracy of these charts were significantly improved by adjustment for case mix and surgeon experience. The consideration of patient- and surgeon-specific factors is essential for correct interpretation of performance in plastic surgery at the individual surgeon level. Cumulative sum and exponentially weighted moving average charts represent accurate methods of monitoring operative time to control and potentially improve surgeon performance over the course of a career.
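The EWMA chart statistic used above is the recursion z_i = λ·x_i + (1-λ)·z_{i-1}, seeded with a baseline mean. A minimal sketch on hypothetical operative times (the weight λ = 0.2 and the data are illustrative, not the authors' fitted, case-mix-adjusted values):

```python
def ewma_chart(times, lam=0.2, baseline=None):
    """Exponentially weighted moving average chart statistic:
    z_i = lam*x_i + (1-lam)*z_{i-1}, started at a baseline mean."""
    z = times[0] if baseline is None else baseline
    chart = []
    for x in times:
        z = lam * x + (1 - lam) * z
        chart.append(z)
    return chart

# a hypothetical surgeon whose operative time drops from ~182 to ~118 minutes
chart = ewma_chart([182] * 10 + [118] * 30, lam=0.2, baseline=182)
```

Because each point carries exponentially decaying weight into the future, the chart responds to a sustained shift within a handful of cases while smoothing out single-case noise; a real implementation would first regress out patient case mix and experience, as the article emphasizes.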
Bell, C; Paterson, D H; Kowalchuk, J M; Padilla, J; Cunningham, D A
2001-09-01
We compared estimates for the phase 2 time constant (τ) of oxygen uptake (VO2) during moderate- and heavy-intensity exercise, and the slow component of VO2 during heavy-intensity exercise, using previously published exponential models. Estimates for τ and the slow component differed (P < 0.05) among models. For moderate-intensity exercise, a two-component exponential model, or a mono-exponential model fitted from 20 s to 3 min, were best. For heavy-intensity exercise, a three-component model fitted throughout the entire 6 min bout of exercise, or a two-component model fitted from 20 s, were best. When the time delays for the two- and three-component models were equal the best statistical fit was obtained; however, this model produced an inappropriately low ΔVO2/ΔWR (WR, work rate) for the projected phase 2 steady state, and the estimate of phase 2 τ was shortened compared with other models. The slow component was quantified as the difference between VO2 at end-exercise (6 min) and at 3 min (ΔVO2(6-3 min); 259 ml·min⁻¹), and also using the phase 3 amplitude terms (truncated to end-exercise) from exponential fits (409-833 ml·min⁻¹). Onset of the slow component was identified by the phase 3 time delay parameter as delayed, at approximately 2 min (vs. the arbitrary 3 min). Using this delay, ΔVO2(6-2 min) was approximately 400 ml·min⁻¹. Valid and consistent methods to estimate τ and the slow component in exercise are needed to advance physiological understanding.
Growth and differentiation of human lens epithelial cells in vitro on matrix
NASA Technical Reports Server (NTRS)
Blakely, E. A.; Bjornstad, K. A.; Chang, P. Y.; McNamara, M. P.; Chang, E.; Aragon, G.; Lin, S. P.; Lui, G.; Polansky, J. R.
2000-01-01
PURPOSE: To characterize the growth and maturation of nonimmortalized human lens epithelial (HLE) cells grown in vitro. METHODS: HLE cells, established from 18-week prenatal lenses, were maintained on bovine corneal endothelial (BCE) extracellular matrix (ECM) in medium supplemented with basic fibroblast growth factor (FGF-2). The identity, growth, and differentiation of the cultures were characterized by karyotyping, cell morphology, and growth kinetics studies, reverse transcription-polymerase chain reaction (RT-PCR), immunofluorescence, and Western blot analysis. RESULTS: HLE cells had a male, human diploid (2N = 46) karyotype. The population-doubling time of exponentially growing cells was 24 hours. After 15 days in culture, cell morphology changed, and lentoid formation was evident. Reverse transcription-polymerase chain reaction (RT-PCR) indicated expression of alphaA- and betaB2-crystallin, fibroblast growth factor receptor 1 (FGFR1), and major intrinsic protein (MIP26) in exponential growth. Western analyses of protein extracts show positive expression of three immunologically distinct classes of crystallin proteins (alphaA-, alphaB-, and betaB2-crystallin) with time in culture. By Western blot analysis, expression of p57(KIP2), a known marker of terminally differentiated fiber cells, was detectable in exponential cultures, and levels increased after confluence. MIP26 and gamma-crystallin protein expression was detected in confluent cultures, by using immunofluorescence, but not in exponentially growing cells. CONCLUSIONS: HLE cells can be maintained for up to 4 months on ECM derived from BCE cells in medium containing FGF-2. With time in culture, the cells demonstrate morphologic characteristics of, and express protein markers for, lens fiber cell differentiation. This in vitro model will be useful for investigations of radiation-induced cataractogenesis and other studies of lens toxicity.
A Lyapunov and Sacker–Sell spectral stability theory for one-step methods
Steyer, Andrew J.; Van Vleck, Erik S.
2018-04-13
Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.
Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence
2012-12-01
A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
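The equivalence the abstract relies on — exponential temporal correlation realized as a first-order autoregressive model under uniform sampling — is easy to verify numerically: an AR(1) process with coefficient φ = exp(-Δ/τ) has autocorrelation exp(-kΔ/τ) at lag k. A sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
tau, dt, n = 5.0, 1.0, 200_000
phi = np.exp(-dt / tau)            # AR(1) coefficient giving exponential correlation
sigma = np.sqrt(1.0 - phi ** 2)    # innovation scale for unit stationary variance

eps = rng.normal(size=n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + sigma * eps[i]

def sample_acf(x, k):
    """Sample autocorrelation of x at lag k."""
    return np.corrcoef(x[:-k], x[k:])[0, 1]

lags = (1, 2, 5)
est = np.array([sample_acf(x, k) for k in lags])
theory = np.exp(-np.array(lags) * dt / tau)   # exponential-kernel prediction
```

The sampled autocorrelations match exp(-kΔ/τ) to within sampling error, which is why a Gaussian process with an exponential kernel and a uniformly sampled AR(1) model describe the same temporal dynamics.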
Lu, Binglong; Jiang, Haijun; Hu, Cheng; Abdurahman, Abdujelil
2018-05-04
The exponential synchronization of hybrid coupled reaction-diffusion neural networks with time delays is discussed in this article. First, a generalized intermittent control with spatial sampled-data is introduced, which is intermittent in time and sampled in space. This type of control strategy not only unifies the traditional periodic intermittent control and the aperiodic case, but also lowers the update rate of the controller in both the temporal and spatial domains. Next, based on the designed control protocol and the Lyapunov-Krasovskii functional approach, some novel and readily verified criteria are established to guarantee the exponential synchronization of the considered networks. These criteria depend on the diffusion coefficients, coupling strengths, time delays and control parameters. Finally, the effectiveness of the proposed control strategy is shown by a numerical example. Copyright © 2018 Elsevier Ltd. All rights reserved.
Contact Time in Random Walk and Random Waypoint: Dichotomy in Tail Distribution
NASA Astrophysics Data System (ADS)
Zhao, Chen; Sichitiu, Mihail L.
Contact time (or link duration) is a fundamental factor that affects performance in Mobile Ad Hoc Networks. Previous research on the theoretical analysis of contact time distributions for random walk models (RW) assumes that contact events can be modeled as either consecutive random walks or direct traversals, which are two extreme cases of random walk, and thus reaches two different conclusions. In this paper we conduct a comprehensive study of this topic in the hope of bridging the gap between the two extremes. The two extreme cases lead to a power-law or an exponential tail in the contact time distribution, respectively. However, we show that the actual distribution varies between the two extremes: a power-law-sub-exponential dichotomy, whose transition point depends on the average flight duration. Through simulation results we show that this conclusion also applies to random waypoint.
A Lyapunov and Sacker–Sell spectral stability theory for one-step methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steyer, Andrew J.; Van Vleck, Erik S.
Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
Gómez Pueyo, Adrián; Marques, Miguel A L; Rubio, Angel; Castro, Alberto
2018-05-09
We examine various integration schemes for the time-dependent Kohn-Sham equations. Unlike the time-dependent Schrödinger equation, this set of equations is nonlinear, owing to the dependence of the Hamiltonian on the electronic density. We discuss some of their exact properties, and in particular their symplectic structure. Four different families of propagators are considered, specifically the linear multistep, Runge-Kutta, exponential Runge-Kutta, and commutator-free Magnus schemes. These have been chosen because they have been largely ignored in the past for time-dependent electronic structure calculations. The performance is analyzed in terms of cost versus accuracy. The clear winner, in terms of robustness, simplicity, and efficiency, is a simplified version of a fourth-order commutator-free Magnus integrator. However, in some specific cases, other propagators, such as some implicit versions of the multistep methods, may be useful.
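As a point of reference for the propagator families compared above, the simplest exponential integrator is the exponential midpoint rule, which is unitary by construction for a Hermitian Hamiltonian. A hedged sketch for a driven two-level system (the Hamiltonian, drive, and step size are illustrative choices, not from the paper):

```python
import numpy as np
from scipy.linalg import expm

# Exponential midpoint propagator: psi(t+dt) = exp(-i H(t+dt/2) dt) psi(t),
# with hbar = 1. H(t): two-level system with a sinusoidal drive.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    return 0.5 * sz + 0.2 * np.sin(t) * sx

psi = np.array([1.0, 0.0], dtype=complex)
dt, nsteps = 0.01, 1000
for k in range(nsteps):
    t = k * dt
    # evaluate H at the midpoint of the step, then apply the matrix exponential
    psi = expm(-1j * H(t + dt / 2) * dt) @ psi

print(np.linalg.norm(psi))  # unitary propagation preserves the norm
```

For the nonlinear Kohn-Sham case the Hamiltonian depends on the propagated density itself, which is what motivates the more elaborate commutator-free Magnus schemes studied in the paper.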
Analytical model of coincidence resolving time in TOF-PET
NASA Astrophysics Data System (ADS)
Wieczorek, H.; Thon, A.; Dey, T.; Khanin, V.; Rodnyi, P.
2016-06-01
The coincidence resolving time (CRT) of scintillation detectors is the parameter determining noise reduction in time-of-flight PET. We derive an analytical CRT model based on the statistical distribution of photons for two different prototype scintillators. For the first one, characterized by single exponential decay, CRT is proportional to the decay time and inversely proportional to the number of photons, with a square root dependence on the trigger level. For the second scintillator prototype, characterized by exponential rise and decay, CRT is proportional to the square root of the product of rise time and decay time divided by the doubled number of photons, and it is nearly independent of the trigger level. This theory is verified by measurements of scintillation time constants, light yield and CRT on scintillator sticks. Trapping effects are taken into account by defining an effective decay time. We show that in terms of signal-to-noise ratio, CRT is as important as patient dose, imaging time or PET system sensitivity. The noise reduction effect of better timing resolution is verified and visualized by Monte Carlo simulation of a NEMA image quality phantom.
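The first scaling result above (CRT proportional to the decay time and inversely proportional to the number of photons, for single-exponential decay with first-photon triggering) can be checked with a small Monte Carlo: photons arrive with exponential timing, each of two identical detectors triggers on its earliest photon, and the CRT is the Gaussian-equivalent FWHM of the coincidence time difference. All numbers below are illustrative, not the paper's scintillator parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def first_photon_times(tau, n_photons, trials):
    # emission times of n_photons per event for single-exponential decay tau;
    # the detector triggers on the earliest photon
    return rng.exponential(tau, size=(trials, n_photons)).min(axis=1)

def crt_fwhm(tau, n_photons, trials=4000):
    # coincidence time difference between two identical detectors
    d = (first_photon_times(tau, n_photons, trials)
         - first_photon_times(tau, n_photons, trials))
    return 2.355 * d.std()  # Gaussian-equivalent FWHM

# doubling the photon count should roughly halve the CRT
print(crt_fwhm(40.0, 500), crt_fwhm(40.0, 1000))
```

The minimum of N exponentials is itself exponential with mean tau/N, which is where the tau/N scaling comes from.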
Bernard, Olivier; Alata, Olivier; Francaux, Marc
2006-03-01
Modeling in the time domain, the non-steady-state O2 uptake on-kinetics of high-intensity exercises with empirical models is commonly performed with gradient-descent-based methods. However, these procedures may impair the confidence of the parameter estimation when the modeling functions are not continuously differentiable and when the estimation corresponds to an ill-posed problem. To cope with these problems, an implementation of simulated annealing (SA) methods was compared with the GRG2 algorithm (a gradient-descent method known for its robustness). Forty simulated Vo2 on-responses were generated to mimic the real time course for transitions from light- to high-intensity exercises, with a signal-to-noise ratio equal to 20 dB. They were modeled twice with a discontinuous double-exponential function using both estimation methods. GRG2 significantly biased two estimated kinetic parameters of the first exponential (the time delay td1 and the time constant tau1) and impaired the precision (i.e., standard deviation) of the baseline A0, td1, and tau1 compared with SA. SA significantly improved the precision of the three parameters of the second exponential (the asymptotic increment A2, the time delay td2, and the time constant tau2). Nevertheless, td2 was significantly biased by both procedures, and the large confidence intervals of the whole second component parameters limit their interpretation. To compare both algorithms on experimental data, 26 subjects each performed two transitions from 80 W to 80% maximal O2 uptake on a cycle ergometer and O2 uptake was measured breath by breath. More than 88% of the kinetic parameter estimations done with the SA algorithm produced the lowest residual sum of squares between the experimental data points and the model. Repeatability coefficients were better with GRG2 for A1 although better with SA for A2 and tau2. 
Our results demonstrate that the SA implementation significantly improves the estimation of most of these kinetic parameters, but a large inaccuracy remains in estimating the parameter values of the second exponential.
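A simulated-annealing fit of a discontinuous double-exponential on-response can be sketched with SciPy's `dual_annealing` (a generalized simulated-annealing routine standing in for the study's custom SA implementation; all parameter values, bounds, and the noise level are illustrative):

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(2)
t = np.arange(0, 360, 2.0)  # time since exercise onset (s)

def vo2(p, t):
    # discontinuous double-exponential on-response: baseline A0 plus two
    # exponential components, each active only after its own time delay
    A0, A1, td1, tau1, A2, td2, tau2 = p
    y = A0 * np.ones_like(t)
    y += np.where(t >= td1, A1 * (1 - np.exp(-(t - td1) / tau1)), 0.0)
    y += np.where(t >= td2, A2 * (1 - np.exp(-(t - td2) / tau2)), 0.0)
    return y

true = np.array([800.0, 1800.0, 15.0, 25.0, 400.0, 90.0, 120.0])
y = vo2(true, t) + rng.normal(0, 40, t.size)  # noisy synthetic response

def sse(p):
    return np.sum((y - vo2(p, t)) ** 2)

bounds = [(500, 1200), (1000, 2500), (0, 60), (5, 60),
          (100, 900), (30, 180), (30, 300)]
res = dual_annealing(sse, bounds, seed=3, maxiter=300)
print(res.x, res.fun)
```

Because SA explores the bounded parameter space globally, it avoids the local minima that trap gradient-descent methods when the model is not continuously differentiable in the time delays.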
Using phenomenological models for forecasting the 2015 Ebola challenge.
Pell, Bruce; Kuang, Yang; Viboud, Cecile; Chowell, Gerardo
2018-03-01
The rising number of novel pathogens threatening the human population has motivated the application of mathematical modeling for forecasting the trajectory and size of epidemics. We summarize the real-time forecasting results of the logistic equation during the 2015 Ebola challenge, which focused on predicting synthetic data derived from a detailed individual-based model of Ebola transmission dynamics and control. We also carry out a post-challenge comparison of two simple phenomenological models. In particular, we systematically compare the logistic growth model and a recently introduced generalized Richards model (GRM) that captures a range of early epidemic growth profiles, from sub-exponential to exponential growth. Specifically, we assess the performance of each model for estimating the reproduction number, generating short-term forecasts of the epidemic trajectory, and predicting the final epidemic size. During the challenge the logistic equation consistently underestimated the final epidemic size, peak timing and the number of cases at peak timing, with average mean absolute percentage errors (MAPE) of 0.49, 0.36 and 0.40, respectively. Post-challenge, the GRM, which has the flexibility to reproduce a range of epidemic growth profiles from early sub-exponential to exponential growth dynamics, outperformed the logistic growth model in ascertaining the final epidemic size as more incidence data were made available, while the logistic model underestimated the final epidemic size even with an increasing amount of data from the evolving epidemic. Incidence forecasts provided by the generalized Richards model performed better across all scenarios and time points than the logistic growth model, with mean RMS decreasing from 78.00 (logistic) to 60.80 (GRM).
Both models provided reasonable predictions of the effective reproduction number, but the GRM slightly outperformed the logistic growth model, with a MAPE of 0.08 compared to 0.10, averaged across all scenarios and time points. Our findings further support the consideration of transmission models that incorporate flexible early epidemic growth profiles in the forecasting toolkit. Such models are particularly useful for quickly evaluating a developing infectious disease outbreak using only case incidence time series from its early phase. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
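The generalized Richards model referred to above adds a deceleration-of-growth exponent p to the logistic family, so that early growth ranges from sub-exponential (p < 1) to exponential (p = 1). A sketch of the cumulative-incidence ODE (all parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

def grm(t, C, r, p, a, K):
    # generalized Richards model: dC/dt = r C^p [1 - (C/K)^a];
    # p = 1, a = 1 recovers the classical logistic equation
    return r * C**p * (1 - (C / K) ** a)

r, K, a = 0.5, 10_000.0, 1.0
vals = {}
for p in (1.0, 0.6):  # exponential vs sub-exponential early growth
    sol = solve_ivp(grm, (0, 30), [5.0], args=(r, p, a, K),
                    rtol=1e-8, dense_output=True)
    vals[p] = sol.sol(10)[0]
    print(p, vals[p])
```

With identical r and K, the p = 0.6 epidemic accumulates cases far more slowly early on, which is exactly the flexibility that let the GRM track the synthetic Ebola data better than the logistic model.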
Statistical Analysis of Notational AFL Data Using Continuous Time Markov Chains
Meyer, Denny; Forbes, Don; Clarke, Stephen R.
2006-01-01
Animal biologists commonly use continuous time Markov chain models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular we test the assumptions for continuous time Markov chain models (CTMCs), with time, distance and speed values associated with each transition. Using a simple event categorisation it is found that a semi-Markov chain model is appropriate for this data. This validates the use of Markov chains for future studies in which the outcomes of AFL matches are simulated. Key points: a comparison of four AFL matches suggests similarity in terms of transition probabilities for events and the mean times, distances and speeds associated with each transition; the Markov assumption appears to be valid; however, the speed, time and distance distributions associated with each transition are not exponential, suggesting that a semi-Markov model can be used to model and simulate play; team-identified events and directions associated with transitions are required to develop the model into a tool for the prediction of match outcomes. PMID:24357946
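The paper's conclusion, Markov transition structure but non-exponential holding times, is precisely a semi-Markov chain. A hedged sketch with hypothetical match-event states and gamma-distributed holding times (the states, transition probabilities, and distribution parameters are invented for illustration, not estimated from AFL data):

```python
import numpy as np

rng = np.random.default_rng(4)

states = ["possession_A", "possession_B", "stoppage"]  # hypothetical events
P = np.array([[0.0, 0.7, 0.3],    # embedded Markov transition matrix
              [0.6, 0.0, 0.4],
              [0.5, 0.5, 0.0]])

s, log = 0, []
for _ in range(2000):
    # semi-Markov: the holding time is gamma-distributed, not exponential
    log.append((states[s], rng.gamma(shape=2.0, scale=3.0)))
    s = rng.choice(3, p=P[s])

durations = np.array([d for _, d in log])
cv = durations.std() / durations.mean()
# gamma(shape=2) has CV = 1/sqrt(2) ~ 0.71; an exponential would give CV = 1
print(durations.mean(), cv)
```

A coefficient of variation away from 1 is a quick diagnostic for rejecting exponential holding times, which is the kind of evidence that motivates the semi-Markov model here.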
A Q-Ising model application for linear-time image segmentation
NASA Astrophysics Data System (ADS)
Bentrem, Frank W.
2010-10-01
A computational method is presented which efficiently segments digital grayscale images by directly applying the Q-state Ising (or Potts) model. Since the Potts model was first proposed in 1952, physicists have studied lattice models to gain deep insights into magnetism and other disordered systems. For some time, researchers have realized that digital images may be modeled in much the same way as these physical systems (i.e., as a square lattice of numerical values). A major drawback of conventional Potts-model methods for image segmentation is that they run in exponential time. Advances have been made via certain approximations to reduce the segmentation process to power-law time. However, in many applications (such as for sonar imagery), real-time processing requires much greater efficiency. This article contains a description of an energy minimization technique that applies four Potts (Q-Ising) models directly to the image and processes in linear time. The result is analogous to partitioning the system into regions of four classes of magnetism. This direct Potts segmentation technique is demonstrated on photographic, medical, and acoustic images.
Rapid Acceleration of a Coronal Mass Ejection in the Low Corona and Implications of Propagation
NASA Technical Reports Server (NTRS)
Gallagher, Peter T.; Lawrence, Gareth R.; Dennis, Brian R.
2003-01-01
A high-velocity Coronal Mass Ejection (CME) associated with the 2002 April 21 X1.5 flare is studied using a unique set of observations from the Transition Region and Coronal Explorer (TRACE), the Ultraviolet Coronagraph Spectrometer (UVCS), and the Large-Angle Spectrometric Coronagraph (LASCO). The event is first observed as a rapid rise in GOES X-rays, followed by simultaneous conjugate footpoint brightenings connected by an ascending loop or flux-rope feature. While expanding, the appearance of the feature remains remarkably constant as it passes through the TRACE 195 A passband and LASCO fields-of-view, allowing its height-time behavior to be accurately determined. An analytic function, having exponential and linear components, is found to represent the height-time evolution of the CME in the range 1.05-26 solar radii (R⊙). The CME acceleration rises exponentially to approx. 900 m/sq s within approximately 20 min, peaking at approx. 1400 m/sq s when the leading edge is at approx. 1.7 R⊙. The acceleration subsequently falls off as a slowly varying exponential for approx. 90 min. At distances beyond approx. 3.4 R⊙, the height-time profile is approximately linear with a constant velocity of approx. 2400 km/s. These results are briefly discussed in light of recent kinematic models of CMEs.
Huang, Mengqi; Zhou, Xiaoming; Wang, Huiying; Xing, Da
2018-02-06
A novel CRISPR/Cas9-triggered isothermal exponential amplification reaction (CAS-EXPAR) strategy based on CRISPR/Cas9 cleavage and nicking endonuclease (NEase) mediated nucleic acid amplification was developed for rapid and site-specific nucleic acid detection. CAS-EXPAR was primed by the target DNA fragment produced by CRISPR/Cas9 cleavage, and the amplification reaction proceeded cyclically to generate a large number of DNA replicates, which were detected using a real-time fluorescence monitoring method. This strategy, which combines the advantages of CRISPR/Cas9 and exponential amplification, showed high specificity as well as rapid amplification kinetics. Unlike conventional nucleic acid amplification reactions, CAS-EXPAR does not require exogenous primers, which often cause target-independent amplification. Instead, primers are first generated by Cas9/sgRNA-directed site-specific cleavage of the target and accumulate during the reaction. This strategy was demonstrated to give a detection limit of 0.82 amol and showed excellent specificity in discriminating single-base mismatches. Moreover, the applicability of this method to the detection of DNA methylation and L. monocytogenes total RNA was also verified. Therefore, CAS-EXPAR may provide a new paradigm for efficient nucleic acid amplification and holds potential for molecular diagnostic applications.
Honrado, Carlos; Dong, Tao
2014-01-01
Incidence of urinary tract infections (UTIs) is the second highest among all infections; thus, there is a high demand for bacteriuria detection. Escherichia coli are the main cause of UTIs, with microscopy methods and urine culture being the detection standard of these bacteria. However, the urine sampling and analysis required for these methods can be both time-consuming and complex. This work proposes a capacitive touch screen sensor (CTSS) concept as feasible alternative for a portable UTI detection device. Finite element method (FEM) simulations were conducted with a CTSS model. An exponential response of the model to increasing amounts of E. coli and liquid samples was observed. A measurable capacitance change due to E. coli presence and a tangible difference in the response given to urine and water samples were also detected. Preliminary experimental studies were also conducted on a commercial CTSS using liquid solutions with increasing amounts of dissolved ions. The CTSS was capable of distinguishing different volumes of liquids, also giving an exponential response. Furthermore, the CTSS gave higher responses to solutions with a superior amount of ions. Urine samples gave the top response among tested liquids. Thus, the CTSS showed the capability to differentiate solutions by their ionic content. PMID:25196109
2017-01-01
Cell size distribution is highly reproducible, whereas the size of individual cells often varies greatly within a tissue. This is obvious in a population of Arabidopsis thaliana leaf epidermal cells, which ranged from 1,000 to 10,000 μm2 in size. Endoreduplication is a specialized cell cycle in which nuclear genome size (ploidy) is doubled in the absence of cell division. Although epidermal cells require endoreduplication to enhance cellular expansion, the issue of whether this mechanism is sufficient for explaining cell size distribution remains unclear due to a lack of quantitative understanding linking the occurrence of endoreduplication with cell size diversity. Here, we addressed this question by quantitatively summarizing ploidy profile and cell size distribution using a simple theoretical framework. We first found that endoreduplication dynamics is a Poisson process through cellular maturation. This finding allowed us to construct a mathematical model to predict the time evolution of a ploidy profile with a single rate constant for endoreduplication occurrence in a given time. We reproduced experimentally measured ploidy profile in both wild-type leaf tissue and endoreduplication-related mutants with this analytical solution, further demonstrating the probabilistic property of endoreduplication. We next extended the mathematical model by incorporating the element that cell size is determined according to ploidy level to examine cell size distribution. This analysis revealed that cell size is exponentially enlarged 1.5 times every endoreduplication round. Because this theoretical simulation successfully recapitulated experimentally observed cell size distributions, we concluded that Poissonian endoreduplication dynamics and exponential size-boosting are the sources of the broad cell size distribution in epidermal tissue. 
More generally, this study contributes to a quantitative understanding whereby stochastic dynamics generate steady-state biological heterogeneity. PMID:28926847
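The two ingredients identified above, Poissonian endoreduplication and a 1.5-fold size boost per round, can be composed into a direct simulation of the size distribution. The mean round count λ and baseline size below are illustrative assumptions; the 1.5× factor is the paper's estimate.

```python
import numpy as np

rng = np.random.default_rng(5)

lam = 2.0      # mean number of endoreduplication rounds per cell (assumed)
s0 = 1000.0    # baseline cell size in um^2 (assumed)
boost = 1.5    # size multiplier per endoreduplication round (from the study)

rounds = rng.poisson(lam, size=100_000)   # Poissonian endoreduplication
size = s0 * boost**rounds                 # exponential size-boosting

# ploidy profile: fraction of cells at 2C, 4C, 8C, ... follows the Poisson pmf
for k in range(5):
    print(f"{2**(k + 1)}C: {np.mean(rounds == k):.3f}")
print("size range:", size.min(), size.max())
```

Even with a modest λ, the multiplicative size rule spreads cells over roughly an order of magnitude in area, matching the breadth of the observed epidermal distribution.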
Spencer, Richard G
2010-09-01
A type of "matched filter" (MF), used extensively in the processing of one-dimensional spectra, is defined by multiplication of a free-induction decay (FID) by a decaying exponential with the same time constant as that of the FID. This maximizes, in a sense to be defined, the signal-to-noise ratio (SNR) in the spectrum obtained after Fourier transformation. However, a different entity known also as the matched filter was introduced by van Vleck in the context of pulse detection in the 1940's and has become widely integrated into signal processing practice. These two types of matched filters appear to be quite distinct. In the NMR case, the "filter", that is, the exponential multiplication, is defined by the characteristics of, and applied to, a time domain signal in order to achieve improved SNR in the spectral domain. In signal processing, the filter is defined by the characteristics of a signal in the spectral domain, and applied in order to improve the SNR in the temporal (pulse) domain. We reconcile these two distinct implementations of the matched filter, demonstrating that the NMR "matched filter" is a special case of the matched filter more rigorously defined in the signal processing literature. In addition, two limitations in the use of the MF are highlighted. First, application of the MF distorts resonance ratios as defined by amplitudes, although not as defined by areas. Second, the MF maximizes SNR with respect to resonance amplitude, while intensities are often more appropriately defined by areas. Maximizing the SNR with respect to area requires a somewhat different approach to matched filtering.
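The NMR-style matched filter described above is simply multiplication of the FID by a decaying exponential with the FID's own time constant before Fourier transformation. A numpy sketch comparing peak SNR before and after filtering (the resonance frequency, decay constant, and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

n, dt, t2 = 4096, 1e-3, 0.05            # points, dwell time (s), T2 (s)
t = np.arange(n) * dt
fid = np.exp(2j * np.pi * 100 * t - t / t2)   # one resonance at 100 Hz
fid += 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def peak_snr(x):
    spec = np.abs(np.fft.fft(x))
    # peak height over the noise level in a signal-free spectral region
    return spec.max() / spec[2000:3000].std()

mf = fid * np.exp(-t / t2)   # matched filter: same time constant as the FID
print(peak_snr(fid), peak_snr(mf))
```

The filter down-weights the late, noise-dominated part of the FID, boosting amplitude SNR at the cost of doubling the linewidth, which is the amplitude/area trade-off the abstract highlights.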
Chowell, Gerardo; Viboud, Cécile; Hyman, James M; Simonsen, Lone
2015-01-21
While many infectious disease epidemics are initially characterized by an exponential growth in time, we show that district-level Ebola virus disease (EVD) outbreaks in West Africa follow slower polynomial-based growth kinetics over several generations of the disease. We analyzed epidemic growth patterns at three different spatial scales (regional, national, and subnational) of the Ebola virus disease epidemic in Guinea, Sierra Leone and Liberia by compiling publicly available weekly time series of reported EVD case numbers from the patient database available from the World Health Organization website for the period 05-Jan to 17-Dec 2014. We found significant differences in the growth patterns of EVD cases at the scale of the country, district, and other subnational administrative divisions. The national cumulative curves of EVD cases in Guinea, Sierra Leone, and Liberia show periods of approximate exponential growth. In contrast, local epidemics are asynchronous and exhibit slow growth patterns during 3 or more EVD generations, which can be better approximated by a polynomial than an exponential function. The slower than expected growth pattern of local EVD outbreaks could result from a variety of factors, including behavior changes, success of control interventions, or intrinsic features of the disease such as a high level of clustering. Quantifying the contribution of each of these factors could help refine estimates of final epidemic size and the relative impact of different mitigation efforts in current and future EVD outbreaks.
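The exponential-versus-polynomial distinction drawn here can be tested on a cumulative case series by comparing a log-linear fit (exponential growth) against a log-log fit (polynomial growth). Synthetic quadratic-growth data are used below for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
weeks = np.arange(1.0, 16.0)
# synthetic cumulative cases growing quadratically, with mild noise
cum_cases = 4.0 * weeks**2 * np.exp(rng.normal(0, 0.05, weeks.size))

# exponential growth: log(C) linear in t; polynomial: log(C) linear in log(t)
exp_coef, exp_resid, *_ = np.polyfit(weeks, np.log(cum_cases), 1, full=True)
pol_coef, pol_resid, *_ = np.polyfit(np.log(weeks), np.log(cum_cases), 1,
                                     full=True)

print("exponential SSR:", exp_resid[0], "polynomial SSR:", pol_resid[0])
print("estimated polynomial degree:", pol_coef[0])  # slope on log-log axes
```

For the district-level EVD curves described above, this kind of comparison favors the polynomial model over several disease generations, whereas the national aggregates look closer to exponential.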
In vivo chlorine and sodium MRI of rat brain at 21.1 T
Elumalai, Malathy; Kitchen, Jason A.; Qian, Chunqi; Gor’kov, Peter L.; Brey, William W.
2017-01-01
Object: MR imaging of low-gamma nuclei at the ultrahigh magnetic field of 21.1 T provides a new opportunity for understanding a variety of biological processes. Among these, chlorine and sodium are attracting attention for their involvement in brain function and cancer development. Materials and methods: MRI of 35Cl and 23Na were performed and relaxation times were measured in vivo in normal rat (n = 3) and in rat with glioma (n = 3) at 21.1 T. The concentrations of both nuclei were evaluated using the center-out back-projection method. Results: The T1 relaxation curve of chlorine in normal rat head was fitted by a bi-exponential function (T1a = 4.8 ms, fraction 0.7; T1b = 24.4 ± 7 ms, fraction 0.3) and compared with sodium (T1 = 41.4 ms). Free induction decays (FIDs) of chlorine and sodium in vivo were bi-exponential with similar rapidly decaying components of T2a* = 0.4 ms and T2a* = 0.53 ms, respectively. Effects of a small acquisition matrix and bi-exponential FIDs were assessed for quantification of chlorine (33.2 mM) and sodium (44.4 mM) in rat brain. Conclusion: The study modeled a dramatic effect of the bi-exponential decay on MRI results. The revealed increased chlorine concentration in glioma (~1.5 times) relative to normal brain correlates with the hypothesis asserting the importance of chlorine for tumor progression. PMID:23748497
NASA Astrophysics Data System (ADS)
Bajargaan, Ruchi; Patel, Arvind
2018-04-01
One-dimensional unsteady adiabatic flow behind an exponential shock wave propagating in a self-gravitating, rotating, axisymmetric dusty gas with heat conduction and radiation heat flux, which has exponentially varying azimuthal and axial fluid velocities, is investigated. The shock wave is driven out by a piston moving with time according to an exponential law. The dusty gas is taken to be a mixture of a non-ideal gas and small solid particles. The density of the ambient medium is assumed to be constant. The equilibrium flow conditions are maintained, and the energy, which is continuously supplied by the piston, varies exponentially. The heat conduction is expressed in terms of Fourier's law, and the radiation is assumed to be of diffusion type for an optically thick grey gas model. The thermal conductivity and the absorption coefficient are assumed to vary with temperature and density according to a power law. The effects of the variation of the heat transfer parameters, gravitation parameter and dusty gas parameters on the shock strength, the distance between the piston and the shock front, and the flow variables are studied in detail. It is interesting to note that the similarity solution exists under constant initial angular velocity, and that the shock strength is independent of the self-gravitation, heat conduction and radiation heat flux.
A decades-long fast-rise-exponential-decay flare in low-luminosity AGN NGC 7213
NASA Astrophysics Data System (ADS)
Yan, Zhen; Xie, Fu-Guo
2018-03-01
We analysed the four-decades-long X-ray light curve of the low-luminosity active galactic nucleus (LLAGN) NGC 7213 and discovered a fast-rise-exponential-decay (FRED) pattern, i.e. the X-ray luminosity increased by a factor of ≈4 within 200 d, and then decreased exponentially with an e-folding time of ≈8116 d (≈22.2 yr). For the theoretical understanding of the observations, we examined three variability models proposed in the literature: the thermal-viscous disc instability model, the radiation pressure instability model, and the TDE (tidal disruption event) model. We find that a delayed tidal disruption of a main-sequence star is most favourable; both the thermal-viscous disc instability model and the radiation pressure instability model fail to explain some key properties observed, so we argue that they are unlikely.
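The FRED pattern itself is straightforward to parameterize. A hedged sketch using the reported factor-4 rise and ≈8116 d e-folding decay (the functional form, quiescent flux, and rise timescale below are illustrative choices, not the authors' fitted model):

```python
import numpy as np

def fred(t, t0, f0, amp, t_rise, tau_decay):
    """Fast-rise-exponential-decay light-curve model (illustrative form)."""
    rise = 1 - np.exp(-(t - t0) / t_rise)
    decay = np.exp(-(t - t0) / tau_decay)
    return np.where(t < t0, f0, f0 + amp * rise * decay)

t = np.linspace(0, 30000, 2000)  # days
# quiescent flux 1 (arbitrary units), ~4x brightening, ~22 yr e-folding decay
lx = fred(t, t0=1000, f0=1.0, amp=4.0, t_rise=100, tau_decay=8116)

print("peak/quiescent flux ratio:", lx.max())
```

Because the decay timescale dwarfs the rise timescale, the peak flux is close to f0 + amp, and the tail is dominated by the single e-folding constant, which is what makes the pattern recognizable across a four-decade light curve.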
NASA Astrophysics Data System (ADS)
Starn, J. J.; Belitz, K.; Carlson, C.
2017-12-01
Groundwater residence-time distributions (RTDs) are critical for assessing the susceptibility of water resources to contamination. In this novel approach for estimating regional RTDs, groundwater flow was first simulated using existing regional digital data sets in 13 intermediate-size watersheds (each an average of 7,000 square kilometers) that are representative of a wide range of glacial systems. RTDs were simulated with particle tracking. We refer to these models as "general models" because they are based on regional, as opposed to site-specific, digital data. Parametric RTDs were created from particle RTDs by fitting 1- and 2-component Weibull, gamma, and inverse Gaussian distributions, thus reducing a large number of particle travel times to 3 to 7 parameters (shape, location, and scale for each component plus a mixing fraction) for each modeled area. The scale parameter of these distributions is related to the mean exponential age; the shape parameter controls departure from the ideal exponential distribution and is partly a function of interaction with bedrock and with drainage density. Given the flexible shape and mathematical similarity of these distributions, any of them is potentially a good fit to particle RTDs. The 1-component gamma distribution provided a good fit to basin-wide particle RTDs. RTDs at monitoring wells and streams often have more complicated shapes than basin-wide RTDs, caused in part by heterogeneity in the model, and generally require 2-component distributions. A machine learning model was trained on the RTD parameters using features derived from regionally available watershed characteristics such as recharge rate, material thickness, and stream density. RTDs appeared to vary systematically across the landscape in relation to watershed features.
This relation was used to produce maps of useful metrics with respect to risk-based thresholds, such as the time to first exceedance, time to maximum concentration, time above the threshold (exposure time), and the time until last exceedance; thus, the parameters of groundwater residence time are measures of the intrinsic susceptibility of groundwater to contamination.
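The parametric reduction described above, compressing thousands of particle travel times into a few distribution parameters, can be sketched with a one-component gamma fit. The travel times below are synthetic; the shape and scale values are illustrative, with shape = 1 corresponding to the ideal exponential RTD.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# synthetic particle travel times (years); shape < 1 departs from the
# ideal exponential RTD toward more very-young and very-old water
true_shape, true_scale = 0.8, 50.0
ages = rng.gamma(true_shape, true_scale, size=20_000)

shape, loc, scale = stats.gamma.fit(ages, floc=0)  # fix location at zero
mean_age = shape * scale                           # mean residence time
print(shape, scale, mean_age)

# a risk-style metric: fraction of water younger than a 10-year threshold
print(stats.gamma.cdf(10, shape, scale=scale))
```

Once the fitted parameters are in hand, threshold-based metrics like time-above-threshold follow directly from the distribution's CDF rather than from the raw particle cloud.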
TIME SHARING WITH AN EXPLICIT PRIORITY QUEUING DISCIPLINE.
exponentially distributed service times and an ordered priority queue. Each new arrival buys a position in this queue by offering a non-negative bribe to the... parameters is investigated through numerical examples. Finally, to maximize the expected revenue per unit time accruing from bribes, an optimization
Ahmad Khan, Junaid; Mustafa, M; Hayat, T; Alsaedi, A
2015-01-01
This work deals with the flow and heat transfer in an upper-convected Maxwell fluid above an exponentially stretching surface. The Cattaneo-Christov heat flux model is employed for the formulation of the energy equation. This model can predict the effects of thermal relaxation time on the boundary layer. A similarity approach is utilized to normalize the governing boundary layer equations. Local similarity solutions are achieved by a shooting approach together with a fourth-fifth-order Runge-Kutta integration technique and Newton's method. Our computations reveal that the fluid temperature has an inverse relationship with the thermal relaxation time. Further, the fluid velocity is a decreasing function of the fluid relaxation time. A comparison of Fourier's law and the Cattaneo-Christov law is also presented. Such results, even for the Newtonian fluid case, are not yet available in the literature.
Rainbow net analysis of VAXcluster system availability
NASA Technical Reports Server (NTRS)
Johnson, Allen M., Jr.; Schoenfelder, Michael A.
1991-01-01
A system modeling technique, Rainbow Nets, is used to evaluate the availability and mean-time-to-interrupt of the VAXcluster. These results are compared to the exact analytic results showing that reasonable accuracy is achieved through simulation. The complexity of the Rainbow Net does not increase as the number of processors increases, but remains constant, unlike a Markov model which expands exponentially. The constancy is achieved by using tokens with identity attributes (items) that can have additional attributes associated with them (features) which can exist in multiple states. The time to perform the simulation increases, but this is a polynomial increase rather than exponential. There is no restriction on distributions used for transition firing times, allowing real situations to be modeled more accurately by choosing the distribution which best fits the system performance and eliminating the need for simplifying assumptions.
Muñoz-Cuevas, Marina; Fernández, Pablo S; George, Susan; Pin, Carmen
2010-05-01
The dynamic model for the growth of a bacterial population described by Baranyi and Roberts (J. Baranyi and T. A. Roberts, Int. J. Food Microbiol. 23:277-294, 1994) was applied to model the lag period and exponential growth of Listeria monocytogenes under conditions of fluctuating temperature and water activity (a(w)) values. To model the duration of the lag phase, the dependence of the parameter h(0), which quantifies the amount of work done during the lag period, on the previous and current environmental conditions was determined experimentally. This parameter depended not only on the magnitude of the change between the previous and current environmental conditions but also on the current growth conditions. In an exponentially growing population, any change in the environment requiring a certain amount of work to adapt to the new conditions initiated a lag period that lasted until that work was finished. Observations for several scenarios in which exponential growth was halted by a sudden change in the temperature and/or a(w) were in good agreement with predictions. When a population already in a lag period was subjected to environmental fluctuations, the system was reset with a new lag phase. The work to be done during the new lag phase was estimated to be the workload due to the environmental change plus the unfinished workload from the uncompleted previous lag phase.
Comparing Exponential and Exponentiated Models of Drug Demand in Cocaine Users
Strickland, Justin C.; Lile, Joshua A.; Rush, Craig R.; Stoops, William W.
2016-01-01
Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model, but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use), whereas these correlations were less consistent for exponential parameters. Our findings show that the selection of zero replacement values affects demand parameters and their association with drug-use outcomes when using the exponential model, but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency, in addition to demonstrating construct validity and generalizability. PMID:27929347
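The exponentiated demand equation discussed above can be sketched as follows. The functional form assumed here is the Koffarnus-style exponentiated version of the exponential demand model, and the parameter values are invented for illustration; the key property is that the model predicts raw consumption Q rather than log Q, so observed zeros need no replacement value:

```python
import numpy as np

def exponentiated_demand(price, q0, alpha, k):
    """Q = Q0 * 10**(k * (exp(-alpha * Q0 * price) - 1)); zeros in data need no log transform."""
    return q0 * 10.0 ** (k * (np.exp(-alpha * q0 * price) - 1.0))

prices = np.array([0.0, 0.5, 1.0, 5.0, 10.0, 50.0])  # hypothetical unit prices
q = exponentiated_demand(prices, q0=10.0, alpha=0.01, k=2.0)

# Demand intensity: consumption at zero price equals Q0 exactly
print(q[0])
# Elasticity (alpha) drives the decline toward the 10**-k floor at high prices
print(q[-1])
```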
ERIC Educational Resources Information Center
Champion, Robby
2017-01-01
While designs of professional learning have expanded exponentially beyond the required-attendance inservice day workshop model, infusing meaningful data into daily decisions and conversations has not gained as much traction. Leaders of professional learning are increasingly coming under pressure to demonstrate that evidence is not an anathema in…
Resonator reset in circuit QED by optimal control for large open quantum systems
NASA Astrophysics Data System (ADS)
Boutin, Samuel; Andersen, Christian Kraglund; Venkatraman, Jayameenakshi; Ferris, Andrew J.; Blais, Alexandre
2017-10-01
We study an implementation of the open GRAPE (gradient ascent pulse engineering) algorithm well suited for large open quantum systems. While typical implementations of optimal control algorithms for open quantum systems rely on explicit matrix exponential calculations, our implementation avoids these operations, leading to a polynomial speedup of the open GRAPE algorithm in cases of interest. This speedup, as well as the reduced memory requirements of our implementation, is illustrated by comparison to a standard implementation of open GRAPE. As a practical example, we apply this open-system optimization method to active reset of a readout resonator in circuit QED. In this problem, the shape of a microwave pulse is optimized so as to empty the cavity of measurement photons as fast as possible. Using our open GRAPE implementation, we obtain pulse shapes leading to a reset time over 4 times faster than passive reset.
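As a generic illustration (not the authors' implementation) of why avoiding explicit matrix exponentials pays off, one can propagate a vector with scipy's expm_multiply, which applies exp(A) to v without ever forming the dense exponential; the matrix below is a random sparse placeholder, not a Liouvillian:

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import expm_multiply
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 200
A = sprandom(n, n, density=0.01, random_state=rng, format="csr")
v = rng.standard_normal(n)

direct = expm(A.toarray()) @ v   # dense exponential: cubic cost, quadratic memory
krylov = expm_multiply(A, v)     # action only: far cheaper for large sparse A

print(np.allclose(direct, krylov, atol=1e-8))
```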
Propensity and stickiness in the naming game: Tipping fractions of minorities
NASA Astrophysics Data System (ADS)
Thompson, Andrew M.; Szymanski, Boleslaw K.; Lim, Chjan C.
2014-10-01
Agent-based models of the binary naming game are generalized here to represent a family of models parameterized by two continuous parameters. These parameters define varying listener-speaker interactions on the individual level, with one parameter controlling the speaker and the other the listener in each interaction. The major finding presented here is that the generalized naming game preserves the existence of critical thresholds for the size of committed minorities. Above such a threshold, a committed minority causes fast convergence to consensus (in time logarithmic in the size of the network), even when there are other parameters influencing the system. Below such a threshold, reaching consensus requires time exponential in the size of the network. Moreover, the two introduced parameters cause bifurcations in the stabilities of the system's fixed points and may lead to changes in the system's consensus.
Symmetry-protected coherent relaxation of open quantum systems
NASA Astrophysics Data System (ADS)
van Caspel, Moos; Gritsev, Vladimir
2018-05-01
We compute the effect of Markovian bulk dephasing noise on the staggered magnetization of the spin-1/2 XXZ Heisenberg chain, as the system evolves after a Néel quench. For sufficiently weak system-bath coupling, the unitary dynamics are found to be preserved up to a single exponential damping factor. This is a consequence of the interplay between PT symmetry and weak symmetries, which strengthens previous predictions for PT -symmetric Liouvillian dynamics. Requirements are a nondegenerate PT -symmetric generator of time evolution L ̂, a weak parity symmetry, and an observable that is antisymmetric under this parity transformation. The spectrum of L ̂ then splits up into symmetry sectors, yielding the same decay rate for all modes that contribute to the observable's time evolution. This phenomenon may be realized in trapped ion experiments and has possible implications for the control of decoherence in out-of-equilibrium many-body systems.
Time prediction of failure of a type of lamp using a general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to obtain the average predicted value of lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The random failure-time variable is modeled with the exponential distribution as the basis, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by fitting its parameters through construction of the survival function and the empirical cumulative function. The fitted model is then used to predict the average failure time for the type of lamp. By grouping the data into several intervals with the average failure value at each interval, the average failure time of the model is calculated for each interval; the p value obtained from the test is 0.3296.
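A minimal sketch of the exponential base model mentioned above, using invented failure times rather than the paper's lamp data; with a constant hazard, the rate estimate is the reciprocal of the sample mean:

```python
import numpy as np

# Invented failure times (hours) for a batch of lamps
failure_times = np.array([120.0, 340.0, 95.0, 410.0, 230.0, 180.0, 305.0])

lam = 1.0 / failure_times.mean()        # constant hazard rate of the exponential base model
survival = lambda t: np.exp(-lam * t)   # S(t) = exp(-lambda * t)
mean_failure_time = 1.0 / lam           # predicted average failure time

print(f"hazard={lam:.5f} per hour, mean failure time={mean_failure_time:.1f} hours")
```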
Bîrlea, Sinziana I; Corley, Gavin J; Bîrlea, Nicolae M; Breen, Paul P; Quondamatteo, Fabio; OLaighin, Gearóid
2009-01-01
We propose a new method for extracting the electrical properties of human skin based on time-constant analysis of its exponential response to impulse stimulation. This analysis also yielded an incidental finding: we have found that stratum corneum electroporation can be detected using this analysis method. We have observed that a one-time-constant model is appropriate for describing the electrical properties of human skin at low-amplitude applied voltages (<30 V), and a two-time-constant model best describes skin electrical properties at higher-amplitude applied voltages (>30 V). Higher voltage amplitudes (>30 V) have been proven to create pores in the skin's stratum corneum, which offer a new, lower-resistance pathway for the passage of current through the skin. Our data show that when pores are formed in the stratum corneum they can be detected, in vivo, due to the fact that a second time constant describes current flow through them.
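The one- versus two-time-constant distinction can be illustrated with a hedged sketch: fitting a double-exponential to a synthetic decay. All constants below are invented for the example, not skin measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_tau(t, a1, tau1, a2, tau2):
    """Two-time-constant response: sum of two exponential decays."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0.0, 5.0, 200)
rng = np.random.default_rng(2)
# Synthetic response with a fast and a slow component plus small noise
y = two_tau(t, 1.0, 0.3, 0.5, 2.0) + 0.005 * rng.standard_normal(t.size)

popt, _ = curve_fit(two_tau, t, y, p0=[1.0, 0.5, 0.5, 1.5])
a1, tau1, a2, tau2 = popt
print(f"tau1={tau1:.2f}, tau2={tau2:.2f}")
```

A second, well-separated time constant emerging from such a fit is the kind of signature interpreted above as current through newly formed pores.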
Huang, Chuangxia; Cao, Jie; Cao, Jinde
2016-10-01
This paper addresses the exponential stability of switched cellular neural networks by using the mode-dependent average dwell time (MDADT) approach. This method differs from the traditional average dwell time (ADT) method in permitting each subsystem to have its own average dwell time. Detailed investigations have been carried out for two cases: one in which all subsystems are stable, and one in which stable subsystems coexist with unstable subsystems. By employing Lyapunov functionals, linear matrix inequalities (LMIs), the Jensen-type inequality, the Wirtinger-based inequality, and the reciprocally convex approach, we derive some novel and less conservative conditions on the exponential stability of the networks. Compared to ADT, the proposed MDADT approach shows that the minimal dwell time of each subsystem is smaller and the switched system stabilizes faster. The obtained results extend and improve some existing ones. Moreover, the validity and effectiveness of these results are demonstrated through numerical simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Liu, Hongjian; Wang, Zidong; Shen, Bo; Huang, Tingwen; Alsaadi, Fuad E
2018-06-01
This paper is concerned with the globally exponential stability problem for a class of discrete-time stochastic memristive neural networks (DSMNNs) with both leakage delays and probabilistic time-varying delays. For the probabilistic delays, a sequence of Bernoulli distributed random variables is utilized to determine within which intervals the time-varying delays fall at a given time instant. The sector-bounded activation function is considered in the addressed DSMNN. By taking into account the state-dependent characteristics of the network parameters and choosing an appropriate Lyapunov-Krasovskii functional, some sufficient conditions are established under which the underlying DSMNN is globally exponentially stable in the mean square. The derived conditions are made dependent on both the leakage and the probabilistic delays, and are therefore less conservative than the traditional delay-independent criteria. A simulation example is given to show the effectiveness of the proposed stability criterion. Copyright © 2018 Elsevier Ltd. All rights reserved.
Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.
2011-01-01
We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095
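A brief sketch of the stretched-exponential fit that the abstract contrasts with mono-exponential relaxation; the decay data and parameters below are synthetic, not NMR measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, s0, t2, alpha):
    """S(t) = S0 * exp(-(t/T2)**alpha); alpha = 1 recovers mono-exponential decay."""
    return s0 * np.exp(-((t / t2) ** alpha))

t = np.linspace(0.01, 100.0, 100)   # echo times (ms), illustrative
rng = np.random.default_rng(3)
signal = stretched_exp(t, 1.0, 30.0, 0.8) + 0.002 * rng.standard_normal(t.size)

(s0, t2, alpha), _ = curve_fit(stretched_exp, t, signal, p0=[1.0, 20.0, 1.0])
print(f"T2={t2:.1f} ms, alpha={alpha:.2f}")
```

A fitted alpha below 1 is the single-parameter signature of anomalous relaxation described above, in place of the extra parameters a multi-exponential fit would need.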
Spatial evolution of quantum mechanical states
NASA Astrophysics Data System (ADS)
Christensen, N. D.; Unger, J. E.; Pinto, S.; Su, Q.; Grobe, R.
2018-02-01
The time-dependent Schrödinger equation is solved traditionally as an initial-time value problem, where its solution is obtained by the action of the unitary time-evolution propagator on the quantum state that is known at all spatial locations but only at t = 0. We generalize this approach by examining the spatial evolution from a state that is, by contrast, known at all times t, but only at one specific location. The corresponding spatial-evolution propagator turns out to be pseudo-unitary. In contrast to the real energies that govern the usual (unitary) time evolution, the spatial evolution can therefore require complex phases associated with dynamically relevant solutions that grow exponentially. By introducing a generalized scalar product, for which the spatial generator is Hermitian, one can show that the temporal integral over the probability current density is spatially conserved, in full analogy to the usual norm of the state, which is temporally conserved. As an application of the spatial propagation formalism, we introduce a spatial backtracking technique that permits us to reconstruct any quantum information about an atom from the ionization data measured at a detector outside the interaction region.
Scaling behavior of sleep-wake transitions across species
NASA Astrophysics Data System (ADS)
Lo, Chung-Chuan; Chou, Thomas; Ivanov, Plamen Ch.; Penzel, Thomas; Mochizuki, Takatoshi; Scammell, Thomas; Saper, Clifford B.; Stanley, H. Eugene
2003-03-01
Uncovering the mechanisms controlling sleep is a fascinating scientific challenge. It can be viewed as transitions of states of a very complex system, the brain. We study the time dynamics of short awakenings during sleep for three species: humans, rats and mice. We find, for all three species, that wake durations follow a power-law distribution, and sleep durations follow exponential distributions. Surprisingly, all three species have the same power-law exponent for the distribution of wake durations, but the exponential time scale of the distributions of sleep durations varies across species. We suggest that the dynamics of short awakenings are related to species-independent fluctuations of the system, while the dynamics of sleep is related to system-dependent mechanisms which change with species.
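A hedged sketch of the two fits the abstract contrasts, on synthetic durations rather than the sleep recordings: an exponential time scale by maximum likelihood, and a power-law exponent via the Hill estimator:

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic sleep durations (exponential) and wake durations (power law)
sleep = rng.exponential(scale=20.0, size=10000)            # minutes
xmin = 1.0
wake = xmin * (1.0 - rng.random(10000)) ** (-1.0 / 1.3)    # Pareto tail, density exponent ~2.3

tau_hat = sleep.mean()                                      # exponential-scale MLE
alpha_hat = 1.0 + wake.size / np.sum(np.log(wake / xmin))   # Hill / power-law MLE

print(f"exponential time scale ~ {tau_hat:.1f} min, power-law exponent ~ {alpha_hat:.2f}")
```

Under the abstract's findings, tau_hat would vary across species while alpha_hat would not.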
Beyond the usual mapping functions in GPS, VLBI and Deep Space tracking.
NASA Astrophysics Data System (ADS)
Barriot, Jean-Pierre; Serafini, Jonathan; Sichoix, Lydie
2014-05-01
We describe here a new algorithm to model the water content of the atmosphere (including ZWD) from GPS slant wet delays relative to a single receiver. We first make the assumption that the water vapor content is mainly governed by a scale height (exponential law), and second that the departures from this decaying exponential can be mapped as a set of low-degree 3D Zernike functions (w.r.t. space) and Tchebyshev polynomials (w.r.t. time). We compare this new algorithm with previous algorithms known as mapping functions in GPS, VLBI and Deep Space tracking, and give an example with data acquired over a one-day time span at the Geodesy Observatory of Tahiti.
Design and implementation of the NaI(Tl)/CsI(Na) detectors output signal generator
NASA Astrophysics Data System (ADS)
Zhou, Xu; Liu, Cong-Zhan; Zhao, Jian-Ling; Zhang, Fei; Zhang, Yi-Fei; Li, Zheng-Wei; Zhang, Shuo; Li, Xu-Fang; Lu, Xue-Feng; Xu, Zhen-Ling; Lu, Fang-Jun
2014-02-01
We designed and implemented a signal generator that can simulate the output of the NaI(Tl)/CsI(Na) detectors' pre-amplifiers onboard the Hard X-ray Modulation Telescope (HXMT). Using FPGA (Field Programmable Gate Array) development in the VHDL language and adding a random constituent, we produced a double-exponential random pulse signal generator. The statistical distribution of the signal amplitude is programmable. The occurrence time intervals of adjacent signals statistically follow a negative exponential distribution.
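A software analogue of such a generator can be sketched as follows; the time constants, event rate, and amplitude distribution are assumptions for illustration, not the HXMT design values:

```python
import numpy as np

rng = np.random.default_rng(5)

def double_exp_pulse(t, amplitude, tau_rise=0.05, tau_fall=1.0):
    """Double-exponential pulse: A * (exp(-t/tau_fall) - exp(-t/tau_rise)), zero before onset."""
    tc = np.maximum(t, 0.0)  # clip pre-onset times so they contribute nothing
    return amplitude * (np.exp(-tc / tau_fall) - np.exp(-tc / tau_rise))

# Exponentially distributed inter-arrival intervals (a Poisson arrival process)
intervals = rng.exponential(scale=10.0, size=100)   # microseconds
arrivals = np.cumsum(intervals)

t = np.arange(0.0, arrivals[-1], 0.01)
trace = np.zeros_like(t)
for t0 in arrivals:
    amp = rng.normal(1.0, 0.1)   # programmable amplitude distribution (Gaussian here)
    trace += double_exp_pulse(t - t0, amp)

print(trace.max())
```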
Vaurio, Rebecca G; Simmonds, Daniel J; Mostofsky, Stewart H
2009-10-01
One of the most consistent findings in children with ADHD is increased moment-to-moment variability in reaction time (RT). The source of increased RT variability can be examined using ex-Gaussian analyses that divide variability into normal and exponential components, and Fast Fourier transform (FFT) analyses that allow for detailed examination of the frequency of responses in the exponential distribution. Prior studies of ADHD using these methods have produced variable results, potentially related to differences in task demand. The present study sought to examine the profile of RT variability in ADHD using two Go/No-go tasks with differing levels of cognitive demand. A total of 140 children (57 with ADHD and 83 typically developing controls), ages 8-13 years, completed both a "simple" Go/No-go task and a more "complex" Go/No-go task with increased working memory load. Repeated measures ANOVA of ex-Gaussian functions revealed that for both tasks, children with ADHD demonstrated increased variability in both the normal/Gaussian (significantly elevated sigma) and the exponential (significantly elevated tau) components. In contrast, FFT analysis of the exponential component revealed a significant task x diagnosis interaction, such that infrequent slow responses in ADHD differed depending on task demand (i.e., for the simple task, increased power in the 0.027-0.074 Hz frequency band; for the complex task, decreased power in the 0.074-0.202 Hz band). The ex-Gaussian findings, revealing increased variability in both the normal (sigma) and exponential (tau) components for the ADHD group, suggest that both impaired response preparation and infrequent "lapses in attention" contribute to increased variability in ADHD. FFT analyses reveal that the periodicity of intermittent lapses of attention in ADHD varies with task demand. The findings provide further support for intra-individual variability as a candidate intermediate endophenotype of ADHD.
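The ex-Gaussian decomposition used in such analyses can be sketched with simulated RTs; the parameter values are invented, and the simple method-of-moments estimators shown here are only one of several fitting approaches:

```python
import numpy as np

rng = np.random.default_rng(6)
# An ex-Gaussian RT is a Gaussian (mu, sigma) plus an independent exponential (tau)
mu, sigma, tau = 400.0, 40.0, 120.0   # ms, illustrative
rt = rng.normal(mu, sigma, 20000) + rng.exponential(tau, 20000)

# Method of moments: mean = mu + tau, var = sigma^2 + tau^2, skewness = 2*tau^3 / var^1.5
m, v = rt.mean(), rt.var()
skew = np.mean((rt - m) ** 3) / v ** 1.5
tau_hat = (0.5 * skew) ** (1.0 / 3.0) * np.sqrt(v)
sigma_hat = np.sqrt(max(v - tau_hat ** 2, 0.0))
mu_hat = m - tau_hat

print(f"mu~{mu_hat:.0f}, sigma~{sigma_hat:.0f}, tau~{tau_hat:.0f}")
```

Elevated sigma_hat corresponds to the broadened normal component (response preparation), and elevated tau_hat to the infrequent slow responses ("lapses in attention") discussed above.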
Creation of current filaments in the solar corona
NASA Technical Reports Server (NTRS)
Mikic, Z.; Schnack, D. D.; Van Hoven, G.
1989-01-01
It has been suggested that the solar corona is heated by the dissipation of electric currents. The low value of the resistivity requires the magnetic field to have structure at very small length scales if this mechanism is to work. In this paper it is demonstrated that the coronal magnetic field acquires small-scale structure through the braiding produced by smooth, randomly phased, photospheric flows. The current density develops a filamentary structure and grows exponentially in time. Nonlinear processes in the ideal magnetohydrodynamic equations produce a cascade effect, in which the structure introduced by the flow at large length scales is transferred to smaller scales. If this process continues down to the resistive dissipation length scale, it would provide an effective mechanism for coronal heating.
Optimal digital dynamical decoupling for general decoherence via Walsh modulation
NASA Astrophysics Data System (ADS)
Qi, Haoyu; Dowling, Jonathan P.; Viola, Lorenza
2017-11-01
We provide a general framework for constructing digital dynamical decoupling sequences based on Walsh modulation—applicable to arbitrary qubit decoherence scenarios. By establishing equivalence between decoupling design based on Walsh functions and on concatenated projections, we identify a family of optimal Walsh sequences, which can be exponentially more efficient, in terms of the required total pulse number, for fixed cancellation order, than known digital sequences based on concatenated design. Optimal sequences for a given cancellation order are highly non-unique—their performance depending sensitively on the control path. We provide an analytic upper bound to the achievable decoupling error and show how sequences within the optimal Walsh family can substantially outperform concatenated decoupling in principle, while respecting realistic timing constraints.
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1972-01-01
A two- or three-constraint, two-dimensional radiation shield weight optimization procedure and a computer program, DOPEX, are described. The DOPEX code uses the steepest descent method to alter a set of initial (input) thicknesses for a shield configuration to achieve a minimum weight while simultaneously satisfying dose constraints. The code assumes an exponential dose-shield thickness relation with parameters specified by the user. The code also assumes that dose rates in each principal direction are dependent only on thicknesses in that direction. Code input instructions, a FORTRAN 4 listing, and a sample problem are given. Typical computer time required to optimize a seven-layer shield is about 0.1 minute on an IBM 7094-2.
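The assumed dose model can be sketched as follows, with invented attenuation parameters (this is not the DOPEX code itself): given the per-direction independence noted above and an exponential dose-thickness relation D(t) = D0 * exp(-mu*t), the thickness that just meets a dose constraint has a closed form:

```python
import numpy as np

d0 = np.array([50.0, 120.0, 80.0])   # unshielded dose rate per principal direction (illustrative)
mu = np.array([0.5, 0.45, 0.6])      # attenuation coefficient per direction (illustrative)
dose_limit = 5.0

# Minimum thickness in each direction satisfying D0 * exp(-mu * t) <= dose_limit
t_min = np.log(d0 / dose_limit) / mu

print(t_min)
print(d0 * np.exp(-mu * t_min))      # dose at t_min equals the limit in each direction
```

DOPEX then iterates with steepest descent from the input thicknesses toward the minimum-weight configuration consistent with such constraints.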
A guidance and navigation system for continuous low-thrust vehicles. M.S. Thesis
NASA Technical Reports Server (NTRS)
Jack-Chingtse, C.
1973-01-01
A midcourse guidance and navigation system for continuous low-thrust vehicles was developed. The equinoctial elements are the state variables. Uncertainties are modeled statistically by random vectors and stochastic processes. The motion of the vehicle and the measurements are described by nonlinear stochastic differential and difference equations, respectively. A minimum-time trajectory is defined; the equations of motion and measurements are linearized about this trajectory. An exponential cost criterion is constructed and a linear feedback guidance law is derived. An extended Kalman filter is used for state estimation. A short mission using this system is simulated. It is indicated that this system is efficient for short missions, but longer missions require accurate trajectory and ground-based measurements.
An investigation of self-subtraction holography in LiNbO3
NASA Technical Reports Server (NTRS)
Vahey, D. W.; Kenan, R. P.; Hartman, N. F.; Sherman, R. C.
1981-01-01
A sample having very promising self-subtraction characteristics was tested in depth: hologram formation times were on the order of 150 sec, the null signal was less than 2.5% of the peak signal, and no fatigue or instability was detected over the span of the experiments. Another sample, fabricated with at most slight modifications, did not perform nearly as well. In all samples, attempts to improve self-subtraction characteristics by various thermal treatments had no effect or adverse effects, with one exception in which improvement was noted after a time delay of several days. A theory developed to describe self-subtraction reproduced the observed decrease in beam intensity with time, but the shape of the predicted decay curve was oscillatory, in contrast to the exponential-like decay observed. The theory was also inadequate to account for the experimental sensitivity of self-subtraction to the Bragg angle of the hologram. It is concluded that self-subtraction is a viable method for optical processing systems requiring background discrimination.
Regularization techniques for backward-in-time evolutionary PDE problems
NASA Astrophysics Data System (ADS)
Gustafsson, Jonathan; Protas, Bartosz
2007-11-01
Backward-in-time evolutionary PDE problems have applications in recently proposed retrograde data assimilation. We consider the terminal value problem for the Kuramoto-Sivashinsky equation (KSE) in a 1D periodic domain as our model system. The KSE, proposed as a model for interfacial and combustion phenomena, is also often adopted as a toy model for hydrodynamic turbulence because of its multiscale and chaotic dynamics. Backward-in-time problems are typical examples of ill-posed problems, in which disturbances are amplified exponentially during the backward march. Regularization is required to solve such problems efficiently, and we consider approaches in which the original ill-posed problem is approximated by a less ill-posed problem obtained by adding a regularization term to the original equation. While such techniques are relatively well understood for linear problems, they are less understood in the present nonlinear setting. We consider regularization terms with fixed magnitudes and also explore a novel approach in which these magnitudes are adapted dynamically using simple concepts from control theory.
Functional brain connectivity is predictable from anatomic network's Laplacian eigen-structure.
Abdelnour, Farras; Dayan, Michael; Devinsky, Orrin; Thesen, Thomas; Raj, Ashish
2018-05-15
How structural connectivity (SC) gives rise to functional connectivity (FC) is not fully understood. Here we mathematically derive a simple relationship between SC measured from diffusion tensor imaging, and FC from resting state fMRI. We establish that SC and FC are related via (structural) Laplacian spectra, whereby FC and SC share eigenvectors and their eigenvalues are exponentially related. This gives, for the first time, a simple and analytical relationship between the graph spectra of structural and functional networks. Laplacian eigenvectors are shown to be good predictors of functional eigenvectors and networks based on independent component analysis of functional time series. A small number of Laplacian eigenmodes are shown to be sufficient to reconstruct FC matrices, serving as basis functions. This approach is fast, and requires no time-consuming simulations. It was tested on two empirical SC/FC datasets, and was found to significantly outperform generative model simulations of coupled neural masses. Copyright © 2018. Published by Elsevier Inc.
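A toy sketch of the stated relationship, on a random network with a made-up decay constant (this is not the authors' fitted model): the predicted FC shares the structural Laplacian's eigenvectors, with exponentially related eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20
# Random symmetric "structural connectivity" matrix with zero diagonal
W = rng.random((n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)

L = np.diag(W.sum(axis=1)) - W      # graph Laplacian of the structural network

lam, U = np.linalg.eigh(L)          # Laplacian spectrum (eigenvalues and eigenvectors)
beta = 0.1                          # illustrative coupling constant
FC_pred = U @ np.diag(np.exp(-beta * lam)) @ U.T   # FC with exponentially mapped eigenvalues

print(np.allclose(FC_pred, FC_pred.T))
```

By construction, FC_pred needs no time-series simulation; a truncated sum over the leading eigenmodes gives the low-rank reconstruction described above.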
Infinite capacity multi-server queue with second optional service channel
NASA Astrophysics Data System (ADS)
Ke, Jau-Chuan; Wu, Chia-Huang; Pearn, Wen Lea
2013-02-01
This paper deals with an infinite-capacity multi-server queueing system with a second optional service (SOS) channel. The inter-arrival times of arriving customers, the service times of the first essential service (FES) and the SOS channel are all exponentially distributed. A customer may leave the system after the FES channel with probability (1-θ), or at the completion of the FES may immediately require an SOS with probability θ (0 <= θ <= 1). The formulae for computing the rate matrix and stationary probabilities are derived by means of a matrix analytical approach. A cost model is developed to determine the optimal values of the number of servers and the two service rates, simultaneously, at the minimal total expected cost per unit time. The quasi-Newton method is employed to deal with the optimization problem. Under optimal operating conditions, numerical results are provided in which several system performance measures are calculated based on assumed numerical values of the system parameters.
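The per-customer service structure of the FES/SOS channel can be checked with a quick Monte Carlo sketch. This verifies only the mean service demand, 1/mu1 + theta/mu2, not the matrix-analytic queue solution, and all parameter values below are assumed:

```python
import random

def sos_service_time(rng, mu1, mu2, theta):
    """Total service demand of one customer: an exponential FES with rate
    mu1, followed with probability theta by an exponential SOS with rate mu2."""
    t = rng.expovariate(mu1)
    if rng.random() < theta:
        t += rng.expovariate(mu2)
    return t

rng = random.Random(42)
mu1, mu2, theta = 2.0, 1.0, 0.3      # assumed rates and SOS probability
n = 200_000
mean_sim = sum(sos_service_time(rng, mu1, mu2, theta) for _ in range(n)) / n
mean_theory = 1.0 / mu1 + theta / mu2  # = 0.8 for these parameters
```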
Quantification of deviations from rationality with heavy tails in human dynamics
NASA Astrophysics Data System (ADS)
Maillart, T.; Sornette, D.; Frei, S.; Duebendorfer, T.; Saichev, A.
2011-05-01
The dynamics of technological, economic and social phenomena is controlled by how humans organize their daily tasks in response to both endogenous and exogenous stimulations. Queueing theory is believed to provide a generic answer to account for the often observed power-law distributions of waiting times before a task is fulfilled. However, the general validity of the power law and the nature of other regimes remain unsettled. Using anonymized data collected by Google at the World Wide Web level, we identify the existence of several additional regimes characterizing the time required for a population of Internet users to execute a given task after receiving a message. Depending on the under- or over-utilization of time by the population of users and the strength of their response to perturbations, the pure power law is found to be coextensive with an exponential regime (tasks are performed without too much delay) and with a crossover to an asymptotic plateau (some tasks are never performed).
On the theoretical aspects of improved fog detection and prediction in India
NASA Astrophysics Data System (ADS)
Dey, Sagnik
2018-04-01
The polluted Indo-Gangetic Basin (IGB) in northern India experiences fog (a condition when visibility degrades below 1 km) every winter (Dec-Jan), causing massive economic losses and even loss of life due to accidents. This can be minimized by improved fog detection (especially at night) and forecasting so that activities can be reorganized accordingly. Satellites detect fog at night by a positive brightness temperature difference (BTD). However, fixing the right BTD threshold holds the key to accuracy. Here I demonstrate the sensitivity of BTD in response to changes in fog and surface emissivity and their temperatures and justify a new BTD threshold. Further, I quantify the dependence of critical fog droplet number concentration, NF (i.e. minimum fog concentration required to degrade visibility below 1 km) on liquid water content (LWC). NF decreases exponentially with an increase in LWC from 0.01 to 1 g/m3, beyond which it stabilizes. A 10 times low bias in simulated LWC below 1 g/m3 would require 107 times higher aerosol concentration to form the required number of fog droplets. These results provide the theoretical aspects that will help improve the existing fog detection algorithm and fog forecasting by numerical models in India.
Optimal iodine staining of cardiac tissue for X-ray computed tomography.
Butters, Timothy D; Castro, Simon J; Lowe, Tristan; Zhang, Yanmin; Lei, Ming; Withers, Philip J; Zhang, Henggui
2014-01-01
X-ray computed tomography (XCT) has been shown to be an effective imaging technique for a variety of materials. Due to the relatively low differential attenuation of X-rays in biological tissue, a high-density contrast agent is often required to obtain optimal contrast. The contrast agent, iodine potassium iodide (I2KI), has been used in several biological studies to augment the use of XCT scanning. Recently I2KI was used in XCT scans of animal hearts to study cardiac structure and to generate 3D anatomical computer models. However, to date there has been no thorough study into the optimal use of I2KI as a contrast agent in cardiac muscle with respect to the required staining times, which have been shown to significantly impact the quality of results. In this study we address this issue by systematically scanning samples at various stages of the staining process. To achieve this, mouse hearts were stained for up to 58 hours and scanned at regular intervals of 6-7 hours throughout this process. Optimal staining was found to depend upon the thickness of the tissue; a simple empirical exponential relationship was derived to allow calculation of the required staining time for cardiac samples of an arbitrary size.
NASA Astrophysics Data System (ADS)
Kutzbach, L.; Schneider, J.; Sachs, T.; Giebels, M.; Nykänen, H.; Shurpali, N. J.; Martikainen, P. J.; Alm, J.; Wilmking, M.
2007-11-01
Closed (non-steady state) chambers are widely used for quantifying carbon dioxide (CO2) fluxes between soils or low-stature canopies and the atmosphere. It is well recognised that covering a soil or vegetation by a closed chamber inherently disturbs the natural CO2 fluxes by altering the concentration gradients between the soil, the vegetation and the overlying air. Thus, the driving factors of CO2 fluxes are not constant during the closed chamber experiment, and no linear increase or decrease of CO2 concentration over time within the chamber headspace can be expected. Nevertheless, linear regression has been applied for calculating CO2 fluxes in many recent, partly influential, studies. This approach has been justified by keeping the closure time short and assuming the concentration change over time to be in the linear range. Here, we test if the application of linear regression is really appropriate for estimating CO2 fluxes using closed chambers over short closure times and if the application of nonlinear regression is necessary. We developed a nonlinear exponential regression model from diffusion and photosynthesis theory. This exponential model was tested with four different datasets of CO2 flux measurements (total number: 1764) conducted at three peatland sites in Finland and a tundra site in Siberia. Thorough analyses of residuals demonstrated that linear regression was frequently not appropriate for the determination of CO2 fluxes by closed-chamber methods, even if closure times were kept short. The developed exponential model was well suited for nonlinear regression of the concentration evolution c(t) in the chamber headspace and estimation of the initial CO2 fluxes at closure time for the majority of experiments. However, a rather large percentage of the exponential regression functions showed curvatures not consistent with the theoretical model, which is considered to be caused by violations of the underlying model assumptions.
In particular, turbulence and pressure disturbances caused by chamber deployment are suspected to have produced these unexplained curvatures. CO2 flux estimates by linear regression can be as low as 40% of the flux estimates of exponential regression for closure times of only two minutes. The degree of underestimation increased with increasing CO2 flux strength and was dependent on soil and vegetation conditions, which can disturb not only the quantitative but also the qualitative evaluation of CO2 flux dynamics. The underestimation effect by linear regression was observed to be different for CO2 uptake and release situations, which can lead to stronger bias in the daily, seasonal and annual CO2 balances than in the individual fluxes. To avoid serious bias of CO2 flux estimates based on closed chamber experiments, we suggest further tests using published datasets and recommend the use of nonlinear regression models for future closed chamber studies.
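The underestimation by linear regression can be reproduced with a small sketch of an exponential headspace model of the form c(t) = c_eq + (c0 - c_eq)*exp(-k*t); the parameter values below are assumed for illustration, not taken from the study's datasets:

```python
import math

def headspace_conc(t, c0=400.0, c_eq=1000.0, k=0.01):
    """Saturating headspace CO2 concentration (ppm) under a closed chamber,
    from the exponential model c(t) = c_eq + (c0 - c_eq)*exp(-k*t)."""
    return c_eq + (c0 - c_eq) * math.exp(-k * t)

def linear_slope(ts, cs):
    """Ordinary least-squares slope of cs against ts."""
    n = len(ts)
    tm, cm = sum(ts) / n, sum(cs) / n
    num = sum((t - tm) * (c - cm) for t, c in zip(ts, cs))
    den = sum((t - tm) ** 2 for t in ts)
    return num / den

ts = list(range(0, 121, 5))              # 2-minute closure, sampled every 5 s
cs = [headspace_conc(t) for t in ts]
true_initial = 0.01 * (1000.0 - 400.0)   # dc/dt at t=0 is k*(c_eq - c0) = 6 ppm/s
fitted = linear_slope(ts, cs)
underestimate = fitted / true_initial    # < 1: the linear fit biases the flux low
```

Even over this short closure, the straight-line slope recovers only a fraction of the true initial flux, matching the qualitative bias reported above.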
Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times
NASA Astrophysics Data System (ADS)
Fa, Kwok Sau
2012-09-01
In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power law and stretched exponential functions for the waiting time probability distribution function; this function can fit well the waiting time distribution for BUND futures traded at LIFFE in 1997.
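A minimal sketch of tick generation under a CTRW, using only a stretched-exponential component of the waiting-time density (a Weibull law, sampled by inverse transform). The full model in the paper mixes power laws and stretched exponentials; the parameters here are assumed:

```python
import math
import random

def stretched_exp_wait(rng, tau=1.0, beta=0.7):
    """Inverse-transform sample of a waiting time whose survival function
    is exp(-(t/tau)**beta), i.e. a Weibull law."""
    return tau * (-math.log(rng.random())) ** (1.0 / beta)

def ctrw_ticks(rng, horizon, tau=1.0, beta=0.7):
    """Event (tick) times of a continuous-time random walk up to `horizon`."""
    t, ticks = 0.0, []
    while True:
        t += stretched_exp_wait(rng, tau, beta)
        if t > horizon:
            return ticks
        ticks.append(t)

rng = random.Random(0)
ticks = ctrw_ticks(rng, horizon=1000.0)
# The mean Weibull wait is tau * Gamma(1 + 1/beta), so the long-run tick
# rate is roughly 1 / expected_mean_wait.
expected_mean_wait = math.gamma(1.0 + 1.0 / 0.7)
```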
Makrinich, Maria; Gupta, Rupal; Polenova, Tatyana; Goldbourt, Amir
The ability of various pulse types, which are commonly applied for distance measurements, to saturate or invert quadrupolar spin polarization has been compared by observing their effect on magnetization recovery curves under magic-angle spinning. A selective central transition inversion pulse yields a bi-exponential recovery for a diamagnetic sample with a spin-3/2, consistent with the existence of two processes: the fluctuations of the electric field gradients with identical single (W1) and double (W2) quantum quadrupolar-driven relaxation rates, and spin exchange between the central transition of one spin and satellite transitions of a dipolar-coupled similar spin. Using a phase modulated pulse, developed for distance measurements in quadrupolar spins (Nimerovsky et al., JMR 244, 2014, 107-113) and suggested for achieving the complete saturation of all quadrupolar spin energy levels, a mono-exponential relaxation model fits the data, compatible with elimination of the spin exchange processes. Other pulses such as an adiabatic pulse lasting one-third of a rotor period, and a two-rotor-period long continuous-wave pulse, both used for distance measurements under special experimental conditions, yield good fits to bi-exponential functions with varying coefficients and time constants due to variations in initial conditions. Those values are a measure of the extent of saturation obtained from these pulses. An empirical fit of the recovery curves to a stretched exponential function can provide general recovery times. A stretching parameter very close to unity, as obtained for a phase modulated pulse but not for other cases, suggests that in this case recovery times and longitudinal relaxation times are similar. The results are experimentally demonstrated for compounds containing 11B (spin-3/2) and 51V (spin-7/2).
We propose that accurate spin-lattice relaxation rates can be measured by a short phase modulated pulse (<1-2 ms), similarly to the "true T1" measured by saturation with an asynchronous pulse train (Yesinowski, JMR 252, 2015, 135-144). Copyright © 2017 Elsevier Inc. All rights reserved.
Belkić, Dzevad
2006-12-21
This study deals with the most challenging numerical aspect for solving the quantification problem in magnetic resonance spectroscopy (MRS). The primary goal is to investigate whether it could be feasible to carry out a rigorous computation within finite arithmetics to reconstruct exactly all the machine-accurate input spectral parameters of every resonance from a synthesized noiseless time signal. We also consider simulated time signals embedded in random Gaussian distributed noise of the level comparable to the weakest resonances in the corresponding spectrum. The present choice for this high-resolution task in MRS is the fast Padé transform (FPT). All the sought spectral parameters (complex frequencies and amplitudes) can unequivocally be reconstructed from a given input time signal by using the FPT. Moreover, the present computations demonstrate that the FPT can achieve the spectral convergence, which represents the exponential convergence rate as a function of the signal length for a fixed bandwidth. Such an extraordinary feature equips the FPT with the exemplary high-resolution capabilities that are, in fact, theoretically unlimited. This is illustrated in the present study by the exact reconstruction (within machine accuracy) of all the spectral parameters from an input time signal comprised of 25 harmonics, i.e. complex damped exponentials, including those for tightly overlapped and nearly degenerate resonances whose chemical shifts differ by an exceedingly small fraction of only 10^(-11) ppm. Moreover, without exhausting even a quarter of the full signal length, the FPT is shown to retrieve exactly all the input spectral parameters defined with 12 digits of accuracy. Specifically, we demonstrate that when the FPT is close to the convergence region, an unprecedented phase transition occurs, since literally a few additional signal points are sufficient to reach the full 12 digit accuracy with the exponentially fast rate of convergence.
This is the critical proof-of-principle for the high-resolution power of the FPT for machine accurate input data. Furthermore, it is proven that the FPT is also a highly reliable method for quantifying noise-corrupted time signals reminiscent of those encoded via MRS in clinical neuro-diagnostics.
NASA Astrophysics Data System (ADS)
Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim
2018-01-01
The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
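The exponentially weighted recursive estimation at the core of the scheme can be sketched with a generic exponentially weighted recursive least squares update. This is not the kernelized TARMA formulation itself; the forgetting factor and the static test system below are assumed:

```python
def ewrls_update(w, P, x, y, lam=0.98):
    """One exponentially weighted recursive least squares step.
    w: parameter vector (list), P: inverse-correlation matrix (list of lists),
    x: regressor vector, y: scalar observation, lam: forgetting factor."""
    n = len(w)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    K = [v / denom for v in Px]                      # gain vector
    err = y - sum(w[i] * x[i] for i in range(n))     # a priori error
    w = [w[i] + K[i] * err for i in range(n)]
    P = [[(P[i][j] - K[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return w, P

# Sanity check: track the static (noiseless) system y = 2 + 3*x1
w = [0.0, 0.0]
P = [[1000.0, 0.0], [0.0, 1000.0]]
for t in range(200):
    x = [1.0, (t % 10) / 10.0]
    y = 2.0 + 3.0 * x[1]
    w, P = ewrls_update(w, P, x, y)
```

In the time-varying case, the forgetting factor lam < 1 discounts old data exponentially, which is what allows the estimator to track drifting parameters.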
Real-time soil sensing based on fiber optics and spectroscopy
NASA Astrophysics Data System (ADS)
Li, Minzan
2005-08-01
Using NIR spectroscopic techniques, correlation analysis and regression analysis for soil parameter estimation were conducted with raw soil samples collected in a cornfield and a forage field. Soil parameters analyzed were soil moisture, soil organic matter, nitrate nitrogen, soil electrical conductivity and pH. Results showed that all soil parameters could be evaluated by NIR spectral reflectance. For soil moisture, a linear regression model was available at low moisture contents below 30 % db, while an exponential model can be used in a wide range of moisture content up to 100 % db. Nitrate nitrogen estimation required a multi-spectral exponential model and electrical conductivity could be evaluated by a single spectral regression. Based on these results, a real-time soil sensor system based on fiber optics and spectroscopy was developed. The sensor system was composed of a soil subsoiler with four optical fiber probes, a spectrometer, and a control unit. Two optical fiber probes were used for illumination and the other two optical fiber probes for collecting soil reflectance from visible to NIR wavebands at depths around 30 cm. The spectrometer was used to obtain the spectra of reflected light. The control unit consisted of a data logging device, a personal computer, and a pulse generator. The experiment showed that clear photo-spectral reflectance was obtained from the underground soil. The soil reflectance was equal to that obtained by the desktop spectrophotometer in laboratory tests. Using the spectral reflectance, the soil parameters, such as soil moisture, pH, EC and SOM, were evaluated.
NASA Astrophysics Data System (ADS)
Weiss, J. R.; Saunders, A.; Qiu, Q.; Foster, J. H.; Gomez, D.; Bevis, M. G.; Smalley, R., Jr.; Cimbaro, S.; Lenzano, L. E.; Barón, J.; Baez, J. C.; Echalar, A.; Avery, J.; Wright, T. J.
2017-12-01
We use a large regional network of continuous GPS sites to investigate postseismic deformation following the Mw 8.8 Maule and Mw 8.1 Pisagua earthquakes in Chile. Geodetic observations of surface displacements associated with megathrust earthquakes aid our understanding of the subduction zone earthquake cycle including postseismic processes such as afterslip and viscoelastic relaxation. The observations also help place constraints on the rheology and structure of the crust and upper mantle. We first empirically model the data and find that, while single-term logarithmic functions adequately fit the postseismic timeseries, they do a poor job of characterizing the rapid displacements in the days to weeks following the earthquakes. Combined exponential-logarithmic functions better capture the inferred near-field transition between afterslip and viscous relaxation, however displacements are best fit by three-term exponential functions with characteristic decay times of 15, 250, and 1500 days. Viscoelastic modeling of the velocity field and timeseries following the Maule earthquake suggests that the rheology is complex but is consistent with a 100-km-thick asthenosphere channel of viscosity 1018 Pa s sandwiched between a 40-km-thick elastic lid and a strong viscoelastic upper mantle. Variations in lid thickness of up to 40 km may be present and in some locations rapid deformation within the first months to years following the Maule event requires an even lower effective viscosity or a significant contribution from afterslip. We investigate this further by jointly inverting the GPS data for the time evolution of afterslip and viscous flow in the mantle wedge surrounding the Maule event.
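With the decay times fixed at 15, 250, and 1500 days, the three-term exponential amplitudes enter linearly and can be recovered by ordinary least squares. The sketch below uses an assumed cumulative form d(t) = sum_i a_i*(1 - exp(-t/tau_i)) and synthetic amplitudes; it is an illustration of the fitting step, not the study's actual inversion:

```python
import math

DECAYS = (15.0, 250.0, 1500.0)   # characteristic decay times (days) from the abstract

def basis(t):
    """Cumulative exponential basis functions (assumed functional form)."""
    return [1.0 - math.exp(-t / tau) for tau in DECAYS]

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    A = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * ai for a, ai in zip(A[r], A[i])]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (A[i][3] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

def fit_amplitudes(ts, ds):
    """With decay times fixed, the amplitudes are linear parameters, so the
    normal equations of ordinary least squares recover them directly."""
    G = [basis(t) for t in ts]
    A = [[sum(g[i] * g[j] for g in G) for j in range(3)] for i in range(3)]
    b = [sum(g[i] * d for g, d in zip(G, ds)) for i in range(3)]
    return solve3(A, b)

# Synthetic displacement series with known amplitudes (mm), noiseless
true_amp = [12.0, 30.0, 8.0]
ts = [float(t) for t in range(1, 2000, 10)]
ds = [sum(a * f for a, f in zip(true_amp, basis(t))) for t in ts]
est = fit_amplitudes(ts, ds)
```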
NASA Astrophysics Data System (ADS)
Hardy, Neil; Dvir, Hila; Fenton, Flavio
Existing pacemakers consider the rectangular pulse to be the optimal form of stimulation current. However, other waveforms could save pacemaker energy while still stimulating the heart. We aim to find the optimal waveform for pacemaker use, and to offer a theoretical explanation for its advantage. Since the pacemaker battery is a charge source, here we probe the stimulation current waveforms with respect to the total charge delivery. In this talk we present theoretical analysis and numerical simulations of myocyte ion-channel currents acting as an additional source of charge that adds to the external stimulating charge for stimulation purposes. Therefore, we find that as the action potential emerges, the external stimulating current can accordingly be reduced exponentially. We then performed experimental studies in rabbit and cat hearts and showed that indeed exponential truncated pulses with less total charge can still induce activation in the heart. From the experiments, we present curves showing the savings in charge as a function of exponential waveform and we calculated that the longevity of the pacemaker battery would be ten times higher for the exponential current compared to the rectangular waveforms. Thanks to Petit Undergraduate Research Scholars Program and NSF# 1413037.
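The charge saving of a truncated exponential pulse over a rectangular pulse of the same peak current and width follows from integrating i(t) over the pulse. The pulse parameters below are hypothetical, chosen only to illustrate the bookkeeping, and are not the experimental values:

```python
import math

def rect_charge(i0, width):
    """Charge of a rectangular pulse of amplitude i0 and duration width."""
    return i0 * width

def exp_truncated_charge(i0, width, tau):
    """Charge of a truncated exponential pulse i(t) = i0*exp(-t/tau)
    for 0 <= t <= width: integral = i0*tau*(1 - exp(-width/tau))."""
    return i0 * tau * (1.0 - math.exp(-width / tau))

# Hypothetical pulse parameters, for illustration only
i0, width = 2.0, 2.0   # peak current (mA) and pulse width (ms)
tau = 0.25             # decay constant (ms)
q_rect = rect_charge(i0, width)
q_exp = exp_truncated_charge(i0, width, tau)
savings = q_rect / q_exp   # charge-per-pulse ratio, rectangular vs exponential
```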
The shock waves in decaying supersonic turbulence
NASA Astrophysics Data System (ADS)
Smith, M. D.; Mac Low, M.-M.; Zuev, J. M.
2000-04-01
We here analyse numerical simulations of supersonic, hypersonic and magnetohydrodynamic turbulence that is free to decay. Our goals are to understand the dynamics of the decay and the characteristic properties of the shock waves produced. This will be useful for interpretation of observations of both motions in molecular clouds and sources of non-thermal radiation. We find that decaying hypersonic turbulence possesses an exponential tail of fast shocks and an exponential decay in time, i.e. the number of shocks is proportional to t exp(-ktv) for shock velocity jump v and mean initial wavenumber k. In contrast to the velocity gradients, the velocity probability distribution function remains Gaussian with a more complex decay law. The energy is dissipated not by fast shocks but by a large number of low Mach number shocks. The power loss peaks near a low-speed turn-over in an exponential distribution. An analytical extension of the mapping closure technique is able to predict the basic decay features. Our analytic description of the distribution of shock strengths should prove useful for direct modeling of observable emission. We note that an exponential distribution of shocks such as we find will, in general, generate very low excitation shock signatures.
Hamed, Kaveh Akbari; Gregg, Robert D
2016-07-01
This paper presents a systematic algorithm to design time-invariant decentralized feedback controllers to exponentially stabilize periodic orbits for a class of hybrid dynamical systems arising from bipedal walking. The algorithm assumes a class of parameterized and nonlinear decentralized feedback controllers which coordinate lower-dimensional hybrid subsystems based on a common phasing variable. The exponential stabilization problem is translated into an iterative sequence of optimization problems involving bilinear and linear matrix inequalities, which can be easily solved with available software packages. A set of sufficient conditions for the convergence of the iterative algorithm to a stabilizing decentralized feedback control solution is presented. The power of the algorithm is demonstrated by designing a set of local nonlinear controllers that cooperatively produce stable walking for a 3D autonomous biped with 9 degrees of freedom, 3 degrees of underactuation, and a decentralization scheme motivated by amputee locomotion with a transpelvic prosthetic leg.
2D motility tracking of Pseudomonas putida KT2440 in growth phases using video microscopy
Davis, Michael L.; Mounteer, Leslie C.; Stevens, Lindsey K.; Miller, Charles D.; Zhou, Anhong
2011-01-01
Pseudomonas putida KT2440 is a gram negative motile soil bacterium important in bioremediation and biotechnology. Thus, it is important to understand its motility characteristics as individuals and in populations. Population characteristics were determined using a modified Gompertz model. Video microscopy and imaging software were utilized to analyze two dimensional (2D) bacteria movement tracks to quantify individual bacteria behavior. It was determined that lag time increased as seeding density decreased, and that maximum specific growth rate decreased as seeding density increased. Average bacterial velocity remained relatively similar throughout exponential growth phase (~20.9 µm/sec), while maximum velocities peak early in exponential growth phase at a velocity of 51.2 µm/sec. Pseudomonas putida KT2440 cells also favor smaller turn angles, indicating they often continue in the same direction after a change in flagella rotation throughout the exponential growth phase. PMID:21334971
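The modified Gompertz model is commonly written in the Zwietering reparameterization, with asymptote A, maximum specific growth rate mu_m, and lag time lambda; whether the study used exactly this variant is an assumption. A minimal sketch with illustrative parameters:

```python
import math

def gompertz(t, A, mu_m, lam):
    """Modified (Zwietering-form) Gompertz growth curve:
    A = asymptote, mu_m = maximum specific growth rate, lam = lag time.
    y(t) = A * exp(-exp(mu_m*e/A * (lam - t) + 1))."""
    return A * math.exp(-math.exp(mu_m * math.e / A * (lam - t) + 1.0))

A, mu_m, lam = 1.0, 0.5, 4.0    # illustrative parameters only
early = gompertz(1.0, A, mu_m, lam)   # well inside the lag phase: ~0
late = gompertz(30.0, A, mu_m, lam)   # long after the lag: ~A
# At t = lam the curve sits at A*exp(-e), the conventional lag-time marker.
```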
Bounding entanglement spreading after a local quench
NASA Astrophysics Data System (ADS)
Drumond, Raphael C.; Móller, Natália S.
2017-06-01
We consider the variation of von Neumann entropy of subsystem reduced states of general many-body lattice spin systems due to local quantum quenches. We obtain Lieb-Robinson-like bounds that are independent of the subsystem volume. The main assumptions are that the Hamiltonian satisfies a Lieb-Robinson bound and that the volume of spheres on the lattice grows at most exponentially with their radius. More specifically, the bound exponentially increases with time but exponentially decreases with the distance between the subsystem and the region where the quench takes place. The fact that the bound is independent of the subsystem volume leads to stronger constraints (than previously known) on the propagation of information throughout many-body systems. In particular, it shows that bipartite entanglement satisfies an effective "light cone," regardless of system size. Further implications for density-matrix renormalization-group simulations of quantum spin chains and limitations to the propagation of information are discussed.
Forecasting Financial Extremes: A Network Degree Measure of Super-Exponential Growth.
Yan, Wanfeng; van Tuyll van Serooskerken, Edgar
2015-01-01
Investors in stock market are usually greedy during bull markets and scared during bear markets. The greed or fear spreads across investors quickly. This is known as the herding effect, and often leads to a fast movement of stock prices. During such market regimes, stock prices change at a super-exponential rate and are normally followed by a trend reversal that corrects the previous overreaction. In this paper, we construct an indicator to measure the magnitude of the super-exponential growth of stock prices, by measuring the degree of the price network, generated from the price time series. Twelve major international stock indices have been investigated. Error diagram tests show that this new indicator has strong predictive power for financial extremes, both peaks and troughs. By varying the parameters used to construct the error diagram, we show the predictive power is very robust. The new indicator has a better performance than the LPPL pattern recognition indicator.
A Probabilistic, Dynamic, and Attribute-wise Model of Intertemporal Choice
Dai, Junyi; Busemeyer, Jerome R.
2014-01-01
Most theoretical and empirical research on intertemporal choice assumes a deterministic and static perspective, leading to the widely adopted delay discounting models. As a form of preferential choice, however, intertemporal choice may be generated by a stochastic process that requires some deliberation time to reach a decision. We conducted three experiments to investigate how choice and decision time varied as a function of manipulations designed to examine the delay duration effect, the common difference effect, and the magnitude effect in intertemporal choice. The results, especially those associated with the delay duration effect, challenged the traditional deterministic and static view and called for alternative approaches. Consequently, various static or dynamic stochastic choice models were explored and fit to the choice data, including alternative-wise models derived from the traditional exponential or hyperbolic discount function and attribute-wise models built upon comparisons of direct or relative differences in money and delay. Furthermore, for the first time, dynamic diffusion models, such as those based on decision field theory, were also fit to the choice and response time data simultaneously. The results revealed that the attribute-wise diffusion model with direct differences, power transformations of objective value and time, and varied diffusion parameter performed the best and could account for all three intertemporal effects. In addition, the empirical relationship between choice proportions and response times was consistent with the prediction of diffusion models and thus favored a stochastic choice process for intertemporal choice that requires some deliberation time to make a decision. PMID:24635188
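A generic drift-diffusion trial, in which noisy evidence accumulates to one of two boundaries and yields both a choice and a response time, can be simulated as below. This is the standard two-boundary diffusion model, not the fitted attribute-wise variant from the study, and all parameters are assumed:

```python
import random

def ddm_trial(rng, drift, threshold=1.0, dt=0.001, noise=1.0):
    """One drift-diffusion trial: Euler-Maruyama accumulation of evidence
    until it crosses +threshold or -threshold.
    Returns (choice, response_time) with choice 1 for the upper boundary."""
    x, t = 0.0, 0.0
    sd = noise * dt ** 0.5
    while abs(x) < threshold:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return (1 if x > 0 else 0), t

rng = random.Random(7)
trials = [ddm_trial(rng, drift=0.8) for _ in range(2000)]
p_upper = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
# Closed-form checks for this parameterization: P(upper) = 1/(1+exp(-2*a*mu))
# ~ 0.83 and mean RT = (a/mu)*tanh(a*mu) ~ 0.83 s (a=1, mu=0.8, sigma=1).
```

The joint prediction of choice proportions and response times is what lets such models account for the relationship between the two, as the abstract emphasizes.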
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2018-02-01
We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
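Simple exponential smoothing, one of the benchmarked methods, reduces to a one-line recursion whose flat multi-step forecast is the final smoothed level. A minimal sketch with an assumed series and smoothing parameter:

```python
def simple_exp_smoothing(series, alpha):
    """Simple exponential smoothing: level = alpha*y + (1-alpha)*level.
    Returns the final level, which is also the flat multi-step forecast."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1.0 - alpha) * level
    return level

history = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0]   # assumed toy series
forecast = simple_exp_smoothing(history, alpha=0.3)
# alpha controls how quickly old observations are discounted exponentially;
# alpha = 1 reduces to the naive last-value forecast.
```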
A Fourier method for the analysis of exponential decay curves.
Provencher, S W
1976-01-01
A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involves only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.
Stretched exponential dynamics of coupled logistic maps on a small-world network
NASA Astrophysics Data System (ADS)
Mahajan, Ashwini V.; Gade, Prashant M.
2018-02-01
We investigate the dynamic phase transition from a partially or fully arrested state to spatiotemporal chaos in coupled logistic maps on a small-world network. Persistence of local variables in a coarse-grained sense acts as an excellent order parameter to study this transition. We investigate the phase diagram by varying the coupling strength and the small-world rewiring probability p of nonlocal connections. The persistent region is a compact region bounded by two critical lines where a band-merging crisis occurs. On one critical line, the fraction of persistent sites shows a nonexponential (stretched exponential) decay for all p, while on the other it shows a crossover from nonexponential to exponential behavior as p → 1. With an effectively antiferromagnetic coupling, coupling to two neighbors on either side leads to exchange frustration. Apart from exchange frustration, the non-bipartite topology and nonlocal couplings of a small-world network could be a reason for the anomalous relaxation. The distribution of trap times in the asymptotic regime has a long tail as well. The dependence of the temporal evolution of persistence on initial conditions is studied, and a scaling form for persistence after a waiting time is proposed. We present a simple possible model for this behavior.
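The two relaxation laws contrasted in the abstract can be compared directly. The sketch below evaluates a stretched exponential exp(-(t/tau)^beta) with 0 < beta < 1 against a plain exponential; tau and beta are illustrative assumptions.

```python
import math

# Stretched exponential versus plain exponential relaxation, the two
# decay laws of the persistent fraction discussed in the abstract.
# tau and beta are illustrative assumptions.
def stretched(t, tau=10.0, beta=0.5):
    return math.exp(-((t / tau) ** beta))

def plain(t, tau=10.0):
    return math.exp(-t / tau)

# the stretched exponential has a much heavier tail at long times
print(stretched(100.0), plain(100.0))
```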
Acetylcholine-induced current in perfused rat myoballs
1980-01-01
Spherical "myoballs" were grown under tissue culture conditions from striated muscle of neonatal rat thighs. The myoballs were examined electrophysiologically with a suction pipette which was used to pass current and perfuse internally. A microelectrode was used to record membrane potential. Experiments were performed with approximately symmetrical (intracellular and extracellular) sodium aspartate solutions. The resting potential, acetylcholine (ACh) reversal potential, and sodium channel reversal potential were all approximately 0 mV. ACh-induced currents were examined by use of both voltage jumps and voltage ramps in the presence of iontophoretically applied agonist. The voltage-jump relaxations had a single exponential time-course. The time constant, tau, was exponentially related to membrane potential, increasing e-fold for 81 mV hyperpolarization. The equilibrium current-voltage relationship was also approximately exponential, from -120 to +81 mV, increasing e-fold for 104 mV hyperpolarization. The data are consistent with a first-order gating process in which the channel opening rate constant is slightly voltage dependent. The instantaneous current-voltage relationship was sublinear in the hyperpolarizing direction. Several models are discussed which can account for the nonlinearity. Evidence is presented that the "selectivity filter" for the ACh channel is located near the intracellular membrane surface. PMID:7381423
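The reported voltage dependence of the relaxation time constant (e-fold increase per 81 mV of hyperpolarization) can be written down directly. In the sketch below, the reference time constant tau0 and reference potential V0 are illustrative assumptions; only the 81 mV e-fold factor comes from the abstract.

```python
import math

# Sketch of the voltage dependence of the ACh relaxation time constant:
# tau grows e-fold for every 81 mV of hyperpolarization. tau0 and v0 are
# illustrative assumptions, not measured values.
def tau_ms(v_mV, tau0=5.0, v0=0.0, e_fold_mV=81.0):
    return tau0 * math.exp(-(v_mV - v0) / e_fold_mV)

print(tau_ms(-81.0) / tau_ms(0.0))  # one e-fold of hyperpolarization: ratio ~ e
```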
An exponentiation method for XML element retrieval.
Wichaiwong, Tanakorn
2014-01-01
XML documents are now widely used for modelling and storing structured documents. The structure is rich and carries important information about contents and their relationships, for example in e-commerce. XML data-centric collections require query terms allowing users to specify constraints on the document structure; mapping structural queries and assigning weights are significant for determining the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. The structural information has been shown to improve the effectiveness of the search system by up to 52.60% in MAP over the BM25 baseline.
McKellar, Robin C
2008-01-15
Developing accurate mathematical models to describe the pre-exponential lag phase in food-borne pathogens presents a considerable challenge to food microbiologists. While the growth rate is influenced by current environmental conditions, the lag phase is affected in addition by the history of the inoculum. A deeper understanding of physiological changes taking place during the lag phase would improve accuracy of models, and in earlier studies a strain of Pseudomonas fluorescens containing the Tn7-luxCDABE gene cassette regulated by the rRNA promoter rrnB P2 was used to measure the influence of starvation, growth temperature and sub-lethal heating on promoter expression and subsequent growth. The present study expands the models developed earlier to include a model which describes the change from exponential to linear increase in promoter expression with time when the exponential phase of growth commences. A two-phase linear model with Poisson weighting was used to estimate the lag (LPDLin) and the rate (RLin) for this linear increase in bioluminescence. The Spearman rank correlation coefficient (r=0.830) between the LPDLin and the growth lag phase (LPDOD) was extremely significant (P
2013-01-01
Background An inverse relationship between experience and risk of injury has been observed in many occupations. Due to statistical challenges, however, it has been difficult to characterize the role of experience on the hazard of injury. In particular, because the time observed up to injury is equivalent to the amount of experience accumulated, the baseline hazard of injury becomes the main parameter of interest, excluding Cox proportional hazards models as applicable methods for consideration. Methods Using a data set of 81,301 hourly production workers of a global aluminum company at 207 US facilities, we compared competing parametric models for the baseline hazard to assess whether experience affected the hazard of injury at hire and after later job changes. Specific models considered included the exponential, Weibull, and two (a hypothesis-driven and a data-driven) two-piece exponential models to formally test the null hypothesis that experience does not impact the hazard of injury. Results We highlighted the advantages of our comparative approach and the interpretability of our selected model: a two-piece exponential model that allowed the baseline hazard of injury to change with experience. Our findings suggested a 30% increase in the hazard in the first year after job initiation and/or change. Conclusions Piecewise exponential models may be particularly useful in modeling risk of injury as a function of experience and have the additional benefit of interpretability over other similarly flexible models. PMID:23841648
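The selected model can be sketched concretely: a two-piece exponential model has a constant hazard in each experience interval, so the survival function is the exponential of a piecewise-linear cumulative hazard. The hazard values below are illustrative assumptions chosen so that the first-year hazard is 30% higher, matching the effect size reported in the abstract.

```python
import math

# Sketch of a two-piece exponential survival model: constant hazard h1
# during the first year of experience and h2 afterwards. h1 and h2 are
# illustrative assumptions (h1 is 30% above h2, per the reported effect).
def survival(t_years, h1=0.13, h2=0.10, cut=1.0):
    """S(t) = exp(-cumulative hazard) for a two-piece exponential model."""
    if t_years <= cut:
        cum = h1 * t_years
    else:
        cum = h1 * cut + h2 * (t_years - cut)
    return math.exp(-cum)

print(survival(0.5), survival(2.0))
```

Setting h1 = h2 recovers the plain exponential model, which is exactly the null hypothesis the authors test.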
Tang, Ze; Park, Ju H; Feng, Jianwen
2018-04-01
This paper is concerned with the exponential synchronization issue of nonidentically coupled neural networks with time-varying delay. Due to the parameter mismatch phenomena existed in neural networks, the problem of quasi-synchronization is thus discussed by applying some impulsive control strategies. Based on the definition of average impulsive interval and the extended comparison principle for impulsive systems, some criteria for achieving the quasi-synchronization of neural networks are derived. More extensive ranges of impulsive effects are discussed so that impulse could either play an effective role or play an adverse role in the final network synchronization. In addition, according to the extended formula for the variation of parameters with time-varying delay, precisely exponential convergence rates and quasi-synchronization errors are obtained, respectively, in view of different types impulsive effects. Finally, some numerical simulations with different types of impulsive effects are presented to illustrate the effectiveness of theoretical analysis.
NASA Astrophysics Data System (ADS)
Sabzikar, Farzad; Meerschaert, Mark M.; Chen, Jinghua
2015-07-01
Fractional derivatives and integrals are convolutions with a power law. Multiplying by an exponential factor leads to tempered fractional derivatives and integrals. Tempered fractional diffusion equations, where the usual second derivative in space is replaced by a tempered fractional derivative, govern the limits of random walk models with an exponentially tempered power law jump distribution. The limiting tempered stable probability densities exhibit semi-heavy tails, which are commonly observed in finance. Tempered power law waiting times lead to tempered fractional time derivatives, which have proven useful in geophysics. The tempered fractional derivative or integral of a Brownian motion, called a tempered fractional Brownian motion, can exhibit semi-long range dependence. The increments of this process, called tempered fractional Gaussian noise, provide a useful new stochastic model for wind speed data. A tempered fractional difference forms the basis for numerical methods to solve tempered fractional diffusion equations, and it also provides a useful new correlation model in time series.
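The tempered fractional difference mentioned at the end of the abstract is commonly discretized with tempered Grünwald-Letnikov weights: the standard recursive Grünwald weights multiplied by an exponential tempering factor. The sketch below is one such construction; the values of alpha, lambda and the step h are illustrative assumptions.

```python
import math

# Sketch of tempered Grunwald-Letnikov weights, one common basis for the
# tempered fractional difference. The recursion w_j = w_{j-1}*(j-1-alpha)/j
# gives the plain Grunwald weights; an exponential factor exp(-lam*h*j)
# applies the tempering. alpha, lam and h are illustrative assumptions.
def tempered_gl_weights(alpha, lam, h, n):
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (j - 1 - alpha) / j)
    return [wj * math.exp(-lam * h * j) for j, wj in enumerate(w)]

weights = tempered_gl_weights(alpha=0.8, lam=0.5, h=0.1, n=6)
print(weights)
```

With lam = 0 the tempering disappears and the ordinary fractional difference weights are recovered.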
Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook
2015-01-01
Discrete survival data are routinely encountered in many fields of study including behavior science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model by treating discrete survival times as continuous survival times, and the model comparison criteria, AIC and BIC, in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in detail to further demonstrate the proposed methodology. PMID:25772374
ERIC Educational Resources Information Center
Miller, Cynthia Susan
2009-01-01
The study sought to determine if increased technology use affects the free-time choices of students. While technology options have grown exponentially, time remains a fixed commodity. Therefore, it is suggested that students who increasingly use technology must draw time from more traditional childhood activities. Students' free-time activities…
Discrete sudden perturbation theory for inelastic scattering. I. Quantum and semiclassical treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cross, R.J.
1985-12-01
A double perturbation theory is constructed to treat rotationally and vibrationally inelastic scattering. It uses both the elastic scattering from the spherically averaged potential and the infinite-order sudden (IOS) approximation as the unperturbed solutions. First, a standard perturbation expansion is done to express the radial wave functions in terms of the elastic wave functions. The resulting coupled equations are transformed to the discrete-variable representation, where the IOS equations are diagonal. Then, the IOS solutions are removed from the equations, which are solved by an exponential perturbation approximation. The results for Ar+N2 are very much more accurate than the IOS and somewhat more accurate than a straight first-order exponential perturbation theory. The theory is then converted into a semiclassical, time-dependent form by using the WKB approximation. The result is an integral of the potential times a slowly oscillating factor over the classical trajectory. A method of interpolating the result is given so that the calculation is done at the average velocity for a given transition. With this procedure, the semiclassical version of the theory is more accurate than the quantum version and very much faster. Calculations on Ar+N2 show the theory to be much more accurate than the infinite-order sudden (IOS) approximation and the exponential time-dependent perturbation theory.
Stoch, G; Ylinen, E E; Birczynski, A; Lalowicz, Z T; Góra-Marek, K; Punkkinen, M
2013-02-01
A new method is introduced for analyzing deuteron spin-lattice relaxation in molecular systems with a broad distribution of activation energies and correlation times. In such samples the magnetization recovery is strongly non-exponential but can be fitted quite accurately by three exponentials. The considered system may consist of molecular groups with different mobility. For each group a Gaussian distribution of the activation energy is introduced. By assuming for every subsystem three parameters: the mean activation energy E(0), the distribution width σ and the pre-exponential factor τ(0) for the Arrhenius equation defining the correlation time, the relaxation rate is calculated for every part of the distribution. Experiment-based limiting values allow the grouping of the rates into three classes. For each class the relaxation rate and weight is calculated and compared with experiment. The parameters E(0), σ and τ(0) are determined iteratively by repeating the whole cycle many times. The temperature dependence of the deuteron relaxation was observed in three samples containing CD(3)OH (200% and 100% loading) and CD(3)OD (200%) in NaX zeolite and analyzed by the described method between 20K and 170K. The obtained parameters, equal for all the three samples, characterize the methyl and hydroxyl mobilities of the methanol molecules at two different locations. Copyright © 2012 Elsevier Inc. All rights reserved.
Vibronic relaxation dynamics of o-dichlorobenzene in its lowest excited singlet state
NASA Astrophysics Data System (ADS)
Liu, Benkang; Zhao, Haiyan; Lin, Xiang; Li, Xinxin; Gao, Mengmeng; Wang, Li; Wang, Wei
2018-01-01
Vibronic dynamics of o-dichlorobenzene in its lowest excited singlet state, S1, is investigated in real time by using the femtosecond pump-probe method, combined with time-of-flight mass spectrometry and the photoelectron velocity mapping technique. Relaxation processes for excitation in the range of 276-252 nm can be fitted by a single-exponential decay model, while for wavelengths shorter than 252 nm a two-exponential decay model must be adopted to simulate the transient profiles. Lifetime constants of the vibrationally excited S1 states change from 651 ± 10 ps for 276 nm excitation to 61 ± 1 ps for 242 nm excitation. Both the internal conversion from the S1 to the highly vibrationally excited ground state S0 and the intersystem crossing from the S1 to the triplet state are supposed to play important roles in the de-excitation processes. An exponential fit of the de-excitation rates as a function of excitation energy implies that the de-excitation process starts from the highly vibrationally excited S0 state, which is validated by probing the relaxation following photoexcitation at 281 nm, below the S1 origin. Time-dependent photoelectron kinetic energy distributions have been obtained experimentally. As the excitation wavelength changes from 276 nm to 242 nm, different cationic vibronic vibrations can be populated, determined by the Franck-Condon factors between the geometrically distorted excited singlet states and the final cationic states.
Evo-SETI SCALE to measure Life on Exoplanets
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2016-04-01
Darwinian Evolution over the last 3.5 billion years was an increase in the number of living species from 1 (RNA?) to the current 50 million. This increasing trend in time looks exponential, but one may not assume an exactly exponential curve since many species went extinct in the past, even in mass extinctions. Thus, the simple exponential curve must be replaced by a stochastic process having an exponential mean value. Borrowing from financial mathematics ("Black-Scholes models"), this "exponential" stochastic process is called Geometric Brownian Motion (GBM), and its probability density function (pdf) is a lognormal (not a Gaussian) (Proof: see ref. Maccone [3], Chapter 30, and ref. Maccone [4]). Lognormal also is the pdf of the statistical number of communicating ExtraTerrestrial (ET) civilizations in the Galaxy at a certain fixed time, like a snapshot: this result was found in 2008 by this author as his solution to the Statistical Drake Equation of SETI (Proof: see ref. Maccone [1]). Thus, the GBM of Darwinian Evolution may also be regarded as the extension in time of the Statistical Drake equation (Proof: see ref. Maccone [4]). But the key step ahead made by this author in his Evo-SETI (Evolution and SETI) mathematical model was to realize that LIFE also is just a b-lognormal in time: every living organism (a cell, a human, a civilization, even an ET civilization) is born at a certain time b ("birth"), grows up to a peak p (with an ascending inflexion point in between, a for adolescence), then declines from p to s (senility, i.e. descending inflexion point) and finally declines linearly and dies at a final instant d (death). In other words, the infinite tail of the b-lognormal was cut away and replaced by just a straight line between s and d, leading to simple mathematical formulae ("History Formulae") allowing one to find this "finite b-lognormal" when the three instants b, s, and d are assigned. Next comes the crucial Peak-Locus Theorem.
It means that the GBM exponential may be regarded as the geometric locus of all the peaks of a one-parameter (i.e. the peak time p) family of b-lognormals. Since b-lognormals are pdfs, the area under each of them always equals 1 (normalization condition) and so, going from left to right on the time axis, the b-lognormals become more and more "peaky", and so they last less and less in time. This is precisely what happened in human history: civilizations that lasted millennia (like Ancient Greece and Rome) were succeeded by civilizations lasting just centuries (like the Italian Renaissance and the Portuguese, Spanish, French, British and US Empires), but these were more and more advanced in their "level of civilization". This "level of civilization" is what physicists call ENTROPY. Also, in refs. Maccone [3] and [4], this author proved that, for all GBMs, the (Shannon) Entropy of the b-lognormals in his Peak-Locus Theorem grows LINEARLY in time. The Molecular Clock, well known to geneticists for 50 years, shows that DNA base substitutions occur LINEARLY in time since they are neutral with respect to Darwinian selection. In simple words: DNA evolved by obeying the laws of quantum physics only (microscopic laws) and not by obeying assumed "Darwinian selection laws" (macroscopic laws). This is Kimura's neutral theory of molecular evolution. The conclusion is that the Molecular Clock and the b-lognormal Entropy are the same thing. At last, we reach the new, original result justifying the publication of this paper. On exoplanets, molecular evolution is proceeding at about the same rate as it proceeded on Earth: rather independently of the physical conditions of the exoplanet, if the DNA had the possibility to evolve in water initially. Thus, Evo-Entropy, i.e. the (Shannon) Entropy of the generic b-lognormal of the Peak-Locus Theorem, provides the Evo-SETI SCALE to measure the evolution of life on exoplanets.
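The defining property of the GBM used above, a lognormally distributed value whose mean grows exponentially, can be checked by direct simulation. The sketch below uses the standard exact-update form N(t+dt) = N(t)*exp((mu - s^2/2)dt + s*sqrt(dt)*Z); the drift, volatility, horizon and sample size are illustrative assumptions.

```python
import math
import random

# Sketch of the Geometric Brownian Motion underlying the Evo-SETI model:
# at fixed t the value is lognormal with exponential mean
# E[N(t)] = N0*exp(mu*t). mu, sigma, T and the sample size are
# illustrative assumptions.
random.seed(0)
N0, mu, sigma, T, steps = 1.0, 0.1, 0.2, 10.0, 100
dt = T / steps

def gbm_path():
    x = N0
    for _ in range(steps):
        x *= math.exp((mu - 0.5 * sigma ** 2) * dt
                      + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
    return x

sample_mean = sum(gbm_path() for _ in range(5000)) / 5000
print(sample_mean, N0 * math.exp(mu * T))  # sample mean tracks the exponential mean
```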
Peleg, Micha; Normand, Mark D
2015-09-01
When a vitamin's, pigment's or other food component's chemical degradation follows a known fixed order kinetics, and its rate constant's temperature-dependence follows a two parameter model, then, at least theoretically, it is possible to extract these two parameters from two successive experimental concentration ratios determined during the food's non-isothermal storage. This requires numerical solution of two simultaneous equations, themselves the numerical solutions of two differential rate equations, with a program especially developed for the purpose. Once calculated, these parameters can be used to reconstruct the entire degradation curve for the particular temperature history and predict the degradation curves for other temperature histories. The concept and computation method were tested with simulated degradation under rising and/or falling oscillating temperature conditions, employing the exponential model to characterize the rate constant's temperature-dependence. In computer simulations, the method's predictions were robust against minor errors in the two concentration ratios. The program to do the calculations was posted as freeware on the Internet. The temperature profile can be entered as an algebraic expression that can include 'If' statements, or as an imported digitized time-temperature data file, to be converted into an Interpolating Function by the program. The numerical solution of the two simultaneous equations requires close initial guesses of the exponential model's parameters. Programs were devised to obtain these initial values by matching the two experimental concentration ratios with a generated degradation curve whose parameters can be varied manually with sliders on the screen. These programs too were made available as freeware on the Internet and were tested with published data on vitamin A. Copyright © 2015 Elsevier Ltd. All rights reserved.
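The paper's setting can be sketched numerically: first-order degradation dC/dt = -k(T(t))*C with the "exponential model" k(T) = k_ref*exp(c*(T - T_ref)) for the rate constant's temperature dependence, integrated here with a simple Euler scheme over an oscillating temperature profile. The parameter values and the daily sinusoidal profile are illustrative assumptions, not data from the paper.

```python
import math

# Sketch of non-isothermal first-order degradation with the exponential
# model for k(T). All parameter values and the temperature profile are
# illustrative assumptions.
k_ref, c, T_ref = 0.02, 0.1, 25.0   # per-hour rate constant, 1/degC sensitivity

def temp_profile(t_h):
    return 25.0 + 5.0 * math.sin(2 * math.pi * t_h / 24.0)  # daily cycle

def concentration_ratio(t_end_h, dt=0.01):
    ratio = 1.0  # C(t)/C0
    t = 0.0
    while t < t_end_h:
        k = k_ref * math.exp(c * (temp_profile(t) - T_ref))
        ratio -= k * ratio * dt  # Euler step of dC/dt = -k*C
        t += dt
    return ratio

print(concentration_ratio(24.0))
```

Inverting this forward model from two observed concentration ratios, as the paper does, amounts to solving two such simulated equations simultaneously for k_ref and c.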
Environmental influences on the ¹³⁷Cs kinetics of the yellow-bellied turtle (Trachemys scripta)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, E.L.; Brisbin, L.I. Jr.
1996-02-01
Assessments of ecological risk require accurate predictions of contaminant dynamics in natural populations. However, simple deterministic models that assume constant uptake rates and elimination fractions may compromise both their ecological realism and their general application to animals with variable metabolism or diets. In particular, the temperature dependence of metabolic rates characteristic of ectotherms may lead to significant differences between observed and predicted contaminant kinetics. We examined the influence of a seasonally variable thermal environment on predicting the uptake and annual cycling of contaminants by ectotherms, using a temperature-dependent model of ¹³⁷Cs kinetics in free-living yellow-bellied turtles, Trachemys scripta. We compared predictions from this model with those of deterministic negative exponential and flexibly shaped Richards sigmoidal models. Concentrations of ¹³⁷Cs in a population of this species in Pond B, a radionuclide-contaminated nuclear reactor cooling reservoir, and ¹³⁷Cs uptake by uncontaminated turtles held captive in Pond B for 4 yr confirmed both the pattern of uptake and the equilibrium concentrations predicted by the temperature-dependent model. Almost 90% of the variance in the predicted time-integrated ¹³⁷Cs concentration was explainable by linear relationships with model parameters. The model was also relatively insensitive to uncertainties in the estimates of ambient temperature, suggesting that adequate estimates of temperature-dependent ingestion and elimination may require relatively few measurements of ambient conditions at sites of interest. Analyses of Richards sigmoidal models of ¹³⁷Cs uptake indicated significant differences from a negative exponential trajectory in the 1st yr after the turtles' release into Pond B. 76 refs., 7 figs., 5 tabs.
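For reference, the deterministic negative exponential uptake model against which the temperature-dependent model was compared has the closed form C(t) = C_eq*(1 - exp(-k*t)). The equilibrium concentration and rate constant below are illustrative assumptions.

```python
import math

# Sketch of the deterministic negative exponential uptake model used as a
# comparison in the study: C(t) = C_eq * (1 - exp(-k*t)).
# c_eq and k are illustrative assumptions, not fitted values.
def uptake(t_days, c_eq=100.0, k=0.01):
    return c_eq * (1.0 - math.exp(-k * t_days))

print(uptake(30.0), uptake(365.0))  # approaches c_eq monotonically
```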
Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.
2016-01-01
Background The purpose of this study is to statistically assess whether a bi-exponential intravoxel incoherent motion (IVIM) model better characterizes the diffusion weighted imaging (DWI) signal of malignant breast tumors than a mono-exponential Gaussian diffusion model. Methods 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-square mono-exponential fitting and segmented least-square bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. The F-test and the Akaike Information Criterion (AIC) were used to statistically assess the preference for the mono-exponential or bi-exponential model using region-of-interest (ROI)-averaged and voxel-wise analyses. Results For the ROI-averaged analysis, 15 tumors were significantly better fitted by a bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential and bi-exponential preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay, while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analyses. Conclusions Although the presence of an IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under a clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance of statistically examining breast cancer DWI signal characteristics in practice. PMID:27709078
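The model-selection logic of the study can be sketched with a toy AIC comparison: generate a noiseless IVIM-like signal, fit a mono-exponential ADC by log-linear least squares, and score both models with AIC = n*ln(RSS/n) + 2k. The f, D and D* values and the b-value set are illustrative assumptions, and with noiseless data the bi-exponential (exact) model necessarily wins, unlike in the clinical data.

```python
import math

# Toy AIC comparison of mono- vs bi-exponential DWI models. The signal
# follows the IVIM form S(b) = f*exp(-b*Dstar) + (1-f)*exp(-b*D); all
# parameter values and b-values are illustrative assumptions.
f, D, Dstar = 0.1, 1.0e-3, 10.0e-3
bvals = [0, 50, 100, 200, 400, 600, 800]
signal = [f * math.exp(-b * Dstar) + (1 - f) * math.exp(-b * D) for b in bvals]

# mono-exponential fit: ln S = -b * ADC (least-squares line through the origin)
adc = -sum(b * math.log(s) for b, s in zip(bvals, signal)) / sum(b * b for b in bvals)
mono = [math.exp(-b * adc) for b in bvals]

def aic(model, k_params):
    n = len(signal)
    rss = sum((m - s) ** 2 for m, s in zip(model, signal))
    return n * math.log(rss / n + 1e-300) + 2 * k_params  # guard against log(0)

print(aic(mono, 1), aic(signal, 3))  # the exact bi-exponential wins on clean data
```

With measurement noise added, the AIC penalty for the two extra parameters can favor the mono-exponential model, which is the situation the study reports for most voxels.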
Liu, Gong Xin; Daut, Jürgen
2002-01-01
K+ channels of isolated guinea-pig cardiomyocytes were studied using the patch-clamp technique. At transmembrane potentials between −120 and −220 mV we observed inward currents through an apparently novel channel. The novel channel was strongly rectifying, no outward currents could be recorded. Between −200 and −160 mV it had a slope conductance of 42.8 ± 3.0 pS (s.d.; n = 96). The open probability (Po) showed a sigmoid voltage dependence and reached a maximum of 0.93 at −200 mV, half-maximal activation was approximately −150 mV. The voltage dependence of Po was not affected by application of 50 μm isoproterenol. The open-time distribution could be described by a single exponential function, the mean open time ranged between 73.5 ms at −220 mV and 1.4 ms at −160 mV. At least two exponential components were required to fit the closed time distribution. Experiments with different external Na+, K+ and Cl− concentrations suggested that the novel channel is K+ selective. Extracellular Ba2+ ions gave rise to a voltage-dependent reduction in Po by inducing long closed states; Cs+ markedly reduced mean open time at −200 mV. In cell-attached recordings the novel channel frequently converted to a classical inward rectifier channel, and vice versa. This conversion was not voltage dependent. After excision of the patch, the novel channel always converted to a classical inward rectifier channel within 0–3 min. This conversion was not affected by intracellular Mg2+, phosphatidylinositol (4,5)-bisphosphate or spermine. Taken together, our findings suggest that the novel K+ channel represents a different ‘mode’ of the classical inward rectifier channel in which opening occurs only at very negative potentials. PMID:11897847