SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, X; Duan, J; Popple, R
2014-06-01
Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of radial dose functions of the new iodine brachytherapy source: Iodine-125 Seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in less than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for searching coefficients of bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
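The PSO loop described above (particles carrying candidate coefficients, updated toward their personal and global bests until a convergence criterion or generation cap is reached) can be sketched in numpy. This is a minimal illustration on synthetic bi-exponential data, not the published AgX-100 radial dose values; the swarm settings (inertia 0.7, acceleration 1.5, 40 particles) are conventional defaults, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative radial-dose samples (synthetic, NOT the published AgX-100 data).
r = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0])
g = 1.2 * np.exp(-0.05 * r) - 0.2 * np.exp(-0.8 * r)

def cost(p):
    """Sum-of-squares misfit of bi-exponentials a1*e^(-b1 r) + a2*e^(-b2 r)."""
    a1, b1, a2, b2 = p.T
    model = (a1[:, None] * np.exp(-b1[:, None] * r)
             + a2[:, None] * np.exp(-b2[:, None] * r))
    return ((model - g) ** 2).sum(axis=1)

n_part, n_gen = 40, 1500
lo = np.array([-2.0, 0.0, -2.0, 0.0])      # search-space bounds
hi = np.array([2.0, 2.0, 2.0, 2.0])
x = rng.uniform(lo, hi, (n_part, 4))       # particle positions = coefficients
v = np.zeros_like(x)
pbest, pbest_c = x.copy(), cost(x)         # personal best positions and costs
gbest = pbest[pbest_c.argmin()].copy()     # global best position
init_cost = pbest_c.min()

for gen in range(n_gen):
    r1, r2 = rng.random((2, n_part, 1))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    c = cost(x)
    better = c < pbest_c
    pbest[better], pbest_c[better] = x[better], c[better]
    gbest = pbest[pbest_c.argmin()].copy()
    if pbest_c.min() < 1e-12:              # convergence criterion
        break

best_cost = pbest_c.min()
```

A tri-exponential fit follows the same pattern with six coefficients per particle instead of four.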
Fast and accurate fitting and filtering of noisy exponentials in Legendre space.
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters.
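The phase-free filtering idea in the abstract above (representing the noisy exponential in a low-dimensional Legendre basis) can be illustrated with numpy's Legendre module. This sketch shows only the truncation-as-lowpass step; the paper's fast parameter-retrieval scheme in the Legendre domain is not reproduced, and the signal and noise levels are invented for illustration.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

# Noisy single exponential on [0, 5], mapped onto the Legendre interval [-1, 1].
t = np.linspace(0.0, 5.0, 501)
clean = 2.0 * np.exp(-t / 1.3)
noisy = clean + rng.normal(0.0, 0.1, t.size)
x = 2.0 * t / t[-1] - 1.0                  # rescale time to [-1, 1]

# Project onto the first few Legendre polynomials and evaluate back:
# truncating the expansion removes noise without the phase shift of a
# conventional lowpass filter.
coef = legendre.legfit(x, noisy, deg=8)
filtered = legendre.legval(x, coef)

rms_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rms_filtered = np.sqrt(np.mean((filtered - clean) ** 2))
```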
Howard, Robert W
2014-09-01
The power law of practice holds that a power function best interrelates skill performance and amount of practice. However, the law's validity and generality are moot. Some researchers argue that it is an artifact of averaging individual exponential curves while others question whether the law generalizes to complex skills and to performance measures other than response time. The present study tested the power law's generality to development over many years of a very complex cognitive skill, chess playing, with 387 skilled participants, most of whom were grandmasters. A power or logarithmic function best fit grouped data but individuals showed much variability. An exponential function usually was the worst fit to individual data. Groups differing in chess talent were compared and a power function best fit the group curve for the more talented players while a quadratic function best fit that for the less talented. After extreme amounts of practice, a logarithmic function best fit grouped data but a quadratic function best fit most individual curves. Individual variability is great and the power law or an exponential law are not the best descriptions of individual chess skill development.
Exponential Correlation of IQ and the Wealth of Nations
ERIC Educational Resources Information Center
Dickerson, Richard E.
2006-01-01
Plots of mean IQ and per capita real Gross Domestic Product for groups of 81 and 185 nations, as collected by Lynn and Vanhanen, are best fitted by an exponential function of the form: GDP = "a" * 10["b"*(IQ)], where "a" and "b" are empirical constants. Exponential fitting yields markedly higher correlation coefficients than either linear or…
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Pade (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Pade approximants. A robust implicit formula was derived by exponential fitting of the trapezoidal rule. Application of these algorithms to integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.
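The core idea above can be made concrete on the linear test equation y' = -k(y - y_inf): an exponential interpolant reproduces the decaying solution exactly and is stable for any step size, while polynomial-based explicit Euler blows up once k·h exceeds 2. This is a generic sketch of the exponential-fitting idea, not the CREK1D algorithm itself.

```python
import numpy as np

# Stiff linear test problem y' = -k (y - y_inf), exact solution known.
k, y_inf, y0, h, n = 50.0, 1.0, 0.0, 0.1, 20   # note k*h = 5 > 2 (stiff step)

def explicit_euler(y):
    # Polynomial (linear) interpolant: unstable here since |1 - k*h| > 1.
    return y + h * (-k) * (y - y_inf)

def exponential_euler(y):
    # Exponential interpolant: exact for this test equation, A-stable.
    return y_inf + (y - y_inf) * np.exp(-k * h)

ye, yx = y0, y0
for _ in range(n):
    ye = explicit_euler(ye)
    yx = exponential_euler(yx)

exact = y_inf + (y0 - y_inf) * np.exp(-k * h * n)
err_explicit = abs(ye - exact)       # diverges: ~(1 - k*h)^n
err_exponential = abs(yx - exact)    # machine precision
```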
Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.
2016-01-01
Background The purpose of this study is to statistically assess whether the bi-exponential intravoxel incoherent motion (IVIM) model characterizes the diffusion weighted imaging (DWI) signal of malignant breast tumors better than the mono-exponential Gaussian diffusion model. Methods 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-square mono-exponential fitting and segmented least-square bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. An F-test and the Akaike Information Criterion (AIC) were used to statistically assess the preference for the mono-exponential or bi-exponential model using region-of-interest (ROI)-averaged and voxel-wise analysis. Results For ROI-averaged analysis, 15 tumors were significantly better fitted by the bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential and bi-exponential preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analysis. Conclusions Although the presence of an IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under a clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance of statistically examining the breast cancer DWI signal characteristics in practice. PMID:27709078
NASA Astrophysics Data System (ADS)
Hashimoto, Chihiro; Panizza, Pascal; Rouch, Jacques; Ushiki, Hideharu
2005-10-01
A new analytical concept is applied to the kinetics of the shrinking process of poly(N-isopropylacrylamide) (PNIPA) gels. When PNIPA gels are put into hot water above the critical temperature, two-step shrinking is observed and the secondary shrinking of the gels is fitted well by a stretched exponential function. The exponent β characterizing the stretched exponential is always higher than one, although there are few analytical interpretations of the stretched exponential function with β>1. As a new interpretation of this function, we propose a superposition of step (Heaviside) functions, from which a new distribution function of characteristic times is deduced.
Concave utility, transaction costs, and risk in measuring discounting of delayed rewards.
Kirby, Kris N; Santiesteban, Mariana
2003-01-01
Research has consistently found that the decline in the present values of delayed rewards as delay increases is better fit by hyperbolic than by exponential delay-discounting functions. However, concave utility, transaction costs, and risk each could produce hyperbolic-looking data, even when the underlying discounting function is exponential. In Experiments 1 (N = 45) and 2 (N = 103), participants placed bids indicating their present values of real future monetary rewards in computer-based 2nd-price auctions. Both experiments suggest that utility is not sufficiently concave to account for the superior fit of hyperbolic functions. Experiment 2 provided no evidence that the effects of transaction costs and risk are large enough to account for the superior fit of hyperbolic functions.
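The hyperbolic-versus-exponential model comparison at the heart of this literature is easy to reproduce. The sketch below fits both one-parameter discounting functions to hypothetical present-value data (the numbers are invented and shaped hyperbolically, so the hyperbolic fit wins by construction; the experiments' auction data are not reproduced here).

```python
import numpy as np

# Hypothetical present values of a $100 reward at increasing delays (days);
# illustrative numbers generated from a hyperbolic rule, not experimental data.
delay = np.array([1.0, 7.0, 30.0, 90.0, 180.0, 365.0])
value = 100.0 / (1.0 + 0.02 * delay)

ks = np.linspace(1e-4, 0.2, 2000)          # candidate discount-rate grid

def best_sse(model):
    """Smallest sum of squared errors over the discount-rate grid."""
    preds = model(ks[:, None], delay[None, :])
    return ((preds - value) ** 2).sum(axis=1).min()

sse_hyp = best_sse(lambda k, d: 100.0 / (1.0 + k * d))   # hyperbolic V=A/(1+kD)
sse_exp = best_sse(lambda k, d: 100.0 * np.exp(-k * d))  # exponential V=A*e^(-kD)
```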
Paul A. Murphy; Robert M. Farrar
1981-01-01
In this study, 588 before-cut and 381 after-cut diameter distributions of uneven-aged loblolly-shortleaf pine stands were fitted to two different forms of the exponential probability density function. The left-truncated and doubly truncated forms of the exponential were used.
On the Prony series representation of stretched exponential relaxation
NASA Astrophysics Data System (ADS)
Mauro, John C.; Mauro, Yihong Z.
2018-09-01
Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of the stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first-derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
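The basic Prony-series construction discussed above can be sketched directly: fix logarithmically spaced time constants, then determine the weights by linear least squares against the stretched exponential. This generic version uses unconstrained least squares and an arbitrary 12-term series; the paper's optimized coefficients for critical β values are not reproduced.

```python
import numpy as np

beta = 3.0 / 5.0                      # one critical stretching exponent
t = np.logspace(-3, 2, 400)           # time grid (units of tau)
target = np.exp(-t ** beta)           # stretched exponential with tau = 1

# Prony series: fixed, logarithmically spaced simple-exponential time
# constants; the weights then form a linear least-squares problem.
taus = np.logspace(-3, 2, 12)
A = np.exp(-t[:, None] / taus[None, :])
w, *_ = np.linalg.lstsq(A, target, rcond=None)
prony = A @ w                         # Prony-series approximation

max_err = np.abs(prony - target).max()
```

Increasing the number of terms pushes the error down further, but no finite series captures the divergent first derivative of exp(-(t/tau)^beta) at t = 0.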
State of charge modeling of lithium-ion batteries using dual exponential functions
NASA Astrophysics Data System (ADS)
Kuo, Ting-Jung; Lee, Kung-Yen; Huang, Chien-Kang; Chen, Jau-Horng; Chiu, Wei-Li; Huang, Chih-Fang; Wu, Shuen-De
2016-05-01
A mathematical model is developed by fitting the discharging curve of LiFePO4 batteries and used to investigate the relationship between the state of charge and the closed-circuit voltage. The proposed mathematical model consists of two exponential terms and a constant term, which closely fit the characteristics of dual equivalent RC circuits representing a LiFePO4 battery. One exponential term represents the stable discharging behavior, the other represents the unstable discharging behavior, and the constant term represents the cut-off voltage.
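A dual-exponential-plus-constant model of this form can be fitted separably: grid-search the two time constants and solve the three linear amplitudes by least squares at each candidate pair. The numbers below are an illustrative synthetic discharge curve, not measured LiFePO4 data, and the fitting procedure is a generic sketch rather than the authors' method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic closed-circuit-voltage curve: a*e^(-t/tau1) + b*e^(-t/tau2) + c.
t = np.linspace(0.0, 10.0, 200)
true = 0.30 * np.exp(-t / 0.5) + 0.15 * np.exp(-t / 4.0) + 3.2
v_meas = true + rng.normal(0.0, 0.002, t.size)

# Separable (variable-projection style) fit: grid over the nonlinear time
# constants, linear least squares for the amplitudes and constant term.
tau_grid = np.logspace(-1, 1, 60)
best = (np.inf, None)
for i, t1 in enumerate(tau_grid):
    for t2 in tau_grid[i + 1:]:
        A = np.column_stack([np.exp(-t / t1), np.exp(-t / t2), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, v_meas, rcond=None)
        sse = ((A @ coef - v_meas) ** 2).sum()
        if sse < best[0]:
            best = (sse, (t1, t2, *coef))

sse, (tau1, tau2, a, b, c) = best
```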
Automatic selection of arterial input function using tri-exponential models
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David
2009-02-01
Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and the dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted to every candidate AIF using the Levenberg-Marquardt method, and the best-fitted AIF is selected. Our method has been applied to DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rate in artery segmentation for 19 cases was 89.6% +/- 15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2=0.946, P(T<=t)=0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
Quantitative photoplethysmography: Lambert-Beer law or inverse function incorporating light scatter.
Cejnar, M; Kobler, H; Hunyor, S N
1993-03-01
Finger blood volume is commonly determined from measurement of infra-red (IR) light transmittance using the Lambert-Beer law of light absorption derived for use in non-scattering media, even when such transmission involves light scatter around the phalangeal bone. Simultaneous IR transmittance and finger volume were measured over the full dynamic range of vascular volumes in seven subjects and outcomes compared with data fitted according to the Lambert-Beer exponential function and an inverse function derived for light attenuation by scattering materials. Curves were fitted by the least-squares method and goodness of fit was compared using standard errors of estimate (SEE). The inverse function gave a better data fit in six of the subjects: mean SEE 1.9 (SD 0.7, range 0.7-2.8) and 4.6 (2.2, 2.0-8.0) respectively (p < 0.02, paired t-test). Thus, when relating IR transmittance to blood volume, as occurs in the finger during measurements of arterial compliance, an inverse function derived from a model of light attenuation by scattering media gives more accurate results than the traditional exponential fit.
2012-09-01
used in this paper to compare probability density functions, the Lilliefors test and the Kullback-Leibler distance. The Lilliefors test is a goodness ... of interest in this study are the Rayleigh distribution and the exponential distribution. The Lilliefors test is used to test goodness-of-fit for ... Lilliefors test for goodness of fit with an exponential distribution. These results suggest that,
Malachowski, George C; Clegg, Robert M; Redford, Glen I
2007-12-01
A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
Deadline rush: a time management phenomenon and its mathematical description.
König, Cornelius J; Kleinmann, Martin
2005-01-01
A typical time management phenomenon is the rush before a deadline. Behavioral decision making research can be used to predict how behavior changes before a deadline. People are likely not to work on a project with a deadline in the far future because they generally discount future outcomes. Only when the deadline is close are people likely to work. On the basis of recent intertemporal choice experiments, the authors argue that a hyperbolic function should provide a more accurate description of the deadline rush than an exponential function predicted by an economic model of discounted utility. To show this, the fit of the hyperbolic and the exponential function were compared with data sets that describe when students study for exams. As predicted, the hyperbolic function fit the data significantly better than the exponential function. The implication for time management decisions is that they are most likely to be inconsistent over time (i.e., people make a plan how to use their time but do not follow it).
Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the numbers of exact analyses which involve computationally expensive finite element analysis.
Biological growth functions describe published site index curves for Lake States timber species.
Allen L. Lundgren; William A. Dolid
1970-01-01
Two biological growth functions, an exponential-monomolecular function and a simple monomolecular function, have been fit to published site index curves for 11 Lake States tree species: red, jack, and white pine, balsam fir, white and black spruce, tamarack, white-cedar, aspen, red oak, and paper birch. Both functions closely fit all published curves except those for...
Design data for radars based on 13.9 GHz Skylab scattering coefficient measurements
NASA Technical Reports Server (NTRS)
Moore, R. K. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Measurements made at 13.9 GHz with the radar scatterometer on Skylab have been combined to produce median curves of the variation of scattering coefficient with angle of incidence out to 45 deg. Because of the large number of observations, and the large area averaged for each measured data point, these curves may be used as a new design base for radars. A reasonably good fit at larger angles is obtained using the theoretical expression based on an exponential height correlation function and also using Lambert's law. For angles under 10 deg, a different fit based on the exponential correlation function, and a fit based on geometric optics expressions are both reasonably valid.
Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters
Landowne, David; Yuan, Bin; Magleby, Karl L.
2013-01-01
Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
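The start-with-everything-then-prune strategy above can be sketched compactly. This version substitutes nonnegative least squares (via simple multiplicative updates) for the paper's maximum-likelihood fitting, and uses a crude area threshold in place of the iterative combine-and-refit step, so it is an illustration of the idea rather than the published algorithm.

```python
import numpy as np

# Synthetic dwell-time density from two exponential components.
edges = np.logspace(-2, 2, 80)
mid = np.sqrt(edges[:-1] * edges[1:])                 # geometric bin centers
density = 0.7 * np.exp(-mid / 0.1) / 0.1 + 0.3 * np.exp(-mid / 3.0) / 3.0

# Start with many logarithmically spaced time constants so none are missed;
# fit all component areas at once under a nonnegativity constraint.
taus = np.logspace(-2.5, 2.5, 30)
A = np.exp(-mid[:, None] / taus[None, :]) / taus[None, :]
w = np.full(taus.size, 1.0 / taus.size)
for _ in range(5000):
    # Multiplicative update for nonnegative least squares (A, density >= 0).
    w *= (A.T @ density) / (A.T @ (A @ w) + 1e-30)

keep = w > 1e-3 * w.sum()            # prune components with negligible area
fit = A[:, keep] @ w[keep]
n_components = keep.sum()
rel_err = np.abs(fit - density).max() / density.max()
```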
Alternative analytical forms to model diatomic systems based on the deformed exponential function.
da Fonsêca, José Erinaldo; de Oliveira, Heibbe Cristhian B; da Cunha, Wiliam Ferreira; Gargano, Ricardo
2014-07-01
Using a deformed exponential function and the molecular-orbital theory for the simplest molecular ion, two new analytical functions are proposed to represent the potential energy of ground-state diatomic systems. The quality of these new forms was tested by fitting the ab initio electronic energies of the systems LiH, LiNa, NaH, RbH, KH, H2, Li2, K2, H2(+), BeH(+) and Li2(+). From these fits, it was verified that these new proposals are able to adequately describe homonuclear, heteronuclear and cationic diatomic systems with good accuracy. Vibrational spectroscopic constant results obtained from these two proposals are in good agreement with experimental data.
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-01-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868
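The continuous-time rescaling procedure the abstract builds on is short to write down for the simplest case, a constant-rate Poisson model: rescaled ISIs should be Exp(1), so their CDF transform should be uniform, which a KS statistic checks. This sketch uses simulated data and the correct model; the paper's discrete-time corrections are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated spike train from a constant-rate Poisson process.
rate = 20.0                               # spikes per second
isis = rng.exponential(1.0 / rate, 2000)  # interspike intervals

# Time rescaling: integrate the model intensity over each ISI.
z = rate * isis                           # should be Exp(1) if model is right
u = np.sort(1.0 - np.exp(-z))             # should be Uniform(0, 1)

# Kolmogorov-Smirnov statistic against the uniform distribution.
n = u.size
ks_stat = np.abs(u - (np.arange(n) + 0.5) / n).max() + 0.5 / n
ks_crit = 1.36 / np.sqrt(n)               # approximate 5% critical value
```

With an inhomogeneous model, `rate * isi` is replaced by the integral of the fitted intensity over each interval; the uniformity check is unchanged.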
dPotFit: A computer program to fit diatomic molecule spectral data to potential energy functions
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.
2017-01-01
This paper describes program dPotFit, which performs least-squares fits of diatomic molecule spectroscopic data consisting of any combination of microwave, infrared or electronic vibrational bands, fluorescence series, and tunneling predissociation level widths, involving one or more electronic states and one or more isotopologs, and for appropriate systems, second virial coefficient data, to determine analytic potential energy functions defining the observed levels and other properties of each state. Four families of analytical potential functions are available for fitting in the current version of dPotFit: the Expanded Morse Oscillator (EMO) function, the Morse/Long-Range (MLR) function, the Double-Exponential/Long-Range (DELR) function, and the 'Generalized Potential Energy Function' (GPEF) of Šurkus, which incorporates a variety of polynomial functional forms. In addition, dPotFit allows sets of experimental data to be tested against predictions generated from three other families of analytic functions, namely, the 'Hannover Polynomial' (or "X-expansion") function, and the 'Tang-Toennies' and Scoles-Aziz 'HFD', exponential-plus-van der Waals functions, and from interpolation-smoothed pointwise potential energies, such as those obtained from ab initio or RKR calculations. dPotFit also allows the fits to determine atomic-mass-dependent Born-Oppenheimer breakdown functions, and singlet-state Λ-doubling, or 2Σ splitting radial strength functions for one or more electronic states. dPotFit always reports both the 95% confidence limit uncertainty and the "sensitivity" of each fitted parameter; the latter indicates the number of significant digits that must be retained when rounding fitted parameters, in order to ensure that predictions remain in full agreement with experiment. 
It will also, if requested, apply a "sequential rounding and refitting" procedure to yield a final parameter set defined by a minimum number of significant digits, while ensuring no significant loss of accuracy in the predictions yielded by those parameters.
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.
1986-01-01
A computer implementation of Prony's curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capabilities due to the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) in order to obtain the equal time increments. The resultant information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least squares solution can be applied to obtain the final form of the equation.
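The direct solve the abstract refers to is the classical Prony method: with equally spaced samples, a sum of exponentials satisfies a linear recurrence, so the exponents fall out of a polynomial root-finding step and the amplitudes from ordinary least squares. A minimal two-exponential sketch on clean synthetic data:

```python
import numpy as np

# Equally spaced samples of x(t) = 2*e^(-0.5 t) + 1*e^(-2 t).
dt = 0.1
n, p = 50, 2                                   # samples, number of exponentials
t = dt * np.arange(n)
x = 2.0 * np.exp(-0.5 * t) + 1.0 * np.exp(-2.0 * t)

# Step 1: linear prediction. For p exponentials, x[k] = c1*x[k-1] + c2*x[k-2].
A = np.column_stack([x[p - 1:-1], x[p - 2:-2]])
c, *_ = np.linalg.lstsq(A, x[p:], rcond=None)

# Step 2: roots of z^2 - c1*z - c2 are the per-step decay factors.
roots = np.roots(np.concatenate([[1.0], -c]))
rates = (-np.log(roots) / dt).real             # recovered exponents
order = np.argsort(rates)
rates = rates[order]

# Step 3: amplitudes by ordinary least squares against the known exponents.
V = np.exp(-np.outer(t, rates))
amps, *_ = np.linalg.lstsq(V, x, rcond=None)
```

On noisy data the recurrence step is ill-conditioned, which is why the equal-increment requirement the paper works around matters so much in practice.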
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
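The mimicry described above is easy to reproduce: a survival curve built from a few exponentials with widely spread time constants looks nearly linear on a log-log plot. The time constants below are arbitrary illustrative values, not the empirical sleep/wake parameters from the study.

```python
import numpy as np

# Multi-exponential survival function with widely spread time constants.
t = np.logspace(-1, 2.5, 200)
taus = np.array([0.3, 3.0, 30.0, 300.0])
surv = np.exp(-t[:, None] / taus[None, :]).mean(axis=1)

# A power law S(t) ~ t^slope is a straight line in log-log space; a high
# correlation here shows how a multi-exponential can mimic one.
logt, logs = np.log10(t), np.log10(surv)
slope, intercept = np.polyfit(logt, logs, 1)
r = np.corrcoef(logt, logs)[0, 1]
```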
Li, Huailiang; Yang, Yigang; Wang, Qibiao; Tuo, Xianguo; Julian Henderson, Mark; Courtois, Jérémie
2017-12-01
The fluence rate of cosmic-ray-induced neutrons (CRINs) varies with many environmental factors. While many simulation and experimental studies have focused mainly on the variation with altitude, how CRINs vary with geomagnetic cutoff rigidity (which is related to latitude and longitude) has not been well characterized. In this article, a double-exponential fitting function, F = (A1·e^(−A2·CR) + A3)·e^(B1·Al), is proposed to evaluate the CRINs' fluence rate as a function of geomagnetic cutoff rigidity (CR) and altitude (Al). The fit attains R² values up to 0.9954, and the CRINs' fluence rate at an arbitrary location (latitude, longitude, and altitude) can be easily evaluated from the proposed function. Field measurements of the CRINs' fluence rate and H*(10) rate on Mt. Emei and Mt. Bowa were carried out using FHT-762 and LB 6411 neutron probes, respectively, and the evaluation shows that the fitting function agrees well with the measurement results. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Schaefer, Bradley E.; Dyson, Samuel E.
1996-08-01
A common gamma-ray burst light-curve shape is the ``FRED,'' or fast-rise exponential-decay. But how exponential is the tail? Are the tails merely decaying with some smoothly decreasing decline rate, or is the functional form an exponential to within the uncertainties? If the shape really is an exponential, then it would be reasonable to assign a physically significant time scale to the burst; that is, some specific mechanism would have to produce the characteristic decay profile. So if an exponential is found, then we will know that the decay light-curve profile is governed by one mechanism (at least for simple FREDs) instead of by complex or multiple mechanisms, and a specific number amenable to theory can be derived for each FRED. We report on the fitting of exponentials (and two other shapes) to the tails of ten bright BATSE bursts. The BATSE trigger numbers are 105, 257, 451, 907, 1406, 1578, 1883, 1885, 1989, and 2193. Our technique was to perform a least-squares fit to the tail from some time after peak until the light curve approaches background. We find that most FREDs are not exponentials, although a few come close. But since the other candidate shapes come close just as often, we conclude that the FREDs are misnamed.
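The tail-fitting step can be sketched as a log-linear least-squares fit. This is a minimal illustration assuming noiseless counts and a known constant background (the function name is illustrative); a real analysis must also choose the start time after the peak and weight by Poisson errors:

```python
import numpy as np

def fit_exponential_tail(t, counts, background):
    """Fit counts(t) = A * exp(-t / tau) + background to a decay tail by
    least squares in log space after background subtraction."""
    y = np.log(counts - background)
    slope, intercept = np.polyfit(t, y, 1)
    return np.exp(intercept), -1.0 / slope  # amplitude A and e-folding time tau
```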
Cole-Davidson dynamics of simple chain models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dotson, Taylor C.; McCoy, John Dwane; Adolf, Douglas Brian
2008-10-01
Rotational relaxation functions of the end-to-end vector of short, freely jointed and freely rotating chains were determined from molecular dynamics simulations. The associated response functions were obtained from the one-sided Fourier transform of the relaxation functions. The Cole-Davidson function was used to fit the response functions with extensive use being made of Cole-Cole plots in the fitting procedure. For the systems studied, the Cole-Davidson function provided remarkably accurate fits [as compared to the transform of the Kohlrausch-Williams-Watts (KWW) function]. The only appreciable deviations from the simulation results were in the high frequency limit and were due to ballistic or free rotation effects. The accuracy of the Cole-Davidson function appears to be the result of the transition in the time domain from stretched exponential behavior at intermediate time to single exponential behavior at long time. Such a transition can be explained in terms of a distribution of relaxation times with a well-defined longest relaxation time. Since the Cole-Davidson distribution has a sharp cutoff in relaxation time (while the KWW function does not), it makes sense that the Cole-Davidson would provide a better frequency-domain description of the associated response function than the KWW function does.
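For reference, the Cole-Davidson response function is chi(omega) = (1 + i·omega·tau)^(−beta); the beta = 1 case reduces to the Debye form, whose Cole-Cole plot is a semicircle. A minimal numerical sketch (tau, beta, and the frequency grid are illustrative):

```python
import numpy as np

def cole_davidson(omega, tau, beta):
    """Cole-Davidson complex response function chi(omega) = (1 + 1j*omega*tau)**(-beta)."""
    return (1.0 + 1j * omega * tau) ** (-beta)

# Points for a Cole-Cole plot: real part vs. negative imaginary part.
omega = np.logspace(-3, 3, 400)
chi = cole_davidson(omega, tau=1.0, beta=0.6)
re, neg_im = chi.real, -chi.imag
```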
Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S
2003-10-01
Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm(2) in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
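The stretched-exponential decay S(b) = S0·exp(−(b·DDC)^alpha) can be linearized twice to recover alpha and the distributed diffusion coefficient. A sketch on synthetic noiseless data (parameter values hypothetical; real fits use nonlinear least squares on noisy signal):

```python
import numpy as np

def fit_stretched_exponential(b, signal, s0):
    """Recover (alpha, ddc) from S(b) = s0 * exp(-(b * ddc)**alpha) via the
    double-log linearization ln(-ln(S/s0)) = alpha*ln(b) + alpha*ln(ddc)."""
    y = np.log(-np.log(signal / s0))
    alpha, c = np.polyfit(np.log(b), y, 1)
    return alpha, np.exp(c / alpha)
```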
Statistical modeling of storm-level Kp occurrences
Remick, K.J.; Love, J.J.
2006-01-01
We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events occur statistically with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short-duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait-time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding such Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
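The exponential wait-time test is straightforward to sketch: for a Poisson process the maximum-likelihood estimate of the mean wait is the sample mean, and the exponential density has a coefficient of variation of exactly 1. The simulation below uses the reported 7.12-day mean wait for Kp ≥ 5; the seed and sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
mean_wait_days = 7.12  # reported mean wait between Kp >= 5 storms
waits = rng.exponential(scale=mean_wait_days, size=100_000)

# Memoryless (Poisson) process: sample mean estimates the mean wait,
# and std/mean should be close to 1 for an exponential distribution.
mean_wait = waits.mean()
cv = waits.std() / mean_wait
```

Clumped same-storm occurrences would show up as an excess of short waits, pushing the coefficient of variation above 1 and breaking the exponential fit, which is what the study observed before de-clustering.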
AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Sanjib; Bland-Hawthorn, Joss
2013-08-20
An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.
Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.
2011-01-01
We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095
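A quick way to build intuition for the Mittag-Leffler solutions is the defining series E_alpha(z) = sum_k z^k / Gamma(alpha·k + 1), which reduces to the ordinary exponential at alpha = 1. A truncated-series sketch (adequate only for moderate |z|; production code uses specialized algorithms):

```python
import math

def mittag_leffler(alpha, z, terms=100):
    """Truncated defining series E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1).

    Adequate for moderate |z| only; alpha = 1 recovers the ordinary exponential."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))
```

The second assertion below uses the known identity E_{1/2}(z) = exp(z^2)·erfc(−z) as an independent check.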
NASA Astrophysics Data System (ADS)
Brown, J. S.; Shaheen, S. E.
2018-04-01
Disorder in organic semiconductors has made it challenging to achieve performance gains; this is a result of the many competing and often nuanced mechanisms affecting charge transport. In this article, we attempt to illuminate one of these mechanisms in the hopes of aiding experimentalists in exceeding current performance thresholds. Using a heuristic exponential function, energetic correlation has been added to the Gaussian disorder model (GDM). The new model is grounded in the concept that energetic correlations can arise in materials without strong dipoles or dopants, but may be a result of an incomplete crystal formation process. The proposed correlation has been used to explain the exponential tail states often observed in these materials; it is also better able to capture the carrier mobility field dependence, commonly known as the Poole-Frenkel dependence, when compared to the GDM. Investigation of simulated current transients shows that the exponential tail states do not necessitate Montroll and Scher fits. Montroll and Scher fits occur in the form of two distinct power law curves that share a common constant in their exponent; they are clearly observed as straight lines when the current transient is plotted using a log-log scale. Typically, these fits have been found appropriate for describing amorphous silicon and other disordered materials which display exponential tail states. Furthermore, we observe the proposed correlation function leads to domains of energetically similar sites separated by boundaries where the site energies exhibit stochastic deviation. These boundary sites are found to be the source of the extended exponential tail states, and are responsible for high charge visitation frequency, which may be associated with the molecular turnover number and ultimately the material stability.
Amplitude, Latency, and Peak Velocity in Accommodation and Disaccommodation Dynamics
Papadatou, Eleni; Ferrer-Blasco, Teresa; Montés-Micó, Robert
2017-01-01
The aim of this work was to ascertain whether there are differences in amplitude, latency, and peak velocity of accommodation and disaccommodation responses when different analysis strategies are used to compute them, such as fitting different functions to the responses or smoothing them prior to computing the parameters. Accommodation and disaccommodation responses from four subjects to pulse changes in demand were recorded by means of aberrometry. Three different strategies were followed to analyze such responses: fitting an exponential function to the experimental data; fitting a Boltzmann sigmoid function to the data; and smoothing the data. Amplitude, latency, and peak velocity of the responses were extracted. Significant differences were found between the peak velocity in accommodation computed by fitting an exponential function and by smoothing the experimental data (mean difference 2.36 D/s). Regarding disaccommodation, significant differences were found in latency and peak velocity calculated with the same two strategies (mean differences of 0.15 s and −3.56 D/s, respectively). The strategy used to analyze accommodation and disaccommodation responses seems to affect the parameters that describe accommodation and disaccommodation dynamics. These results highlight the importance of choosing the most adequate analysis strategy in each individual to obtain the parameters that characterize accommodation and disaccommodation dynamics. PMID:29226128
Calvi, Andrea; Ferrari, Alberto; Sbuelz, Luca; Goldoni, Andrea; Modesti, Silvio
2016-05-19
Multi-walled carbon nanotubes (CNTs) have been grown in situ on a SiO2 substrate and used as gas sensors. For this purpose, the voltage response of the CNTs as a function of time has been used to detect H2 and CO2 at various concentrations by supplying a constant current to the system. The analysis of both adsorption and desorption curves has revealed two different exponential behaviours for each curve. The study of the characteristic times, obtained from the fitting of the data, has allowed us to separately identify chemisorption and physisorption processes on the CNTs.
On the Time Scale of Nocturnal Boundary Layer Cooling in Valleys and Basins and over Plains
NASA Astrophysics Data System (ADS)
de Wekker, Stephan F. J.; Whiteman, C. David
2006-06-01
Sequences of vertical temperature soundings over flat plains and in a variety of valleys and basins of different sizes and shapes were used to determine cooling-time-scale characteristics in the nocturnal stable boundary layer under clear, undisturbed weather conditions. An exponential function predicts the cumulative boundary layer cooling well. The fitting parameter or time constant in the exponential function characterizes the cooling of the valley atmosphere and is equal to the time required for the cumulative cooling to attain 63.2% of its total nighttime value. The exponential fit finds time constants varying between 3 and 8 h. Calculated time constants are smallest in basins, are largest over plains, and are intermediate in valleys. Time constants were also calculated from air temperature measurements made at various heights on the sidewalls of a small basin. The variation with height of the time constant exhibited a characteristic parabolic shape in which the smallest time constants occurred near the basin floor and on the upper sidewalls of the basin where cooling was governed by cold-air drainage and radiative heat loss, respectively.
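The 63.2% figure is simply 1 − 1/e: for cumulative cooling C(t) = C_total·(1 − e^(−t/tau)), the fraction reached at t = tau is 1 − e⁻¹ ≈ 0.632. A sketch of reading the time constant off a cooling series (synthetic data; the function name is illustrative, and the study instead fits the exponential directly):

```python
import numpy as np

def time_constant_from_cooling(t, cumulative_cooling):
    """Estimate the exponential time constant as the time at which cumulative
    cooling first reaches 63.2% (= 1 - 1/e) of its final value."""
    frac = cumulative_cooling / cumulative_cooling[-1]
    return t[np.searchsorted(frac, 1.0 - np.exp(-1.0))]
```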
Phytoplankton productivity in relation to light intensity: A simple equation
Peterson, D.H.; Perry, M.J.; Bencala, K.E.; Talbot, M.C.
1987-01-01
A simple exponential equation is used to describe photosynthetic rate as a function of light intensity for a variety of unicellular algae and higher plants, where photosynthesis is proportional to (1 − e^(−αI)). The parameter α (= 1/Ik) is derived by a simultaneous curve-fitting method, where I is incident quantum-flux density. The exponential equation is tested against a wide range of data and is found to adequately describe P vs. I curves. The errors associated with photosynthetic parameters are calculated. A simplified statistical model (Poisson) of photon capture provides a biophysical basis for the equation and for its ability to fit a range of light intensities. The exponential equation provides a non-subjective simultaneous curve-fitting estimate for photosynthetic efficiency (a) which is less ambiguous than subjective methods: subjective methods assume that a linear region of the P vs. I curve is readily identifiable. Photosynthetic parameters α and a are used widely in aquatic studies to define photosynthesis at low quantum flux. These parameters are particularly important in estuarine environments where high suspended-material concentrations and high diffuse-light extinction coefficients are commonly encountered. © 1987.
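For a saturating curve P = Pmax·(1 − e^(−αI)), the parameter α can be recovered from noiseless data by the linearization ln(1 − P/Pmax) = −αI; the low-light efficiency is then the initial slope a = Pmax·α. A sketch with hypothetical values (the paper itself uses simultaneous curve fitting rather than this simple transform):

```python
import numpy as np

def fit_alpha(I, P, p_max):
    """Recover alpha in P = p_max * (1 - exp(-alpha * I)) from a noiseless
    P vs. I curve via ln(1 - P/p_max) = -alpha * I (least squares through the origin)."""
    y = np.log(1.0 - P / p_max)
    return -(I @ y) / (I @ I)
```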
ERIC Educational Resources Information Center
Wolf, Walter A., Ed.
1976-01-01
Presents three activities: (1) the investigation of the purity and stability of nicotinamide and flavin coenzymes; (2) desk-computer fitting of a two-exponential function; and (3) an interesting and inexpensive solubility product experiment for introductory chemistry. (RH)
NASA Astrophysics Data System (ADS)
Monovasilis, Theodore; Kalogiratou, Zacharoula; Simos, T. E.
2014-10-01
In this work we derive exponentially fitted symplectic Runge-Kutta-Nyström (RKN) methods from symplectic exponentially fitted partitioned Runge-Kutta (PRK) methods (for the approximate solution of general problems of this category see [18] - [40] and references therein). We construct RKN methods from PRK methods with up to five stages and fourth algebraic order.
Universality Classes of Interaction Structures for NK Fitness Landscapes
NASA Astrophysics Data System (ADS)
Hwang, Sungmin; Schmiegelt, Benjamin; Ferretti, Luca; Krug, Joachim
2018-07-01
Kauffman's NK-model is a paradigmatic example of a class of stochastic models of genotypic fitness landscapes that aim to capture generic features of epistatic interactions in multilocus systems. Genotypes are represented as sequences of L binary loci. The fitness assigned to a genotype is a sum of contributions, each of which is a random function defined on a subset of k ≤ L loci. These subsets or neighborhoods determine the genetic interactions of the model. Whereas earlier work on the NK model suggested that most of its properties are robust with regard to the choice of neighborhoods, recent work has revealed an important and sometimes counter-intuitive influence of the interaction structure on the properties of NK fitness landscapes. Here we review these developments and present new results concerning the number of local fitness maxima and the statistics of selectively accessible (that is, fitness-monotonic) mutational pathways. In particular, we develop a unified framework for computing the exponential growth rate of the expected number of local fitness maxima as a function of L, and identify two different universality classes of interaction structures that display different asymptotics of this quantity for large k. Moreover, we show that the probability that the fitness landscape can be traversed along an accessible path decreases exponentially in L for a large class of interaction structures that we characterize as locally bounded. Finally, we discuss the impact of the NK interaction structures on the dynamics of evolution using adaptive walk models.
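The counting of local fitness maxima can be illustrated with a brute-force NK implementation (adjacent, cyclic neighborhoods of k loci including the focal one; L, k, and the seed below are arbitrary). A genotype is a local maximum when no single-locus flip increases fitness; for k = 1 the loci are independent, so exactly one maximum exists:

```python
import itertools
import random

def nk_landscape(L, k, seed=0):
    """Random NK contribution tables with adjacent cyclic neighborhoods:
    locus i contributes a value determined by loci i, i+1, ..., i+k-1 (mod L)."""
    rng = random.Random(seed)
    return [{bits: rng.random() for bits in itertools.product((0, 1), repeat=k)}
            for _ in range(L)]

def fitness(genome, tables, k):
    L = len(genome)
    return sum(tables[i][tuple(genome[(i + j) % L] for j in range(k))]
               for i in range(L)) / L

def count_local_maxima(L, k, seed=0):
    """Brute-force count of genotypes from which no single-locus flip raises fitness."""
    tables = nk_landscape(L, k, seed)
    count = 0
    for genome in itertools.product((0, 1), repeat=L):
        f = fitness(genome, tables, k)
        neighbors = (genome[:i] + (1 - genome[i],) + genome[i + 1:] for i in range(L))
        if all(fitness(nb, tables, k) < f for nb in neighbors):
            count += 1
    return count
```

Brute force is only feasible for small L; the point of the unified framework reviewed here is precisely to get the exponential growth rate of the expected count without enumeration.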
NASA Astrophysics Data System (ADS)
Ozawa, T.; Miyagi, Y.
2017-12-01
Shinmoe-dake, located in SW Japan, erupted in January 2011, and lava accumulated in the crater (e.g., Ozawa and Kozono, EPS, 2013). The last Vulcanian eruption occurred in September 2011, and no eruption has occurred since. Miyagi et al. (GRL, 2014) analyzed TerraSAR-X and Radarsat-2 SAR data acquired after the last eruption and found continuous inflation in the crater. The inflation decayed with time but had not terminated by May 2013. Since the time series of the inflation volume change rate fitted well to an exponential function with a constant term, we suggested that lava extrusion had continued over the long term due to deflation of a shallow magma source and to magma supply from a deeper source. To investigate the subsequent deformation, we applied InSAR to Sentinel-1 and ALOS-2 SAR data. Inflation decayed further and had almost terminated by the end of 2016, meaning that this deformation continued for more than five years after the last eruption. We have found that the time series of the inflation volume change rate fits better to a double-exponential function than to a single-exponential function with a constant term. The exponential component with the short time constant settled almost completely within one year of the last eruption. Although the InSAR result from TerraSAR-X data of November 2011 and May 2013 indicated deflation of a shallow source under the crater, such deformation has not been obtained from recent SAR data. This suggests that this component was due to deflation of a shallow magma source with excess pressure. In this study, we found that the long-term component may also have decayed exponentially; this factor may be deflation of a deep source or delayed vesiculation.
A Fifth-order Symplectic Trigonometrically Fitted Partitioned Runge-Kutta Method
NASA Astrophysics Data System (ADS)
Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.
2007-09-01
Trigonometrically fitted symplectic partitioned Runge-Kutta (SPRK) methods for the numerical integration of Hamiltonian systems with oscillatory solutions are derived. These methods integrate exactly differential systems whose solutions can be expressed as linear combinations of the functions sin(wx) and cos(wx), w ∈ R. We modify a fifth-order symplectic PRK method with six stages so as to derive an exponentially fitted SPRK method. The methods are tested on the numerical integration of the two-body problem.
Plasma Heating in Solar Microflares: Statistics and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirichenko, A. S.; Bogachev, S. A.
2017-05-01
In this paper we present the results of an analysis of 481 weak solar flares, from A0.01 class flares to the B GOES class, that were observed during the period of extremely low solar activity from 2009 April to July. For all flares we measured the temperature of the plasma in the isothermal and two-temperature approximations and tried to fit its relationship with the X-ray class using exponential and power-law functions. We found that the whole temperature distribution in the range from A0.01 to X-class cannot be fit by one exponential function. The fitting for weak flares below A1.0 is significantly steeper than that for medium and large flares. The power-law approximation seems to be more reliable: the corresponding functions were found to be in good agreement with experimental data both for microflares and for normal flares. Our study predicts that evidence of plasma heating can be found in flares starting from the A0.0002 X-ray class. Weaker events presumably cannot heat the surrounding plasma. We also estimated emission measures for all flares studied and the thermal energy for 113 events.
The acquisition of conditioned responding.
Harris, Justin A
2011-04-01
This report analyzes the acquisition of conditioned responses in rats trained in a magazine approach paradigm. Following the suggestion by Gallistel, Fairhurst, and Balsam (2004), Weibull functions were fitted to the trial-by-trial response rates of individual rats. These showed that the emergence of responding was often delayed, after which the response rate would increase relatively gradually across trials. The fit of the Weibull function to the behavioral data of each rat was equaled by that of a cumulative exponential function incorporating a response threshold. Thus, the growth in conditioning strength on each trial can be modeled by the derivative of the exponential--a difference term of the form used in many models of associative learning (e.g., Rescorla & Wagner, 1972). Further analyses, comparing the acquisition of responding with a continuously reinforced stimulus (CRf) and a partially reinforced stimulus (PRf), provided further evidence in support of the difference term. In conclusion, the results are consistent with conventional models that describe learning as the growth of associative strength, incremented on each trial by an error-correction process.
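The difference term referred to here is the Rescorla-Wagner delta rule, dV = beta·(lambda − V), whose trial-by-trial solution V_n = lambda·(1 − (1 − beta)^n) is exactly a cumulative exponential; adding a response threshold reproduces the delayed onset of responding. A sketch with hypothetical parameter values:

```python
def rescorla_wagner(n_trials, beta, lam, threshold):
    """Associative strength under the delta rule dV = beta * (lam - v), with a
    response threshold: responding first appears on the trial where v crosses it."""
    v, history, onset = 0.0, [], None
    for n in range(1, n_trials + 1):
        v += beta * (lam - v)  # closed form: v after trial n is lam * (1 - (1 - beta)**n)
        history.append(v)
        if onset is None and v >= threshold:
            onset = n
    return history, onset
```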
Klein, F.W.; Wright, Tim
2008-01-01
The remarkable catalog of Hawaiian earthquakes going back to the 1820s is based on missionary diaries, newspaper accounts, and instrumental records and spans the great M7.9 Kau earthquake of April 1868 and its aftershock sequence. The earthquake record since 1868 defines a smooth curve complete to M5.2 of the declining rate into the 21st century, after five short volcanic swarms are removed. A single aftershock curve fits the earthquake record, even with numerous M6 and 7 main shocks and eruptions. The timing of some moderate earthquakes may be controlled by magmatic stresses, but their overall long-term rate reflects that of aftershocks of the Kau earthquake. The 1868 earthquake is, therefore, the largest and most controlling stress event in the 19th and 20th centuries. We fit both the modified Omori (power law) and stretched exponential (SE) functions to the earthquakes. We found that the modified Omori law is a good fit to the M ≥ 5.2 earthquake rate for the first 10 years or so and the more rapidly declining SE function fits better thereafter, as supported by three statistical tests. The switch to exponential decay suggests that a possible change in aftershock physics may occur from rate and state fault friction, with no change in the stress rate, to viscoelastic stress relaxation. The 61-year exponential decay constant is at the upper end of the range of geodetic relaxation times seen after other global earthquakes. Modeling deformation in Hawaii is beyond the scope of this paper, but a simple interpretation of the decay suggests an effective viscosity of 10^19 to 10^20 Pa·s pertains in the volcanic spreading of Hawaii's flanks. The rapid decline in earthquake rate poses questions for seismic hazard estimates in an area that is cited as one of the most hazardous in the United States.
NASA Astrophysics Data System (ADS)
Baidillah, Marlin R.; Takei, Masahiro
2017-06-01
A nonlinear normalization model, called the exponential model, for electrical capacitance tomography (ECT) with external electrodes under gap-permittivity conditions has been developed. The exponential normalization is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential curve fitted to simulation results, and a scaling function is added to adjust for the experimental system condition. The exponential model normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e., the Parallel, Series, Maxwell, and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of measured capacitance for both low- and high-contrast dielectric distributions.
Spectral Modeling of the EGRET 3EG Gamma Ray Sources Near the Galactic Plane
NASA Technical Reports Server (NTRS)
Bertsch, D. L.; Hartman, R. C.; Hunter, S. D.; Thompson, D. J.; Lin, Y. C.; Kniffen, D. A.; Kanbach, G.; Mayer-Hasselwander, H. A.; Reimer, O.; Sreekumar, P.
1999-01-01
The third EGRET catalog lists 84 sources within 10 deg of the Galactic Plane. Five of these are well-known spin-powered pulsars, 2 and possibly 3 others are blazars, and the remaining 74 are classified as unidentified, although 6 of these are likely to be artifacts of nearby strong sources. Several of the remaining 68 unidentified sources have been noted as having positional agreement with supernova remnants and OB associations. Others may be radio-quiet pulsars like Geminga, and still others may belong to a totally new class of sources. The energy spectral distributions of these sources are an important clue to their identification. In this paper, the spectra of the sources within 10 deg of the Galactic Plane are fit with three different functional forms: a single power law, two power laws, and a power law with an exponential cutoff. Where possible, the best fit is selected with statistical tests. Twelve sources, and possibly an additional 5, are found to have spectra that are fit by a broken power law or by the power law with an exponential cutoff.
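A power law with an exponential cutoff, dN/dE = N0·E^(-Γ)·exp(-E/Ec), is linear in its parameters once the flux is taken in log space, so a sketch of such a spectral fit needs only ordinary least squares (all values are illustrative, not EGRET data):

```python
import numpy as np

# Synthetic photon spectrum: dN/dE = N0 * E^(-Gamma) * exp(-E/Ec)
E = np.linspace(0.1, 10.0, 50)               # energy, arbitrary units
N0, Gamma, Ec = 1.0e-7, 2.1, 3.0
logF = np.log(N0) - Gamma * np.log(E) - E / Ec

# log F = log N0 - Gamma*log E - (1/Ec)*E is linear in the parameters,
# so one least-squares solve recovers them with no iteration
X = np.column_stack([np.ones_like(E), -np.log(E), -E])
coef, *_ = np.linalg.lstsq(X, logF, rcond=None)
Gamma_fit = coef[1]
Ec_fit = 1.0 / coef[2]
```

In practice the fit would be weighted by the flux uncertainties and compared against the single and broken power-law alternatives with a likelihood-ratio or similar test.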
Imfit: A Fast, Flexible Program for Astronomical Image Fitting
NASA Astrophysics Data System (ADS)
Erwin, Peter
2014-08-01
Imfit is an open-source astronomical image-fitting program specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. Its object-oriented design allows new types of image components (2D surface-brightness functions) to be easily written and added to the program. Image functions provided with Imfit include Sérsic, exponential, and Gaussian galaxy decompositions along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through 3D luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard chi^2 statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or the Cash statistic; the latter is particularly appropriate for cases of Poisson data in the low-count regime. The C++ source code for Imfit is available under the GNU Public License.
Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.
2015-01-01
Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data, including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes a stretched exponential, and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model, and when appropriate a two-component model, will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
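A hedged sketch of the kind of AIC-based model selection used above, comparing a plain exponential against a stretched-exponential decay on synthetic emptying-like data (illustrative values, not the study's models or data; `scipy` assumed):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0.5, 120.0, 60)                  # minutes
# Synthetic curve from a stretched exponential plus small noise
y = np.exp(-(t / 40.0) ** 0.7) + rng.normal(0.0, 0.005, t.size)

def single_exp(t, tau):
    return np.exp(-t / tau)

def stretched(t, tau, beta):
    return np.exp(-(t / tau) ** beta)

def aic(y, yhat, k):
    """Least-squares AIC: n*ln(SSE/n) + 2k, k = number of parameters."""
    n = y.size
    sse = np.sum((y - yhat) ** 2)
    return n * np.log(sse / n) + 2 * k

p1, _ = curve_fit(single_exp, t, y, p0=(40.0,), bounds=(0.0, np.inf))
p2, _ = curve_fit(stretched, t, y, p0=(40.0, 1.0), bounds=(0.0, np.inf))
aic1 = aic(y, single_exp(t, *p1), 1)
aic2 = aic(y, stretched(t, *p2), 2)
```

The AIC penalizes the extra stretching parameter, so the richer model is preferred only when it genuinely improves the fit — the same logic the study applies to its gamma variate extensions.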
Count distribution for mixture of two exponentials as renewal process duration with applications
NASA Astrophysics Data System (ADS)
Low, Yeh Ching; Ong, Seng Huat
2016-06-01
A count distribution is presented by considering a renewal process in which the duration distribution is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
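The overdispersion such a count model targets can be seen directly by simulating a renewal process whose durations are a two-component exponential mixture (illustrative rates; a Poisson process would give a dispersion index near 1):

```python
import numpy as np

rng = np.random.default_rng(1)

def renewal_count(T, p, rate1, rate2, rng):
    """Number of renewals in [0, T] when each duration is drawn from an
    exponential with rate1 (probability p) or rate2 (probability 1-p)."""
    n, elapsed = 0, 0.0
    while True:
        rate = rate1 if rng.random() < p else rate2
        elapsed += rng.exponential(1.0 / rate)
        if elapsed > T:
            return n
        n += 1

counts = np.array([renewal_count(50.0, 0.5, 0.2, 5.0, rng)
                   for _ in range(2000)])
# Variance-to-mean ratio: ~1 for Poisson, >1 here (overdispersion)
dispersion = counts.var() / counts.mean()
```

Asymptotically the dispersion index approaches the squared coefficient of variation of the duration distribution, which exceeds 1 for any nondegenerate exponential mixture.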
NASA Astrophysics Data System (ADS)
Sanford, W. E.
2015-12-01
Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (³H, ³He, CFCs and SF₆) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times.
For both systems the two-parameter Weibull function nearly always produced a substantially better fit to the data than the one-parameter exponential function. For the single porosity system it was found that the use of three parameters was often optimal for accurately describing the base-flow age distribution, whereas for the dual porosity system the fourth parameter was often required to fit the more complicated response curves.
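A sketch of why the extra Weibull parameter helps: the Weibull CDF linearizes on a "Weibull plot", so shape and scale can be recovered by simple regression, and a shape parameter k = 1 reduces to the exponential model (synthetic ages, not the study's data):

```python
import numpy as np

# Synthetic base-flow age CDF drawn from a Weibull distribution
t = np.linspace(0.5, 60.0, 100)            # age, years
k_true, lam_true = 1.6, 20.0
F = 1.0 - np.exp(-(t / lam_true) ** k_true)

# Weibull plot: ln(-ln(1 - F)) = k*ln(t) - k*ln(lambda), a straight line
x = np.log(t)
yw = np.log(-np.log(1.0 - F))
k_fit, intercept = np.polyfit(x, yw, 1)
lam_fit = np.exp(-intercept / k_fit)
```

An exponential age distribution would plot as a straight line of slope 1 in these coordinates, so any fitted slope away from 1 is direct evidence that the one-parameter model is inadequate.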
The Dynamics of Power laws: Fitness and Aging in Preferential Attachment Trees
NASA Astrophysics Data System (ADS)
Garavaglia, Alessandro; van der Hofstad, Remco; Woeginger, Gerhard
2017-09-01
Continuous-time branching processes describe the evolution of a population whose individuals generate a random number of children according to a birth process. Such branching processes can be used to understand preferential attachment models in which the birth rates are linear functions. We are motivated by citation networks, where power-law citation counts are observed as well as aging in the citation patterns. To model this, we introduce fitness and age-dependence in these birth processes. The multiplicative fitness moderates the rate at which children are born, while the aging is integrable, so that individuals receive a finite number of children in their lifetime. We show the existence of a limiting degree distribution for such processes. In the preferential attachment case, where fitness and aging are absent, this limiting degree distribution is known to have power-law tails. We show that the limiting degree distribution has exponential tails for bounded fitnesses in the presence of integrable aging, while the power-law tail is restored when integrable aging is combined with fitness with unbounded support with at most exponential tails. In the absence of integrable aging, such processes are explosive.
Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times
NASA Astrophysics Data System (ADS)
Fa, Kwok Sau
2012-09-01
In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power law and stretched exponential functions for the waiting time probability distribution function; this function can fit well the waiting time distribution for BUND futures traded at LIFFE in 1997.
Comparing Exponential and Exponentiated Models of Drug Demand in Cocaine Users
Strickland, Justin C.; Lile, Joshua A.; Rush, Craig R.; Stoops, William W.
2016-01-01
Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model, but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use), whereas these correlations were less consistent for exponential parameters. Our findings show that the selection of zero replacement values impacts demand parameters and their association with drug-use outcomes when using the exponential model, but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency, in addition to demonstrating construct validity and generalizability. PMID:27929347
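A sketch of fitting one common exponentiated demand form, Q = Q0·10^(k·(exp(-α·Q0·C) - 1)), which stays finite at zero consumption so no zero-replacement is needed; the scaling constant k is treated as fixed, and all values are illustrative (`scipy` assumed):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical purchase-task data generated from the exponentiated model
prices = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
Q0_true, alpha_true, k = 10.0, 0.01, 3.0   # k often fixed from data range

def demand(C, Q0, alpha):
    """Exponentiated demand: consumption Q at price C."""
    return Q0 * 10.0 ** (k * (np.exp(-alpha * Q0 * C) - 1.0))

consumption = demand(prices, Q0_true, alpha_true)

popt, _ = curve_fit(demand, prices, consumption, p0=(8.0, 0.005),
                    bounds=(0.0, np.inf))
Q0_fit, alpha_fit = popt
```

Because consumption itself (not its logarithm) is modeled, observed zeros can simply be retained in the data vector, which is the practical advantage the abstract describes.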
Adaptive optics system performance approximations for atmospheric turbulence correction
NASA Astrophysics Data System (ADS)
Tyson, Robert K.
1990-10-01
Analysis of adaptive optics system behavior often can be reduced to a few approximations and scaling laws. For atmospheric turbulence correction, the deformable mirror (DM) fitting error is most often used to determine a priori the interactuator spacing and the total number of correction zones required. This paper examines the mirror fitting error in terms of its most commonly used exponential form. The explicit constant in the error term depends on the deformable mirror influence function shape and actuator geometry. The method of least-squares fitting of discrete influence functions to the turbulent wavefront is compared to the linear spatial filtering approximation of system performance. It is found that the spatial filtering method overestimates the correctability of the adaptive optics system by a small amount. Evaluating the fitting error for a number of DM configurations, actuator geometries, and influence functions yields fitting-error constants that confirm some earlier investigations.
betaFIT: A computer program to fit pointwise potentials to selected analytic functions
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.; Pashov, Asen
2017-01-01
This paper describes program betaFIT, which performs least-squares fits of sets of one-dimensional (or radial) potential function values to four different types of sophisticated analytic potential energy functional forms. These families of potential energy functions are: the Expanded Morse Oscillator (EMO) potential [J Mol Spectrosc 1999;194:197], the Morse/Long-Range (MLR) potential [Mol Phys 2007;105:663], the Double Exponential/Long-Range (DELR) potential [J Chem Phys 2003;119:7398], and the "Generalized Potential Energy Function (GPEF)" form introduced by Šurkus et al. [Chem Phys Lett 1984;105:291], which includes a wide variety of polynomial potentials, such as the Dunham [Phys Rev 1932;41:713], Simons-Parr-Finlan [J Chem Phys 1973;59:3229], and Ogilvie-Tipping [Proc R Soc A 1991;378:287] polynomials, as special cases. This code will be useful for providing the realistic sets of potential function shape parameters that are required to initiate direct fits of selected analytic potential functions to experimental data, and for providing better analytical representations of sets of ab initio results.
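The simplest member of the EMO family has a constant exponent coefficient, i.e. an ordinary Morse curve V(r) = De·(1 - exp(-β(r - re)))². A hedged sketch of fitting pointwise potential values to that form (synthetic points and illustrative units, not betaFIT itself; `scipy` assumed):

```python
import numpy as np
from scipy.optimize import curve_fit

# Pointwise potential values generated from a Morse curve — the simplest
# EMO case, in which the exponent coefficient beta is a constant
r = np.linspace(1.5, 6.0, 40)                   # internuclear distance
De_true, beta_true, re_true = 4.5, 1.8, 2.4     # well depth, exponent, r_e

def emo0(r, De, beta, re):
    """EMO potential with constant beta (a Morse function)."""
    return De * (1.0 - np.exp(-beta * (r - re))) ** 2

V = emo0(r, De_true, beta_true, re_true)
popt, _ = curve_fit(emo0, r, V, p0=(4.0, 1.5, 2.5))
```

The full program expands β as a polynomial in a reduced radial variable; fits like this one supply the kind of starting shape parameters the abstract says are needed to initiate direct fits to experimental data.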
Systematic errors in transport calculations of shear viscosity using the Green-Kubo formalism
NASA Astrophysics Data System (ADS)
Rose, J. B.; Torres-Rincon, J. M.; Oliinychenko, D.; Schäfer, A.; Petersen, H.
2018-05-01
The purpose of this study is to provide a reproducible framework for the use of the Green-Kubo formalism to extract transport coefficients. More specifically, in the case of shear viscosity, we investigate the limitations and technical details of fitting the auto-correlation function to a decaying exponential. This fitting procedure is found to be applicable for systems interacting through both constant and energy-dependent cross-sections, although in the latter case this is only true for sufficiently dilute systems. We find that the optimal fit technique consists of simultaneously fixing the intercept of the correlation function and using a fitting interval constrained by the relative error on the correlation function. The formalism is then applied to the full hadron gas, for which we obtain the shear viscosity to entropy ratio.
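A minimal version of the recommended procedure — fix the intercept at the measured C(0) and fit the decay time only over an interval where the correlation function is well resolved — on a synthetic single-exponential ACF (illustrative numbers; real transport-simulation ACFs are far noisier):

```python
import numpy as np

# Synthetic shear-stress autocorrelation function with one decay time
t = np.linspace(0.0, 10.0, 501)
C = 0.9 * np.exp(-t / 2.0)                     # "measured" ACF

# Fix the intercept at the measured C(0); fit only the decay time, using
# a window where the ACF is well above its noise floor (crude stand-in
# for the paper's relative-error criterion)
window = C > 0.1 * C[0]
slope = np.polyfit(t[window], np.log(C[window] / C[0]), 1)[0]
tau_fit = -1.0 / slope

# Green-Kubo: the transport coefficient is proportional to the ACF
# integral, which for a pure exponential is just C(0) * tau
integral_fit = C[0] * tau_fit
integral_num = np.sum(0.5 * (C[1:] + C[:-1]) * np.diff(t))
```

Fixing the intercept removes one parameter and keeps the fitted integral consistent with the directly measured C(0), which is the stability the study argues for.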
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = A·e^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve-fit algorithm. This method can also be applied to fitting data to the general power-law equation y = A·x^B + C and the general geometric growth equation y = A·k^(Bt) + C.
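One discrete-calculus route to a non-iterative fit of y = A·e^(Bt) + C (a sketch of the general idea; the NASA report's exact formulation may differ): since dy/dt = B(y - C), integrating gives y(t) - y(t0) = B·S(t) - B·C·(t - t0), where S is the running integral of y. That relation is linear in the unknowns, so two linear solves replace any iterative search:

```python
import numpy as np

t = np.linspace(0.0, 2.0, 201)
y = 2.0 * np.exp(0.5 * t) + 1.0               # A=2, B=0.5, C=1

# Cumulative trapezoidal integral S(t) of the data
S = np.concatenate([[0.0],
                    np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))])

# Regress (y - y0) on [S, t - t0]: coefficients are B and -B*C
X = np.column_stack([S, t - t[0]])
(c1, c2), *_ = np.linalg.lstsq(X, y - y[0], rcond=None)
B_fit = c1
C_fit = -c2 / c1

# With B and C known, A follows from one more linear fit
A_fit = np.linalg.lstsq(np.exp(B_fit * t)[:, None], y - C_fit,
                        rcond=None)[0][0]
```

Because no initial guess is required, the method cannot diverge the way iterative exponential fits can, which is the stability advantage the abstract highlights.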
Rapid Global Fitting of Large Fluorescence Lifetime Imaging Microscopy Datasets
Warren, Sean C.; Margineanu, Anca; Alibhai, Dominic; Kelly, Douglas J.; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Katan, Matilda
2013-01-01
Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. 
We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell homo-FRET data. A software package implementing this algorithm, FLIMfit, is available under an open source licence through the Open Microscopy Environment. PMID:23940626
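The key idea of variable projection — for fixed lifetimes the amplitudes enter linearly and can be eliminated in closed form — can be sketched with a toy grid search over a bi-exponential decay (FLIMfit itself uses a proper nonlinear optimizer over whole image datasets; this shows only the separation step):

```python
import numpy as np

# Bi-exponential decay with known "true" lifetimes (arbitrary time units)
t = np.linspace(0.0, 10.0, 200)
y = 0.7 * np.exp(-t / 0.5) + 0.3 * np.exp(-t / 3.0)

best = (np.inf, None, None, None)
for tau1 in np.linspace(0.2, 1.0, 17):
    for tau2 in np.linspace(2.0, 4.0, 17):
        # For fixed lifetimes the amplitudes are a linear least-squares
        # problem; solve it in closed form and score the residual
        X = np.column_stack([np.exp(-t / tau1), np.exp(-t / tau2)])
        amps, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((y - X @ amps) ** 2))
        if sse < best[0]:
            best = (sse, tau1, tau2, amps)

sse, tau1_fit, tau2_fit, amps_fit = best
```

In global analysis the same elimination pays off enormously: the shared lifetimes are the only nonlinear unknowns, while every pixel's amplitudes are recovered by cheap linear solves.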
From Experiment to Theory: What Can We Learn from Growth Curves?
Kareva, Irina; Karev, Georgy
2018-01-01
Finding an appropriate functional form to describe population growth based on key properties of a described system allows making justified predictions about future population development. This information can be of vital importance in all areas of research, ranging from cell growth to global demography. Here, we use this connection between theory and observation to pose the following question: what can we infer about the intrinsic properties of a population (i.e., its degree of heterogeneity, or its dependence on external resources) from which growth function best fits its growth dynamics? We investigate several nonstandard classes of multi-phase growth curves that capture different stages of population growth; these models include hyperbolic-exponential, exponential-linear, and exponential-linear-saturation growth patterns. The constructed models account explicitly for the process of natural selection within inhomogeneous populations. Based on the underlying hypotheses for each of the models, we identify whether a population best fit by a particular curve is more likely to be homogeneous or heterogeneous, to grow in a density-dependent or frequency-dependent manner, and whether it depends on external resources during any or all stages of its development. We apply these predictions to cancer cell growth and demographic data obtained from the literature. Our theory, if confirmed, can provide an additional biomarker and a predictive tool to complement experimental research.
IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erwin, Peter; Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München
2015-02-01
I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werhahn, Jasper C.; Akase, Dai; Xantheas, Sotiris S.
2014-08-14
The scaled versions of the newly introduced [S. S. Xantheas and J. C. Werhahn, J. Chem. Phys. 141, 064117 (2014)] generalized forms of some popular potential energy functions (PEFs) describing intermolecular interactions – Mie, Lennard-Jones, Morse, and Buckingham exponential-6 – have been used to fit the ab initio relaxed approach paths and fixed approach paths for the halide-water, X⁻(H₂O), X = F, Cl, Br, I, and alkali metal-water, M⁺(H₂O), M = Li, Na, K, Rb, Cs, interactions. The generalized forms of those PEFs have an additional parameter with respect to the original forms and produce fits to the ab initio data that are between one and two orders of magnitude better in the χ² than the original PEFs. They were found to describe the long-range, minimum and repulsive-wall regions of the respective potential energy surfaces quite accurately. Overall, the 4-parameter extended Morse (eM) and generalized Buckingham exponential-6 (gBe-6) potentials were found to best fit the ab initio data for these two classes of ion-water interactions. Finally, the fitted values of the parameter of the eM and gBe-6 PEFs that controls the repulsive wall of the potential correlate remarkably well with the ionic radii of the halide and alkali metal ions.
Forgetting Curves: Implications for Connectionist Models
ERIC Educational Resources Information Center
Sikstrom, Sverker
2002-01-01
Forgetting in long-term memory, as measured in a recall or a recognition test, is faster for items encoded more recently than for items encoded earlier. Data on forgetting curves fit a power function well. In contrast, many connectionist models predict either exponential decay or completely flat forgetting curves. This paper suggests a…
NASA Astrophysics Data System (ADS)
Vogelsang, R.; Hoheisel, C.
1987-02-01
Molecular-dynamics (MD) calculations are reported for three thermodynamic states of a Lennard-Jones fluid. Systems of 2048 particles and 10⁵ integration steps were used. The transverse current autocorrelation function, Ct(k,t), has been determined for wave vectors of the range 0.5 < |k|σ < 1.5. Ct(k,t) was fitted by hydrodynamic-type functions. The fits returned k-dependent decay times and shear viscosities which showed a systematic behavior as a function of k. Extrapolation to the hydrodynamic region at k=0 gave shear viscosity coefficients in good agreement with direct Green-Kubo results obtained in previous work. The two-exponential model fit for the memory function proposed by other authors does not provide a reasonable description of the MD results, as the fit parameters show no systematic wave-vector dependence, although the Ct(k,t) functions are somewhat better fitted. Similarly, the semiempirical interpolation formula for the decay time based on the viscoelastic concept proposed by Akcasu and Daniels fails to reproduce the correct k dependence for the wavelength range investigated herein.
Semenov, Mikhail A; Terkel, Dmitri A
2003-01-01
This paper analyses the convergence of evolutionary algorithms using a technique which is based on a stochastic Lyapunov function and developed within the martingale theory. This technique is used to investigate the convergence of a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function; and control parameters, responsible for the variation of fitness parameters. Although both parameters mutate randomly and independently, they converge to the "optimum" due to the direct (for fitness parameters) and indirect (for control parameters) selection. We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, similar to the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some martingale inequalities have not been proved analytically, they have been validated numerically with 0.999 confidence using Monte-Carlo simulations.
NASA Technical Reports Server (NTRS)
Welker, Jean Edward
1991-01-01
Since the invention of maximum and minimum thermometers in the 18th century, diurnal temperature extrema have been taken for air worldwide. At some stations, these extrema temperatures were collected at various soil depths also, and the behavior of these temperatures at a 10-cm depth at the Tifton Experimental Station in Georgia is presented. After a precipitation cooling event, the diurnal temperature maxima drop to a minimum value and then start a recovery to higher values (similar to thermal inertia). This recovery represents a measure of response to heating as a function of soil moisture and soil properties. Eight different curves were fitted to a wide variety of data sets for different stations and years. Both power and exponential curve fits were consistently found to be statistically accurate least-squares representations of the raw data recovery values. The predictive procedures used here were multivariate regression analyses, which are applicable to soils at a variety of depths besides the 10-cm depth presented.
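Both candidate forms linearize under a log transform, so the power and exponential least-squares fits described above can be sketched with plain linear regression (synthetic recovery values, not the Tifton data):

```python
import numpy as np

# Synthetic post-rain recovery of the diurnal temperature maximum
d = np.arange(1, 15, dtype=float)          # days since the cooling event
recovery = 3.0 * d ** 0.4                  # illustrative power-law recovery

# Power fit y = a*d^b is linear in log-log space;
# exponential fit y = a*exp(b*d) is linear in semi-log space
b_pow, a_pow = np.polyfit(np.log(d), np.log(recovery), 1)
b_exp, a_exp = np.polyfit(d, np.log(recovery), 1)

# Compare the two fits by residual sum of squares in the original units
rss_pow = np.sum((recovery - np.exp(a_pow) * d ** b_pow) ** 2)
rss_exp = np.sum((recovery - np.exp(a_exp) * np.exp(b_exp * d)) ** 2)
```

Note that least squares in log space weights small values more heavily than least squares in the original units, which is one reason both forms can look "statistically accurate" on slowly varying recovery data.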
Boatwright, J.; Bundock, H.; Luetgert, J.; Seekins, L.; Gee, L.; Lombard, P.
2003-01-01
We analyze peak ground velocity (PGV) and peak ground acceleration (PGA) data from 95 moderate (3.5 ≤ M < 5.5) earthquakes. At distances greater than 100 km, the peak motions attenuate more rapidly than a simple power law (that is, r^(-γ)) can fit. Instead, we use an attenuation function that combines a fixed power law (r^(-0.7)) with a fitted exponential dependence on distance, which is estimated as exp(-0.0063r) and exp(-0.0073r) for PGV and PGA, respectively, for moderate earthquakes. We regress log(PGV) and log(PGA) as functions of distance and magnitude. We assume that the scaling of log(PGV) and log(PGA) with magnitude can differ for moderate and large earthquakes, but must be continuous. Because the frequencies that carry PGV and PGA can vary with earthquake size for large earthquakes, the regression for large earthquakes incorporates a magnitude dependence in the exponential attenuation function. We fix the scaling break between moderate and large earthquakes at M 5.5; log(PGV) and log(PGA) scale as 1.06M and 1.00M, respectively, for moderate earthquakes and 0.58M and 0.31M for large earthquakes.
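Because the power-law exponent is fixed at -0.7, the regression described above is linear in the remaining parameters; a sketch on synthetic moderate-earthquake data (all coefficients illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic PGA following the attenuation form
#   PGA ~ 10^(a + b*M) * r^(-0.7) * exp(-c*r)
n = 400
M = rng.uniform(3.5, 5.4, n)                  # moderate magnitudes
r = rng.uniform(10.0, 200.0, n)               # hypocentral distance, km
a_true, b_true, c_true = -1.5, 1.0, 0.0073
logPGA = (a_true + b_true * M - 0.7 * np.log10(r)
          - c_true * r / np.log(10.0))

# Fold the fixed r^(-0.7) term into the target; the model is then linear
# in (a, b, c) and solvable by ordinary least squares
target = logPGA + 0.7 * np.log10(r)
X = np.column_stack([np.ones(n), M, -r / np.log(10.0)])
(a_fit, b_fit, c_fit), *_ = np.linalg.lstsq(X, target, rcond=None)
```

The fitted c plays the role of the exponential attenuation coefficient (0.0073 for PGA in the study), and b the magnitude scaling for moderate events.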
An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle
NASA Astrophysics Data System (ADS)
Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei
2016-08-01
We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for this distribution form and discussed how the constraints affect the distribution function. It is speculated that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, the distribution cannot be a pure power law but must include an exponential cutoff, which may have been ignored in previous studies.
NASA Astrophysics Data System (ADS)
Hollett, Joshua W.; Pegoretti, Nicholas
2018-04-01
Separate, one-parameter, on-top density functionals are derived for the short-range dynamic correlation between opposite and parallel-spin electrons, in which the electron-electron cusp is represented by an exponential function. The combination of both functionals is referred to as the Opposite-spin exponential-cusp and Fermi-hole correction (OF) functional. The two parameters of the OF functional are set by fitting the ionization energies and electron affinities, of the atoms He to Ar, predicted by ROHF in combination with the OF functional to the experimental values. For ionization energies, the overall performance of ROHF-OF is better than completely renormalized coupled-cluster [CR-CC(2,3)] and better than, or as good as, conventional density functional methods. For electron affinities, the overall performance of ROHF-OF is less impressive. However, for both ionization energies and electron affinities of third row atoms, the mean absolute error of ROHF-OF is only 3 kJ mol⁻¹.
TH-EF-207A-04: A Dynamic Contrast Enhanced Cone Beam CT Technique for Evaluation of Renal Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z; Shi, J; Yang, Y
Purpose: To develop a simple but robust method for the early detection and evaluation of renal function using a dynamic contrast enhanced cone beam CT technique. Methods: Experiments were performed on an integrated imaging and radiation research platform developed by our lab. Animals (n=3) were anesthetized with 20 µL Ketamine/Xylazine cocktail, and then received a 200 µL injection of the iodinated contrast agent Iopamidol via tail vein. Cone beam CT was acquired following contrast injection once per minute for up to 25 minutes. The cone beam CT was reconstructed with a dimension of 300×300×800 voxels at 130×130×130 µm voxel resolution. The middle kidney slices in the transverse and coronal planes were selected for image analysis. A double exponential function was used to fit the contrast enhanced signal intensity versus the time after contrast injection. Both pixel-based and region of interest (ROI)-based curve fitting were performed. Four parameters obtained from the curve fitting, namely the amplitude and flow constant for both the contrast wash-in and wash-out phases, were investigated for further analysis. Results: Robust curve fitting was demonstrated for both pixel-based (with R²>0.8 for >85% of pixels within the kidney contour) and ROI-based (R²>0.9 for all regions) analysis. Three different functional regions, renal pelvis, medulla and cortex, were clearly differentiated in the functional parameter map in the pixel-based analysis. ROI-based analysis showed the half-lives T1/2 for the contrast wash-in and wash-out phases were 0.98±0.15 and 17.04±7.16, 0.63±0.07 and 17.88±4.51, and 1.48±0.40 and 10.79±3.88 minutes for the renal pelvis, medulla and cortex, respectively. Conclusion: A robust method based on dynamic contrast enhanced cone beam CT and double exponential curve fitting has been developed to analyze renal function in different functional regions.
Future study will be performed to investigate the sensitivity of this technique in the detection of radiation-induced kidney dysfunction.
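The abstract reports the fitted quantities (amplitudes and flow constants for the wash-in and wash-out phases) but not the exact model. Below is a minimal sketch, assuming a difference-of-exponentials enhancement curve and SciPy's `curve_fit`; the model form, parameter names, and values are our assumptions, not the authors'.

```python
import numpy as np
from scipy.optimize import curve_fit

def bateman(t, A, k_in, k_out):
    # Assumed difference-of-exponentials enhancement curve:
    # rises with rate k_in, washes out with rate k_out.
    return A * (np.exp(-k_out * t) - np.exp(-k_in * t))

rng = np.random.default_rng(0)
t = np.arange(1.0, 26.0)                 # one CBCT frame per minute, 25 minutes
truth = (100.0, 0.7, 0.04)               # A, k_in (1/min), k_out (1/min)
signal = bateman(t, *truth) + rng.normal(0.0, 0.5, t.size)

popt, _ = curve_fit(bateman, t, signal, p0=(50.0, 1.0, 0.1))
t_half_washout = np.log(2) / popt[2]     # wash-out half-life, minutes
```

The wash-out half-life is then ln 2 divided by the fitted wash-out rate, mirroring the T1/2 values reported above.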
Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.
2013-01-01
Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. 
The effect of population size is particularly strong in small populations with 100 individuals or fewer; the exponential model has significant potential to aid population management in preventing local (and global) extinction events. PMID:24386200
Central Pb+Pb collisions at 158 A GeV/c studied by $\pi^-\pi^-$ interferometry
Aggarwal et al., M. M.
2000-05-18
Two-particle correlations have been measured for identified $\pi^-$ from central 158 A GeV Pb+Pb collisions, and fitted radii of about 7 fm in all dimensions have been obtained. A multi-dimensional study of the radii as a function of $k_T$ is presented, including a full correction for the resolution effects of the apparatus. The cross term $R^2_{\mathrm{out\mbox{-}long}}$ of the standard fit in the Longitudinally CoMoving System (LCMS) and the $v_L$ parameter of the generalised Yano-Koonin fit are compatible with 0, suggesting that the source undergoes a boost-invariant expansion. The shapes of the correlation functions in $Q_{\mathrm{inv}}$ and $Q_{\mathrm{space}} = \sqrt{Q_x^2 + Q_y^2 + Q_z^2}$ have been analyzed in detail. They are not Gaussian but are better represented by exponentials. As a consequence, fitting Gaussians to these correlation functions may produce different radii depending on the acceptance of the experimental setup used for the measurement.
Marchand, A J; Hitti, E; Monge, F; Saint-Jalmes, H; Guillin, R; Duvauferrier, R; Gambarota, G
2014-11-01
To assess the feasibility of measuring diffusion and perfusion fraction in vertebral bone marrow using the intravoxel incoherent motion (IVIM) approach and to compare two fitting methods, i.e., the non-negative least squares (NNLS) algorithm and the more commonly used Levenberg-Marquardt (LM) non-linear least squares algorithm, for the analysis of IVIM data. MRI experiments were performed on fifteen healthy volunteers, with a diffusion-weighted echo-planar imaging (EPI) sequence at five different b-values (0, 50, 100, 200, 600 s/mm²), in combination with an STIR module to suppress the lipid signal. Diffusion signal decays in the first lumbar vertebra (L1) were fitted to a bi-exponential function using the LM algorithm and further analyzed with the NNLS algorithm to calculate the values of the apparent diffusion coefficient (ADC), pseudo-diffusion coefficient (D*) and perfusion fraction. The NNLS analysis revealed two diffusion components only in seven out of fifteen volunteers, with ADC=0.60±0.09 (10⁻³ mm²/s), D*=28±9 (10⁻³ mm²/s) and perfusion fraction=14%±6%. The values obtained by the LM bi-exponential fit were: ADC=0.45±0.27 (10⁻³ mm²/s), D*=63±145 (10⁻³ mm²/s) and perfusion fraction=27%±17%. Furthermore, the LM algorithm yielded values of perfusion fraction in cases where the decay was not bi-exponential, as assessed by NNLS analysis. The IVIM approach allows for measuring diffusion and perfusion fraction in vertebral bone marrow; its reliability can be improved by using the NNLS, which identifies the diffusion decays that display a bi-exponential behavior. Copyright © 2014 Elsevier Inc. All rights reserved.
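A sketch of the NNLS idea applied to an IVIM-like bi-exponential decay (not the authors' exact pipeline; the grid, threshold, and parameter values are illustrative): expand the signal on a fixed grid of candidate diffusion coefficients and let non-negative least squares select the active components.

```python
import numpy as np
from scipy.optimize import nnls

b = np.array([0.0, 50.0, 100.0, 200.0, 600.0])      # s/mm^2, as in the study
ADC, Dstar, f = 0.60e-3, 28e-3, 0.14                # mm^2/s and perfusion fraction
signal = f * np.exp(-b * Dstar) + (1.0 - f) * np.exp(-b * ADC)

# Dictionary of candidate decays, one column per diffusion coefficient.
D_grid = np.logspace(-4, -1, 60)                    # mm^2/s, log-spaced grid
A = np.exp(-np.outer(b, D_grid))
amps, residual = nnls(A, signal)

# Grid points with appreciable amplitude mark the resolved components.
components = D_grid[amps > 1e-3]
```

A decay that is genuinely bi-exponential shows up as two well-separated clusters of non-zero amplitudes, which is how NNLS flags the cases suitable for perfusion-fraction estimation.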
Interpreting the Weibull fitting parameters for diffusion-controlled release data
NASA Astrophysics Data System (ADS)
Ignacio, Maxime; Chubynsky, Mykyta V.; Slater, Gary W.
2017-11-01
We examine the diffusion-controlled release of molecules from passive delivery systems using both analytical solutions of the diffusion equation and numerically exact Lattice Monte Carlo data. For very short times, the release process follows a √t power law, typical of diffusion processes, while the long-time asymptotic behavior is exponential. The crossover time between these two regimes is determined by the boundary conditions and initial loading of the system. We show that while the widely used Weibull function provides a reasonable fit (in terms of statistical error), it has two major drawbacks: (i) it does not capture the correct limits and (ii) there is no direct connection between the fitting parameters and the properties of the system. Using a physically motivated interpolating fitting function that correctly includes both time regimes, we are able to predict the values of the Weibull parameters, which allows us to propose a physical interpretation.
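The two limits can be illustrated with the textbook series solution for release from a slab, which has exactly the √t short-time and exponential long-time behavior; a two-parameter Weibull fit tracks it closely in RMS terms even though its limiting forms differ. The Weibull parameterization below is a common one, not necessarily the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def slab_release(t, tau, n_terms=50):
    # Series solution for diffusion-controlled release from a slab;
    # tau sets the slowest (long-time, exponential) relaxation mode.
    n = np.arange(n_terms)[:, None]
    series = np.exp(-((2 * n + 1) ** 2) * t / tau) / (2 * n + 1) ** 2
    return 1.0 - (8.0 / np.pi**2) * series.sum(axis=0)

def weibull(t, tau_w, beta):
    # Common Weibull parameterization of fractional release.
    return 1.0 - np.exp(-((t / tau_w) ** beta))

t = np.linspace(1e-3, 5.0, 400)
data = slab_release(t, tau=1.0)
(tau_w, beta), _ = curve_fit(weibull, t, data, p0=(1.0, 1.0), bounds=(1e-6, 10.0))
rms = np.sqrt(np.mean((weibull(t, tau_w, beta) - data) ** 2))
```

The small RMS error illustrates the "reasonable fit" noted above, while the fitted shape parameter β falls between the √t regime (β ≈ 0.5) and the exponential regime (β = 1), reflecting that the Weibull form interpolates without capturing either limit exactly.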
Statistical mechanics of money and income
NASA Astrophysics Data System (ADS)
Dragulescu, Adrian; Yakovenko, Victor
2001-03-01
Money: In a closed economic system, money is conserved. Thus, by analogy with energy, the equilibrium probability distribution of money will assume the exponential Boltzmann-Gibbs form characterized by an effective temperature. We demonstrate how the Boltzmann-Gibbs distribution emerges in computer simulations of economic models. We discuss thermal machines, the role of debt, and models with broken time-reversal symmetry for which the Boltzmann-Gibbs law does not hold. Reference: A. Dragulescu and V. M. Yakovenko, "Statistical mechanics of money", Eur. Phys. J. B 17, 723-729 (2000), [cond-mat/0001432]. Income: Using tax and census data, we demonstrate that the distribution of individual income in the United States is exponential. Our calculated Lorenz curve without fitting parameters and Gini coefficient 1/2 agree well with the data. We derive the distribution function of income for families with two earners and show that it also agrees well with the data. The family data for the period 1947-1994 fit the Lorenz curve and Gini coefficient 3/8=0.375 calculated for two-earner families. Reference: A. Dragulescu and V. M. Yakovenko, "Evidence for the exponential distribution of income in the USA", cond-mat/0008305.
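A minimal sketch of the type of exchange simulation described, assuming a random "pool and split" trade rule (one of several money-conserving rules that relax to the Boltzmann-Gibbs form):

```python
import numpy as np

rng = np.random.default_rng(1)
N, steps = 5000, 200_000          # agents and pairwise trades
money = np.ones(N)                # everyone starts with 1 unit; total is conserved

for _ in range(steps):
    i, j = rng.integers(0, N, size=2)
    if i == j:
        continue
    pot = money[i] + money[j]     # the pair pools its money...
    cut = rng.random() * pot      # ...and splits the pot at a random point
    money[i], money[j] = cut, pot - cut

# Entropy-maximizing equilibrium: P(m) ~ exp(-m/T), with "temperature"
# T equal to the (conserved) mean money per agent.
T = money.mean()
```

After equilibration the fraction of agents below the mean approaches 1 − e⁻¹ ≈ 0.63, the value expected for an exponential distribution.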
Hosseinzadeh, M; Ghoreishi, M; Narooei, K
2016-06-01
In this study, the hyperelastic models of demineralized and deproteinized bovine cortical femur bone were investigated and appropriate models were developed. Using uniaxial compression test data, the strain energy versus stretch was calculated and appropriate hyperelastic strain energy functions were fitted to the data in order to calculate the material parameters. To obtain the mechanical behavior in other loading conditions, the hyperelastic strain energy equations were investigated for pure shear and equi-biaxial tension loadings. The results showed the Mooney-Rivlin and Ogden models cannot predict the mechanical response of demineralized and deproteinized bovine cortical femur bone accurately, while the general exponential-exponential and general exponential-power law models have good agreement with the experimental results. To investigate the sensitivity of the hyperelastic models, a variation of 10% in material parameters was performed and the results indicated acceptable stability for the general exponential-exponential and general exponential-power law models. Finally, the uniaxial tension and compression of cortical femur bone were studied using the finite element method in a VUMAT user subroutine of the ABAQUS software, and the computed stress-stretch curves showed good agreement with the experimental data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Alcalá-Quintana, Rocío; García-Pérez, Miguel A
2013-12-01
Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
Silicon Drift Detector response function for PIXE spectra fitting
NASA Astrophysics Data System (ADS)
Calzolai, G.; Tapinassi, S.; Chiari, M.; Giannoni, M.; Nava, S.; Pazzi, G.; Lucarelli, F.
2018-02-01
The correct determination of the X-ray peak areas in PIXE spectra by fitting with a computer program depends crucially on accurate parameterization of the detector peak response function. In the Guelph PIXE software package GUPIXWin, one of the most widely used PIXE spectrum-analysis codes, the response of a semiconductor detector to monochromatic X-ray radiation is described by a linear combination of several analytical functions: a Gaussian profile for the X-ray line itself, and additional tail contributions (exponential tails and step functions) on the low-energy side of the X-ray line to describe incomplete charge collection effects. The literature on the spectral response of silicon X-ray detectors for PIXE applications is rather scarce; in particular, data for Silicon Drift Detectors (SDD) and for a large range of X-ray energies are missing. Using a set of analytical functions, the SDD response functions were satisfactorily reproduced for the X-ray energy range 1-15 keV. The behaviour of the parameters involved in the SDD tailing functions with X-ray energy is described by simple polynomial functions, which permit an easy implementation in PIXE spectra fitting codes.
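A hedged sketch of the line-shape model described (Gaussian plus low-energy exponential tail plus step). This is a HYPERMET-style parameterization; the function names, parameter names, and values below are ours, for illustration only.

```python
import numpy as np
from scipy.special import erfc

def sdd_response(E, E0, sigma, A, tail_frac, beta, step_frac):
    # Monochromatic line shape: Gaussian peak plus a low-energy exponential
    # tail and a step, modelling incomplete charge collection.
    x = (E - E0) / sigma
    gauss = A * np.exp(-0.5 * x**2)
    tail = A * tail_frac * np.exp((E - E0) / beta) \
        * erfc(x / np.sqrt(2.0) + sigma / (np.sqrt(2.0) * beta))
    step = A * step_frac * 0.5 * erfc(x / np.sqrt(2.0))
    return gauss + tail + step

E = np.linspace(7.0, 9.0, 2001)              # keV, around a line at 8 keV
spectrum = sdd_response(E, 8.0, 0.06, 1.0, 0.02, 0.3, 0.005)
```

The erfc factors smoothly switch the tail and step on below the peak energy, so both contributions appear only on the low-energy side, as described above.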
A mathematical definition of the financial bubbles and crashes
NASA Astrophysics Data System (ADS)
Watanabe, Kota; Takayasu, Hideki; Takayasu, Misako
2007-09-01
We check the validity of the mathematical method of detecting financial bubbles or crashes, which is based on fitting the data with an exponential function. We show that the period of a bubble can be determined nearly uniquely, independent of the precision of the data. The method is widely applicable to stock market data such as the Internet bubble.
Spatial correlations and exact solution of the problem of the boson peak profile in amorphous media
NASA Astrophysics Data System (ADS)
Kirillov, Sviatoslav A.; Voyiatzis, George A.; Kolomiyets, Tatiana M.; Anastasiadis, Spiros H.
1999-11-01
Based on a model correlation function which covers spatial correlations from Gaussian to exponential, we have arrived at an exact analytic solution of the problem of the boson peak profile in amorphous media. Test fits for polyisoprene and triacetin demonstrate the practical applicability of the formulae obtained.
Quantifying Uncertainties in N2O Emission Due to N Fertilizer Application in Cultivated Areas
Philibert, Aurore; Loyce, Chantal; Makowski, David
2012-01-01
Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty on this estimated value, by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable “applied N”, (ii) the function relating N2O emission to applied N (exponential or linear function), (iii) fixed or random background (i.e. in the absence of N application) N2O emission and (iv) fixed or random applied N effect. We calculated ranges of uncertainty on N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha−1. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced. PMID:23226430
Using neural networks to represent potential surfaces as sums of products.
Manzhos, Sergei; Carrington, Tucker
2006-11-21
By using exponential activation functions with a neural network (NN) method we show that it is possible to fit potentials to a sum-of-products form. The sum-of-products form is desirable because it reduces the cost of doing the quadratures required for quantum dynamics calculations. It also greatly facilitates the use of the multiconfiguration time dependent Hartree method. Unlike the potfit product-representation algorithm, the new NN approach does not require using a grid of points. It also produces sum-of-products potentials with fewer terms. As the number of dimensions is increased, we expect the advantages of the exponential NN idea to become more significant.
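The reason an exponential activation yields a sum-of-products form can be shown in a few lines (illustrative sketch):

```python
import numpy as np

# Key identity behind the sum-of-products form: with an exponential
# activation, each hidden unit factorizes over the input coordinates,
#   exp(w . x + b) = e^b * prod_i exp(w_i * x_i),
# so a one-hidden-layer network  sum_k c_k exp(w_k . x + b_k)  is
# automatically a sum of products of one-dimensional functions.
rng = np.random.default_rng(3)
w, b = rng.normal(size=3), 0.7
x = rng.normal(size=3)

unit_output = np.exp(w @ x + b)                      # the NN hidden unit
product_form = np.exp(b) * np.prod(np.exp(w * x))    # same value, factorized
```

Each factor exp(w_i x_i) depends on a single coordinate, which is exactly what quadrature-based quantum dynamics methods need.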
Makrinich, Maria; Gupta, Rupal; Polenova, Tatyana; Goldbourt, Amir
The ability of various pulse types, which are commonly applied for distance measurements, to saturate or invert quadrupolar spin polarization has been compared by observing their effect on magnetization recovery curves under magic-angle spinning. A selective central-transition inversion pulse yields a bi-exponential recovery for a diamagnetic sample with spin-3/2, consistent with the existence of two processes: the fluctuations of the electric field gradients with identical single- (W₁) and double-quantum (W₂) quadrupolar-driven relaxation rates, and spin exchange between the central transition of one spin and satellite transitions of a dipolar-coupled similar spin. Using a phase modulated pulse, developed for distance measurements in quadrupolar spins (Nimerovsky et al., JMR 244, 2014, 107-113) and suggested for achieving the complete saturation of all quadrupolar spin energy levels, a mono-exponential relaxation model fits the data, compatible with elimination of the spin exchange processes. Other pulses, such as an adiabatic pulse lasting one-third of a rotor period and a two-rotor-period long continuous-wave pulse, both used for distance measurements under special experimental conditions, yield good fits to bi-exponential functions with varying coefficients and time constants due to variations in initial conditions. Those values are a measure of the extent of saturation obtained from these pulses. An empirical fit of the recovery curves to a stretched exponential function can provide general recovery times. A stretching parameter very close to unity, as obtained for a phase modulated pulse but not for other cases, suggests that in this case recovery times and longitudinal relaxation times are similar. The results are experimentally demonstrated for compounds containing ¹¹B (spin-3/2) and ⁵¹V (spin-7/2).
We propose that accurate spin-lattice relaxation rates can be measured by a short phase modulated pulse (<1–2 ms), similarly to the "true T₁" measured by saturation with an asynchronous pulse train (Yesinowski, JMR 252, 2015, 135-144). Copyright © 2017 Elsevier Inc. All rights reserved.
A modified exponential behavioral economic demand model to better describe consumption data.
Koffarnus, Mikhail N; Franck, Christopher T; Stein, Jeffrey S; Bickel, Warren K
2015-12-01
Behavioral economic demand analyses that quantify the relationship between the consumption of a commodity and its price have proven useful in studying the reinforcing efficacy of many commodities, including drugs of abuse. An exponential equation proposed by Hursh and Silberberg (2008) has proven useful in quantifying the dissociable components of demand intensity and demand elasticity, but is limited as an analysis technique by the inability to correctly analyze consumption values of zero. We examined an exponentiated version of this equation that retains all the beneficial features of the original Hursh and Silberberg equation, but can accommodate consumption values of zero and improves its fit to the data. In Experiment 1, we compared the modified equation with the unmodified equation under different treatments of zero values in cigarette consumption data collected online from 272 participants. We found that the unmodified equation produces different results depending on how zeros are treated, while the exponentiated version incorporates zeros into the analysis, accounts for more variance, and is better able to estimate actual unconstrained consumption as reported by participants. In Experiment 2, we simulated 1,000 datasets with demand parameters known a priori and compared the equation fits. Results indicated that the exponentiated equation was better able to replicate the true values from which the test data were simulated. We conclude that an exponentiated version of the Hursh and Silberberg equation provides better fits to the data, is able to fit all consumption values including zero, and more accurately produces true parameter values. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
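A sketch of the exponentiated form as we read it, Q = Q0·10^(k·(exp(−α·Q0·C) − 1)), with k fixed for illustration; unlike fitting log10(Q), this parameterization accepts consumption values of zero directly.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponentiated_demand(C, Q0, alpha, k=3.0):
    # Exponentiated demand curve (our reading of Koffarnus et al., 2015):
    #   Q = Q0 * 10**(k * (exp(-alpha * Q0 * C) - 1)).
    # Q0 is demand intensity, alpha governs elasticity, k is fixed here.
    return Q0 * 10.0 ** (k * (np.exp(-alpha * Q0 * C) - 1.0))

price = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
rng = np.random.default_rng(2)
consumption = np.clip(exponentiated_demand(price, 20.0, 0.01)
                      + rng.normal(0.0, 0.3, price.size), 0.0, None)

# Zeros in `consumption` need no special treatment in this parameterization.
popt, _ = curve_fit(exponentiated_demand, price, consumption,
                    p0=(10.0, 0.005), bounds=(0.0, [100.0, 1.0]))
```

Because the model is written for Q rather than log Q, observed zero-consumption points enter the least-squares fit like any other observation, which is the practical advantage discussed above.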
Species area relationships in mediterranean-climate plant communities
Keeley, Jon E.; Fotheringham, C.J.
2003-01-01
Aim: To determine the best-fit model of species–area relationships for Mediterranean-type plant communities and evaluate how community structure affects these species–area models.
Location: Data were collected from California shrublands and woodlands and compared with literature reports for other Mediterranean-climate regions.
Methods: The number of species was recorded from 1, 100 and 1000 m² nested plots. Best fit to the power model or exponential model was determined by comparing adjusted r² values from the least squares regression, pattern of residuals, homoscedasticity across scales, and semi-log slopes at 1–100 m² and 100–1000 m². Dominance–diversity curves were tested for fit to the lognormal model, MacArthur's broken stick model, and the geometric and harmonic series.
Results: Early successional Western Australia and California shrublands represented the extremes and provide an interesting contrast, as the exponential model was the best fit for the former and the power model for the latter, despite similar total species richness. We hypothesize that structural differences in these communities account for the different species–area curves and are tied to patterns of dominance, equitability and life form distribution. Dominance–diversity relationships for Western Australian heathlands exhibited a close fit to MacArthur's broken stick model, indicating a more equitable distribution of species. In contrast, Californian shrublands, both postfire and mature stands, were best fit by the geometric model, indicating strong dominance and many minor subordinate species. These regions differ in life form distribution, with annuals being a major component of diversity in early successional Californian shrublands although they are largely lacking in mature stands. Both young and old Australian heathlands are dominated by perennials, and annuals are largely absent. Inherent in all of these ecosystems is cyclical disequilibrium caused by periodic fires. The potential for community reassembly is greater in Californian shrublands, where only a quarter of the flora resprout, whereas three quarters resprout in Australian heathlands. Other Californian vegetation types sampled include coniferous forests, oak savannas and desert scrub, and demonstrate that different community structures may lead to a similar species–area relationship. Dominance–diversity relationships for coniferous forests closely follow a geometric model whereas associated oak savannas show a close fit to the lognormal model. However, for both communities, species–area curves fit a power model. The primary driver appears to be the presence of annuals. Desert scrub communities illustrate dramatic changes in both species diversity and dominance–diversity relationships in high and low rainfall years, because of the disappearance of annuals in drought years.
Main conclusions: Species–area curves for immature shrublands in California and the majority of Mediterranean plant communities fit a power function model. Exceptions that fit the exponential model are not because of sampling error or scaling effects; rather, structural differences in these communities provide plausible explanations. The exponential species–area model may arise in more than one way. In the highly diverse Australian heathlands it results from a rapid increase in species richness at small scales. In mature California shrublands it results from very depauperate richness at the community scale. In both instances the exponential model is tied to a preponderance of perennials and a paucity of annuals. For communities fit by a power model, the coefficients z and log c exhibit a number of significant correlations with other diversity parameters, suggesting that they have some predictive value in ecological communities.
Rounded stretched exponential for time relaxation functions.
Powles, J G; Heyes, D M; Rickayzen, G; Evans, W A B
2009-12-07
A rounded stretched exponential function is introduced, $C(t)=\exp\{(\tau_0/\tau_E)^\beta[1-(1+(t/\tau_0)^2)^{\beta/2}]\}$, where $t$ is time and $\tau_0$ and $\tau_E$ are two relaxation times. This expression can be used to represent the relaxation function of many real dynamical processes: at long times, $t>\tau_0$, the function converges to a stretched exponential with normalizing relaxation time $\tau_E$, yet its expansion is even, or symmetric, in time, which is a statistical mechanical requirement. This expression fits well the shear stress relaxation function for model soft-sphere fluids near coexistence, with $\tau_E$
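The quoted formula is easy to check numerically. The sketch below verifies the three properties claimed (unit value at t = 0, evenness in t, and convergence to a stretched exponential at long times), using illustrative parameter values of our choosing.

```python
import numpy as np

def rounded_stretched_exp(t, tau0, tauE, beta):
    # C(t) = exp{ (tau0/tauE)^beta * [1 - (1 + (t/tau0)^2)^(beta/2)] }
    r = (tau0 / tauE) ** beta
    return np.exp(r * (1.0 - (1.0 + (t / tau0) ** 2) ** (beta / 2.0)))

tau0, tauE, beta = 0.1, 1.0, 0.6
t = np.linspace(0.0, 50.0, 5001)
C = rounded_stretched_exp(t, tau0, tauE, beta)

# Long-time limit: a stretched exponential exp(-(t/tauE)^beta), up to the
# constant factor exp((tau0/tauE)^beta); short-time behavior is even in t
# because t enters only through t**2.
ratio = C[1:] / np.exp(-(t[1:] / tauE) ** beta)
```

Since t appears only as t², the expansion about t = 0 contains no odd powers, which is the symmetry requirement mentioned above.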
Psychophysics of time perception and intertemporal choice models
NASA Astrophysics Data System (ADS)
Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.
2008-03-01
Intertemporal choice and psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting, general hyperbolic discounting (exponential discounting with logarithmic time perception of the Weber-Fechner law, a q-exponential discount model based on Tsallis's statistics), simple hyperbolic discounting, and Stevens' power law-exponential discounting (exponential discounting with Stevens' power time perception). In order to examine how well the models fit behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for biophysical processes underlying temporal discounting and time perception are discussed.
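A sketch of the model-comparison machinery described, using common parameterizations of the four discount functions (the paper's exact conventions may differ) and the standard AICc formula, applied to toy indifference points:

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate discount functions for the value of a delayed reward (A = 1).
def exponential(D, k):
    return np.exp(-k * D)

def hyperbolic(D, k):
    return 1.0 / (1.0 + k * D)

def gen_hyperbola(D, k, s):
    # Weber-Fechner / q-exponential family; s -> 0 recovers the exponential.
    return (1.0 + s * k * D) ** (-1.0 / s)

def power_exponential(D, k, s):
    # Exponential discounting with Stevens' power-law time perception.
    return np.exp(-k * D ** s)

def aicc(rss, n, K):
    # Akaike Information Criterion with small-sample correction.
    return n * np.log(rss / n) + 2 * K + 2 * K * (K + 1) / (n - K - 1)

delays = np.array([1.0, 7.0, 30.0, 90.0, 180.0, 365.0, 730.0])  # seven delays
rng = np.random.default_rng(4)
points = gen_hyperbola(delays, 0.01, 2.0) + rng.normal(0.0, 0.01, delays.size)

scores = {}
for name, f, p0 in [("exp", exponential, [0.01]), ("hyp", hyperbolic, [0.01]),
                    ("genhyp", gen_hyperbola, [0.01, 1.0]),
                    ("powexp", power_exponential, [0.01, 1.0])]:
    popt, _ = curve_fit(f, delays, points, p0=p0, bounds=(1e-6, 10.0))
    rss = float(np.sum((f(delays, *popt) - points) ** 2))
    scores[name] = aicc(rss, delays.size, len(popt))
```

With data generated from the general hyperbola, its AICc beats the one-parameter exponential and simple hyperbolic models despite the extra-parameter penalty, mirroring the ranking logic used in the study.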
Basis convergence of range-separated density-functional theory.
Franck, Odile; Mussard, Bastien; Luppi, Eleonora; Toulouse, Julien
2015-02-21
Range-separated density-functional theory (DFT) is an alternative approach to Kohn-Sham density-functional theory. The strategy of range-separated density-functional theory consists in separating the Coulomb electron-electron interaction into long-range and short-range components and treating the long-range part by an explicit many-body wave-function method and the short-range part by a density-functional approximation. Among the advantages of using many-body methods for the long-range part of the electron-electron interaction is that they are much less sensitive to the one-electron atomic basis compared to the case of the standard Coulomb interaction. Here, we provide a detailed study of the basis convergence of range-separated density-functional theory. We study the convergence of the partial-wave expansion of the long-range wave function near the electron-electron coalescence. We show that the rate of convergence is exponential with respect to the maximal angular momentum L for the long-range wave function, whereas it is polynomial for the case of the Coulomb interaction. We also study the convergence of the long-range second-order Møller-Plesset correlation energy of four systems (He, Ne, N2, and H2O) with cardinal number X of the Dunning basis sets cc-p(C)VXZ and find that the error in the correlation energy is best fitted by an exponential in X. This leads us to propose a three-point complete-basis-set extrapolation scheme for range-separated density-functional theory based on an exponential formula.
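A three-point exponential extrapolation of the kind proposed can be written in closed form. The sketch below assumes E_X = E_CBS + A·e^(−BX) at three consecutive cardinal numbers, an assumption consistent with the abstract, though the paper's scheme may differ in detail.

```python
import math

def cbs_extrapolate(E2, E3, E4):
    # Three-point extrapolation assuming E_X = E_cbs + A * exp(-B * X)
    # at consecutive cardinal numbers X = 2, 3, 4. Successive differences
    # form a geometric sequence with ratio exp(-B), which lets us solve
    # for the complete-basis-set limit directly.
    r = (E3 - E4) / (E2 - E3)          # equals exp(-B)
    return E4 - (E3 - E4) * r / (1.0 - r)

# Check on synthetic energies that decay exponentially to E_cbs = -1.0.
E = {X: -1.0 + 0.5 * math.exp(-1.2 * X) for X in (2, 3, 4)}
E_cbs = cbs_extrapolate(E[2], E[3], E[4])
```

For exactly exponential convergence the formula recovers the limit to machine precision; with real correlation energies it serves as the extrapolation estimate.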
Difference in Dwarf Galaxy Surface Brightness Profiles as a Function of Environment
NASA Astrophysics Data System (ADS)
Lee, Youngdae; Park, Hong Soo; Kim, Sang Chul; Moon, Dae-Sik; Lee, Jae-Joon; Kim, Dong-Jin; Cha, Sang-Mok
2018-05-01
We investigate surface brightness profiles (SBPs) of dwarf galaxies in field, group, and cluster environments. With deep BV I images from the Korea Microlensing Telescope Network Supernova Program, SBPs of 38 dwarfs in the NGC 2784 group are fitted by a single-exponential or double-exponential model. We find that 53% of the dwarfs are fitted with single-exponential profiles ("Type I"), while 47% of the dwarfs show double-exponential profiles; 37% of all dwarfs have smaller sizes for the outer part than the inner part ("Type II"), while 10% have a larger outer than inner part ("Type III"). We compare these results with those in the field and in the Virgo cluster, where the SBP types of 102 field dwarfs are compiled from a previous study and the SBP types of 375 cluster dwarfs are measured using SDSS r-band images. As a result, the distributions of SBP types are different in the three environments. Common SBP types for the field, the NGC 2784 group, and the Virgo cluster are Type II, Type I and II, and Type I and III profiles, respectively. After comparing the sizes of dwarfs in different environments, we suggest that, since the sizes of some dwarfs are changed by environmental effects, SBP types can be transformed, which explains the different distributions of SBP types in the three environments. We discuss possible environmental mechanisms for the transformation of SBP types. Based on data collected at KMTNet Telescopes and SDSS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, W. Payton; Hokr, Milan; Shao, Hua
We investigated the transit time distribution (TTD) of discharge collected from fractures in the Bedrichov Tunnel, Czech Republic, using lumped parameter models and multiple environmental tracers. We utilized time series of δ¹⁸O, δ²H and ³H, along with CFC measurements from individual fractures, to investigate the TTD and the uncertainty in the estimated mean travel time in several fracture networks of varying length and discharge. We also compared several TTDs, including the dispersion distribution, the exponential distribution, and a newly developed TTD which includes the effects of matrix diffusion. The effect of seasonal recharge was explored by comparing several seasonal weighting functions used to derive the historical recharge concentration. We identified best-fit mean ages for each TTD by minimizing the error-weighted, multi-tracer χ² residual for each seasonal weighting function. We used this methodology to test the ability of each TTD and seasonal input function to fit the observed tracer concentrations, and the effect of choosing different TTD and seasonal recharge functions on the mean age estimation. We find that the estimated mean transit time is a function of both the assumed TTD and the seasonal weighting function. Best fits as measured by the χ² value were achieved for the dispersion model using the seasonal input function developed here for two of the three modeled sites, while at the third site equally good fits were achieved with the exponential model and the dispersion model with our seasonal input function. The average mean transit time for all TTDs and seasonal input functions converged to similar values at each location. The sensitivity of the estimated mean transit time to the seasonal weighting function was equal to that of the TTD.
These results indicated that understanding seasonality of recharge is at least as important as the uncertainty in the flow path distribution in fracture networks and that unique identification of the TTD and mean transit time is difficult given the uncertainty in the recharge function. But, the mean transit time appears to be relatively robust to the structural model uncertainty. The results presented here should be applicable to other studies using environmental tracers to constrain flow and transport properties in fractured rock systems.« less
2016-10-19
Luxton, Gary; Keall, Paul J; King, Christopher R
2008-01-07
To facilitate the use of biological outcome modeling for treatment planning, an exponential function is introduced as a simpler equivalent to the Lyman formula for calculating normal tissue complication probability (NTCP). The single parameter of the exponential function is chosen to reproduce the Lyman calculation to within approximately 0.3%, and thus enable easy conversion of data contained in empirical fits of Lyman parameters for organs at risk (OARs). Organ parameters for the new formula are given in terms of Lyman model m and TD(50), and conversely m and TD(50) are expressed in terms of the parameters of the new equation. The role of the Lyman volume-effect parameter n is unchanged from its role in the Lyman model. For a non-homogeneously irradiated OAR, an equation relates d(ref), n, v(eff) and the Niemierko equivalent uniform dose (EUD), where d(ref) and v(eff) are the reference dose and effective fractional volume of the Kutcher-Burman reduction algorithm (i.e. the LKB model). It follows in the LKB model that uniform EUD irradiation of an OAR results in the same NTCP as the original non-homogeneous distribution. The NTCP equation is therefore represented as a function of EUD. The inverse equation expresses EUD as a function of NTCP and is used to generate a table of EUD versus normal tissue complication probability for the Emami-Burman parameter fits as well as for OAR parameter sets from more recent data.
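The Lyman/EUD machinery the abstract builds on can be illustrated with a short sketch. The Lyman NTCP is the standard normal CDF of t = (EUD − TD50)/(m·TD50), and the Niemierko generalized EUD reduces a non-uniform dose distribution to a single dose. The DVH and parameter values below are hypothetical, and the paper's single-parameter exponential substitute is not reproduced here:

```python
import numpy as np
from math import erf, sqrt

def gEUD(doses, volumes, n):
    """Niemierko generalized equivalent uniform dose; a = 1/n is the exponent."""
    v = np.asarray(volumes, float) / np.sum(volumes)   # fractional volumes
    return np.sum(v * np.asarray(doses, float) ** (1.0 / n)) ** n

def lyman_ntcp(eud, td50, m):
    """Lyman probit NTCP: Phi((EUD - TD50) / (m * TD50))."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Hypothetical DVH: three dose bins (Gy) and their fractional volumes.
doses, volumes = [60.0, 40.0, 20.0], [0.2, 0.3, 0.5]
eud = gEUD(doses, volumes, n=0.5)
ntcp = lyman_ntcp(eud, td50=55.0, m=0.15)
print(f"EUD = {eud:.1f} Gy, NTCP = {ntcp:.3f}")
```

By construction NTCP = 0.5 exactly when EUD equals TD50, which is a convenient sanity check for any reimplementation.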
Development of a winter wheat adjustable crop calendar model
NASA Technical Reports Server (NTRS)
Baker, J. R. (Principal Investigator)
1978-01-01
The author has identified the following significant results. After parameter estimation, tests were conducted with variances from the fits and on independent data. From these tests, it was generally concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. Robertson's triquadratic form, in general use for spring wheat, was found to show promise for winter wheat, but special techniques and care were required for its use. In most instances, equations with nonlinear effects were found to yield erratic results when utilized with daily environmental values as independent variables.
NASA Astrophysics Data System (ADS)
Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun
2016-05-01
The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution, and detection efficiency, and in recent years it has become an important technical means of rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics, aircraft engine noise, and other human-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of the GREATEM data and the major noises, we propose a de-noising algorithm that combines the wavelet threshold method with exponential adaptive window-width fitting. First, white noise is filtered from the measured data using the wavelet threshold method. The data are then segmented into windows whose lengths follow even logarithmic intervals. Within each window, data polluted by electromagnetic noise are identified by an energy-detection criterion, and the attenuation characteristics of the data slope are extracted. Finally, an exponential fitting algorithm is used to fit the attenuation curve of each window, and data polluted by non-stationary electromagnetic noise are replaced with their fitting results, effectively removing the non-stationary electromagnetic noise. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both stationary white noise and non-stationary electromagnetic noise in the GREATEM signal can be effectively filtered using the wavelet threshold-exponential adaptive window-width-fitting algorithm, which enhances the imaging quality.
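A minimal numpy sketch of the window-fitting stage described above (the wavelet-threshold stage is omitted, and the window length, energy threshold, and synthetic noise burst are illustrative assumptions, not the authors' settings): each window is fitted by a log-linear exponential, and any window failing an energy test is replaced by its fit:

```python
import numpy as np

t = np.linspace(0.01, 2.0, 400)
clean = np.exp(-3.0 * t)                      # idealized transient decay
noisy = clean.copy()
noisy[100:150] += 0.02 * np.sin(2 * np.pi * 50 * t[100:150])  # noise burst

win = 50                                      # fixed window for simplicity
denoised = noisy.copy()
for s in range(0, len(t), win):
    ts, ys = t[s:s + win], noisy[s:s + win]
    A = np.vstack([np.ones_like(ts), ts]).T
    coef = np.linalg.lstsq(A, np.log(ys), rcond=None)[0]   # log-linear exp fit
    fit = np.exp(A @ coef)
    if np.mean((ys - fit) ** 2) > 1e-6:       # energy test: window polluted?
        denoised[s:s + win] = fit             # replace with exponential fit

err_before = np.max(np.abs(noisy - clean))
err_after = np.max(np.abs(denoised - clean))
print(f"max error before = {err_before:.4f}, after = {err_after:.4f}")
```

Clean windows pass the energy test untouched, while the polluted window is swapped for its fitted decay, shrinking the maximum error against the true transient.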
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werhahn, Jasper C.; Miliordos, Evangelos; Xantheas, Sotiris S.
2015-01-05
We introduce new generalized (reverting to the original) and extended (not reverting to the original) 4-parameter forms of the (B-2) Potential Energy Function (PEF) of Wang et al. (L.-P. Wang, J. Chen and T. van Voorhis, J. Chem. Theor. Comp. 9, 452 (2013)), which is itself a modification of the Buckingham exponential-6 PEF. The new forms have a tunable, singularity-free short-range repulsion and an adjustable long-range attraction. They produce fits to high-quality ab initio data for the X−(H2O), X = F, Cl, Br, I, and M+(H2O), M = Li, Na, K, Rb, Cs dimers that are between 1 and 2 orders of magnitude better than the original 3-parameter (B-2) and modified Buckingham exponential-6 PEFs. They are also slightly better than the 4-parameter generalized Buckingham exponential-6 (gBe-6) PEFs and of comparable quality to the 4-parameter extended Morse (eM) PEFs introduced recently by us.
Extended q-Gaussian and q-exponential distributions from gamma random variables
NASA Astrophysics Data System (ADS)
Budini, Adrián A.
2015-05-01
The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
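The superstatistical route mentioned above can be checked numerically: an exponential random variable whose rate is itself gamma distributed is marginally a Lomax (Pareto II) law, i.e. a q-exponential with q = 1 + 1/(α + 1) for gamma shape α. A seeded numpy sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 4.0, 1.0            # gamma shape and rate of the mixing law
n = 200_000

lam = rng.gamma(shape=alpha, scale=1.0 / beta, size=n)  # fluctuating rates
x = rng.exponential(scale=1.0 / lam)                    # conditional exponentials

# The marginal of x is Lomax: p(x) ∝ (1 + x/beta)**-(alpha + 1), i.e. a
# q-exponential with q = 1 + 1/(alpha + 1) and mean beta/(alpha - 1).
q = 1.0 + 1.0 / (alpha + 1.0)
emp_mean, theory_mean = x.mean(), beta / (alpha - 1.0)
print(f"q = {q:.2f}, empirical mean = {emp_mean:.4f}, theory = {theory_mean:.4f}")
```

Integrating the exponential density against the gamma mixing law gives the Lomax form in closed form, so the empirical mean should track β/(α − 1) for α > 1.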
On the origin of non-exponential fluorescence decays in enzyme-ligand complex
NASA Astrophysics Data System (ADS)
Wlodarczyk, Jakub; Kierdaszuk, Borys
2004-05-01
Complex fluorescence decays have usually been analyzed with the aid of a multi-exponential model, but the interpretation of the individual exponential terms has not been adequately characterized. In such cases the intensity decays were also analyzed in terms of a continuous lifetime distribution, a consequence of the interaction of the fluorophore with its environment, conformational heterogeneity, or their dynamical nature. We show that the non-exponential fluorescence decay of enzyme-ligand complexes may result from time-dependent energy transport. The latter, in our opinion, may be accounted for by electron transport from the protein tyrosines to their neighboring residues. We introduce a time-dependent hopping rate of the form v(t) ∝ (a + bt)^(-1). This in turn leads to a luminescence decay function of the form I(t) = I0 exp(-t/τ1)(1 + t/(γτ2))^(-γ). Such a decay function provides good fits to highly complex fluorescence decays. The power-like tail implies a time hierarchy in the energy migration process due to the hierarchical energy-level structure. Moreover, such a power-like term is a manifestation of the so-called Tsallis nonextensive statistics and is suitable for describing systems with long-range interactions and memory effects, as well as fluctuations of the characteristic fluorescence lifetime. The proposed decay function was applied in the analysis of fluorescence decays of a tyrosine protein, the enzyme purine nucleoside phosphorylase from E. coli, in complex with formycin A (an inhibitor) and orthophosphate (a co-substrate).
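A quick numerical check of the quoted decay law I(t) = I0 exp(-t/τ1)(1 + t/(γτ2))^(-γ): as γ grows, the power-like factor tends to exp(-t/τ2), so the law approaches an ordinary exponential with combined rate 1/τ1 + 1/τ2. A sketch with illustrative parameters:

```python
import numpy as np

def decay(t, I0, tau1, tau2, gamma):
    """Power-tailed decay law: exponential times a (1 + t/(gamma*tau2))^-gamma factor."""
    return I0 * np.exp(-t / tau1) * (1.0 + t / (gamma * tau2)) ** (-gamma)

t = np.linspace(0.0, 10.0, 200)
I0, tau1, tau2 = 1.0, 2.0, 3.0

# Limiting mono-exponential with combined rate 1/tau1 + 1/tau2.
pure_exp = I0 * np.exp(-t / tau1 - t / tau2)
for gamma in (1.0, 10.0, 1000.0):
    dev = np.max(np.abs(decay(t, I0, tau1, tau2, gamma) - pure_exp))
    print(f"gamma = {gamma:7.1f}  max deviation from exponential = {dev:.2e}")
```

Small γ leaves a visible power-like tail; large γ collapses the law onto the exponential, which matches the role of γ as the hierarchy parameter.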
Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto
2018-03-01
High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 ℃. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (R²adj > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and a Bigelow-type and an empirical model for the b'(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered two- and one-step nonlinear regressions for predicting the survival fraction; both regression types provided adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential decay secondary models as functions of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the td parameter; the desired 5-log10 reduction (5D, taking d = 5, i.e., t5, as the criterion) in both microorganisms is attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min for the one- or two-step nonlinear procedure, respectively.
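The Weibull primary model above, log10(N/N0) = -b·t^n, can be fitted by a double-log linearization: log10(-log10 S) = log10(b) + n·log10(t). The sketch below uses synthetic survival data with a tail (n < 1, upward concavity) rather than the paper's measurements, and also computes the t5 (5D) time:

```python
import numpy as np

t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0])   # minutes
b_true, n_true = 1.8, 0.6
logS = -b_true * t ** n_true                               # log10 survival ratio
logS += np.array([0.05, -0.03, 0.02, -0.04, 0.03, 0.0, -0.02, 0.01])  # noise

# Double-log linearization: log10(-log10 S) = log10(b) + n * log10(t)
X = np.vstack([np.ones_like(t), np.log10(t)]).T
y = np.log10(-logS)
log_b, n_fit = np.linalg.lstsq(X, y, rcond=None)[0]
b_fit = 10.0 ** log_b

t5 = (5.0 / b_fit) ** (1.0 / n_fit)    # time to a 5-log10 (5D) reduction
print(f"b = {b_fit:.2f} (true 1.8), n = {n_fit:.2f} (true 0.6), t5 ≈ {t5:.2f} min")
```

Inverting the fitted model for a 5-log10 reduction is exactly the td calculation the abstract describes, here for d = 5.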
Mizuno, Ju; Mohri, Satoshi; Yokoyama, Takeshi; Otsuji, Mikiya; Arita, Hideko; Hanaoka, Kazuo
2017-02-01
Varying temperature affects cardiac systolic and diastolic function and the left ventricular (LV) pressure-time curve (PTC) waveform that includes information about LV inotropism and lusitropism. Our proposed half-logistic (h-L) time constants obtained by fitting using h-L functions for four segmental phases (Phases I-IV) in the isovolumic LV PTC are more useful indices for estimating LV inotropism and lusitropism during contraction and relaxation periods than the mono-exponential (m-E) time constants at normal temperature. In this study, we investigated whether the superiority of the goodness of h-L fits remained even at hypothermia and hyperthermia. Phases I-IV in the isovolumic LV PTCs in eight excised, cross-circulated canine hearts at 33, 36, and 38 °C were analyzed using h-L and m-E functions and the least-squares method. The h-L and m-E time constants for Phases I-IV significantly shortened with increasing temperature. Curve fitting using h-L functions was significantly better than that using m-E functions for Phases I-IV at all temperatures. Therefore, the superiority of the goodness of h-L fit vs. m-E fit remained at all temperatures. As LV inotropic and lusitropic indices, temperature-dependent h-L time constants could be more useful than m-E time constants for Phases I-IV.
Vibrational energies for HFCO using a neural network sum of exponentials potential energy surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pradhan, Ekadashi; Brown, Alex, E-mail: alex.brown@ualberta.ca
2016-05-07
A six-dimensional potential energy surface (PES) for formyl fluoride (HFCO) is fit in a sum-of-products form using neural network exponential fitting functions. The ab initio data upon which the fit is based were computed at the explicitly correlated coupled cluster with single, double, and perturbative triple excitations [CCSD(T)-F12]/cc-pVTZ-F12 level of theory. The PES fit is accurate (RMSE = 10 cm⁻¹) up to 10 000 cm⁻¹ above the zero-point energy and covers most of the experimentally measured IR data. The PES is validated by computing vibrational energies for both HFCO and deuterated formyl fluoride (DFCO) using block improved relaxation with the multi-configuration time-dependent Hartree approach. The frequencies of the fundamental modes, and all other vibrational states up to 5000 cm⁻¹ above the zero-point energy, are more accurate than those obtained from the previous MP2-based PES. The vibrational frequencies obtained on the PES are compared to anharmonic frequencies at the MP2/aug-cc-pVTZ and CCSD(T)/aug-cc-pVTZ levels of theory obtained using second-order vibrational perturbation theory. The new PES will be useful for quantum dynamics simulations for both HFCO and DFCO, e.g., studies of intramolecular vibrational redistribution leading to unimolecular dissociation and its laser control.
Transient photoresponse in amorphous In-Ga-Zn-O thin films under stretched exponential analysis
NASA Astrophysics Data System (ADS)
Luo, Jiajun; Adler, Alexander U.; Mason, Thomas O.; Bruce Buchholz, D.; Chang, R. P. H.; Grayson, M.
2013-04-01
We investigated transient photoresponse and Hall effect in amorphous In-Ga-Zn-O thin films and observed a stretched exponential response which allows characterization of the activation energy spectrum with only three fit parameters. Measurements of as-grown films and 350 K annealed films were conducted at room temperature by recording conductivity, carrier density, and mobility over day-long time scales, both under illumination and in the dark. Hall measurements verify approximately constant mobility, even as the photoinduced carrier density changes by orders of magnitude. The transient photoconductivity data fit well to a stretched exponential during both illumination and dark relaxation, but with slower response in the dark. The inverse Laplace transforms of these stretched exponentials yield the density of activation energies responsible for transient photoconductivity. An empirical equation is introduced, which determines the linewidth of the activation energy band from the stretched exponential parameter β. Dry annealing at 350 K is observed to slow the transient photoresponse.
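The three-parameter stretched-exponential form used above, σ(t) = σ0 exp(-(t/τ)^β), can be recovered from noisy data with a coarse grid search. This numpy sketch uses synthetic data and an illustrative grid, with σ0 fixed at 1 for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.1, 100.0, 400)                 # time, arbitrary units
tau_true, beta_true = 20.0, 0.5
sigma = np.exp(-(t / tau_true) ** beta_true)     # normalized photoconductivity
sigma_noisy = sigma + rng.normal(0.0, 0.005, t.size)

# Coarse grid search over (tau, beta), minimizing the sum of squared errors.
taus = np.linspace(5.0, 50.0, 91)
betas = np.linspace(0.2, 1.0, 81)
sse, tau_fit, beta_fit = min(
    (np.sum((np.exp(-(t / tau) ** beta) - sigma_noisy) ** 2), tau, beta)
    for tau in taus for beta in betas
)
print(f"tau ≈ {tau_fit:.1f} (true 20.0), beta ≈ {beta_fit:.2f} (true 0.5)")
```

A gradient-based optimizer would be faster, but the grid makes the three-parameter nature of the fit explicit and avoids local-minimum surprises at small β.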
Basis convergence of range-separated density-functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franck, Odile, E-mail: odile.franck@etu.upmc.fr; Mussard, Bastien, E-mail: bastien.mussard@upmc.fr; CNRS, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris
2015-02-21
Range-separated density-functional theory (DFT) is an alternative approach to Kohn-Sham density-functional theory. The strategy of range-separated DFT consists of separating the Coulomb electron-electron interaction into long-range and short-range components and treating the long-range part by an explicit many-body wave-function method and the short-range part by a density-functional approximation. Among the advantages of using many-body methods for the long-range part of the electron-electron interaction is that they are much less sensitive to the one-electron atomic basis compared to the case of the standard Coulomb interaction. Here, we provide a detailed study of the basis convergence of range-separated DFT. We study the convergence of the partial-wave expansion of the long-range wave function near the electron-electron coalescence. We show that the rate of convergence is exponential with respect to the maximal angular momentum L for the long-range wave function, whereas it is polynomial for the case of the Coulomb interaction. We also study the convergence of the long-range second-order Møller-Plesset correlation energy of four systems (He, Ne, N2, and H2O) with the cardinal number X of the Dunning basis sets cc-p(C)VXZ and find that the error in the correlation energy is best fitted by an exponential in X. This leads us to propose a three-point complete-basis-set extrapolation scheme for range-separated DFT based on an exponential formula.
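A three-point exponential extrapolation of the kind proposed above has a closed form: if E(X) = E_CBS + A·exp(-B·X) at three consecutive cardinal numbers X, X+1, X+2, then E_CBS = (E1·E3 − E2²)/(E1 + E3 − 2·E2). A sketch with synthetic energies (not the paper's data):

```python
import numpy as np

def cbs_exponential(e1, e2, e3):
    """E_CBS for E(X) = E_CBS + A*exp(-B*X) through three consecutive X."""
    return (e1 * e3 - e2 ** 2) / (e1 + e3 - 2.0 * e2)

# Synthetic convergence series with E_CBS = -0.3200 hartree.
E_cbs, A, B = -0.3200, 0.05, 0.9
E = [E_cbs + A * np.exp(-B * X) for X in (2, 3, 4)]   # "DZ, TZ, QZ"

print([f"{e:.6f}" for e in E])
print(f"extrapolated E_CBS = {cbs_exponential(*E):.6f} (exact -0.320000)")
```

For exactly exponential data the formula recovers the limit to machine precision; with real correlation energies the residual deviation from exponential behavior sets the extrapolation error.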
Analytical optimization of demand management strategies across all urban water use sectors
NASA Astrophysics Data System (ADS)
Friedman, Kenneth; Heaney, James P.; Morales, Miguel; Palenchar, John
2014-07-01
An effective urban water demand management program can greatly influence both peak and average demand and therefore long-term water supply and infrastructure planning. Although a theoretical framework for evaluating residential indoor demand management has been well established, little has been done to evaluate other water use sectors such as residential irrigation in a compatible manner for integrating these results into an overall solution. This paper presents a systematic procedure to evaluate the optimal blend of single family residential irrigation demand management strategies to achieve a specified goal based on performance functions derived from parcel level tax assessor's data linked to customer level monthly water billing data. This framework is then generalized to apply to any urban water sector, as exponential functions can be fit to all resulting cumulative water savings functions. Two alternative formulations are presented: maximize net benefits, or minimize total costs subject to satisfying a target water savings. Explicit analytical solutions are presented for both formulations based on appropriate exponential best fits of performance functions. A direct result of this solution is the dual variable which represents the marginal cost of water saved at a specified target water savings goal. A case study of 16,303 single family irrigators in Gainesville Regional Utilities utilizing high quality tax assessor and monthly billing data along with parcel level GIS data provide an illustrative example of these techniques. Spatial clustering of targeted homes can be easily performed in GIS to identify priority demand management areas.
NASA Astrophysics Data System (ADS)
Mainhagu, J.; Brusseau, M. L.
2016-09-01
The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
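The exponential mass-depletion idea can be sketched directly: if CMD(t) = CMD0·exp(-k·t), the initial mass is the full time integral M0 = CMD0/k, so a log-linear fit to early-time CMD data yields an initial-mass estimate. Synthetic, noise-free data for illustration:

```python
import numpy as np

t = np.arange(0.0, 36.0, 3.0)            # months of SVE operation
CMD0, k = 120.0, 0.15                    # kg/month and 1/month (illustrative)
cmd = CMD0 * np.exp(-k * t)              # contaminant mass discharge record

early = t <= 12.0                        # use only the early-time data
A = np.vstack([np.ones(early.sum()), t[early]]).T
(log_c0, slope), *_ = np.linalg.lstsq(A, np.log(cmd[early]), rcond=None)

k_fit = -slope
M0_est = np.exp(log_c0) / k_fit          # M0 = integral of CMD0*exp(-k t)
print(f"estimated initial mass = {M0_est:.0f} kg (true {CMD0 / k:.0f} kg)")
```

Restricting the fit to the early record mimics the abstract's use case of estimating initial mass during the first stage of a remediation effort.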
Shim, Woo Hyun; Kim, Ho Sung; Choi, Choong-Gon; Kim, Sang Joon
2015-01-01
Brain tumor cellularity has been assessed by using the apparent diffusion coefficient (ADC). However, the ADC value can be influenced by both perfusion and true molecular diffusion, and the perfusion effect can limit the reliability of ADC in the characterization of tumor cellularity, especially in hypervascular brain tumors. In contrast, the IVIM technique estimates parameter values for diffusion and perfusion effects separately. The purpose of our study was to compare ADC and IVIM for differentiating among glioblastoma, metastatic tumor, and primary CNS lymphoma (PCNSL), focusing on the diffusion-related parameter. We retrospectively reviewed the data of 128 patients with pathologically confirmed glioblastoma (n = 55), metastasis (n = 31), and PCNSL (n = 42) prior to any treatment. Two neuroradiologists independently calculated the maximum IVIM-f (fmax) and minimum IVIM-D (Dmin) by using 16 different b-values with a bi-exponential fitting of the diffusion signal decay, the minimum ADC (ADCmin) by using b-values of 0 and 1000 with a mono-exponential fitting, and the maximum normalized cerebral blood volume (nCBVmax). The differences in fmax, Dmin, nCBVmax, and ADCmin among the three tumor pathologies were determined by one-way ANOVA with multiple comparisons. The fmax and Dmin were correlated to the corresponding nCBV and ADC using partial correlation analysis. Using a mono-exponential fitting of the diffusion signal decay, the mean ADCmin was significantly lower in PCNSL than in glioblastoma and metastasis. However, using a bi-exponential fitting, the mean Dmin did not significantly differ among the three groups. The mean fmax was significantly higher in the glioblastomas (reader 1, 0.103; reader 2, 0.109) and the metastases (reader 1, 0.105; reader 2, 0.107) than in the primary CNS lymphomas (reader 1, 0.025; reader 2, 0.023) (P < .001 for each).
The correlation between fmax and the corresponding nCBV was highest in the glioblastoma group, and the correlation between Dmin and the corresponding ADC was highest in the primary CNS lymphoma group. Unlike the ADC value derived from a mono-exponential fitting of the diffusion signal, the diffusion-related parameter derived from a bi-exponential fitting, which separates out the perfusion effect, does not differ among glioblastoma, metastasis, and PCNSL.
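The bi-exponential IVIM signal model underlying the study, S(b)/S0 = f·exp(-b·D*) + (1 − f)·exp(-b·D), also shows why a two-point mono-exponential ADC (b = 0 and 1000 s/mm²) absorbs part of the perfusion effect. The parameter values below are typical textbook magnitudes, not the study's results:

```python
import numpy as np

def ivim_signal(b, f, D_star, D):
    """Bi-exponential IVIM signal: perfusion fraction f with pseudo-diffusion D*."""
    return f * np.exp(-b * D_star) + (1.0 - f) * np.exp(-b * D)

f, D_star, D = 0.10, 10e-3, 0.8e-3       # mm^2/s units (illustrative values)
b = np.array([0.0, 1000.0])              # s/mm^2, the two-point ADC protocol
S = ivim_signal(b, f, D_star, D)

adc = np.log(S[0] / S[1]) / (b[1] - b[0])   # mono-exponential two-point ADC
print(f"true D = {D:.2e}, mono-exponential ADC = {adc:.2e}")
print(f"perfusion inflates ADC by {100 * (adc / D - 1):.0f}%")
```

Because the fast D* component has essentially vanished by b = 1000, the two-point ADC overshoots the true D by roughly −ln(1 − f)/1000, i.e. the perfusion fraction leaks into the "diffusion" estimate.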
In vivo chlorine and sodium MRI of rat brain at 21.1 T.
Schepkin, Victor D; Elumalai, Malathy; Kitchen, Jason A; Qian, Chunqi; Gor'kov, Peter L; Brey, William W
2014-02-01
MR imaging of low-gamma nuclei at the ultrahigh magnetic field of 21.1 T provides a new opportunity for understanding a variety of biological processes. Among these nuclei, chlorine and sodium are attracting attention for their involvement in brain function and cancer development. MRI of (35)Cl and (23)Na was performed and relaxation times were measured in vivo in normal rats (n = 3) and in rats with glioma (n = 3) at 21.1 T. The concentrations of both nuclei were evaluated using the center-out back-projection method. The T1 relaxation curve of chlorine in the normal rat head was fitted by a bi-exponential function (T1a = 4.8 ms (0.7), T1b = 24.4 ± 7 ms (0.3)) and compared with sodium (T1 = 41.4 ms). Free induction decays (FID) of chlorine and sodium in vivo were bi-exponential with similar rapidly decaying components of [Formula: see text] ms and [Formula: see text] ms, respectively. The effects of a small acquisition matrix and bi-exponential FIDs were assessed for quantification of chlorine (33.2 mM) and sodium (44.4 mM) in rat brain. The study modeled a dramatic effect of the bi-exponential decay on MRI results. The revealed increase in chlorine concentration in glioma (~1.5 times) relative to normal brain correlates with the hypothesis asserting the importance of chlorine for tumor progression.
NASA Technical Reports Server (NTRS)
Giver, Lawrence P.; Benner, D. C.; Tomasko, M. G.; Fink, U.; Kerola, D.
1990-01-01
Transmission measurements made on near-infrared laboratory methane spectra have previously been fit using a Malkmus band model. The laboratory spectra were obtained in three groups at temperatures averaging 112, 188, and 295 K; band model fitting was done separately for each temperature group. These band model parameters cannot be used directly in scattering atmosphere model computations, so an exponential sum model is being developed which includes pressure and temperature fitting parameters. The goal is to obtain model parameters by least square fits at 10/cm intervals from 3800 to 9100/cm. These results will be useful in the interpretation of current planetary spectra and also NIMS spectra of Jupiter anticipated from the Galileo mission.
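An exponential-sum fit of band transmission, T(u) ≈ Σ w_i·exp(-k_i·u) on a fixed grid of absorption coefficients k_i, can be sketched with plain least squares (a production fit would constrain w_i ≥ 0 and Σ w_i = 1, and add pressure/temperature dependence as the abstract describes); the transmissions below are synthetic:

```python
import numpy as np

u = np.logspace(-3, 2, 60)                        # absorber amount
T_true = 0.6 * np.exp(-0.1 * u) + 0.4 * np.exp(-10.0 * u)

k_grid = np.array([0.01, 0.1, 1.0, 10.0])         # fixed absorption coefficients
A = np.exp(-np.outer(u, k_grid))                  # design matrix of k-terms
w = np.linalg.lstsq(A, T_true, rcond=None)[0]     # exponential-sum weights

resid = np.max(np.abs(A @ w - T_true))
print(f"weights = {np.round(w, 6)}, max residual = {resid:.2e}, sum = {w.sum():.6f}")
```

Since the true transmission here is exactly representable on the k-grid, the fit recovers the weights (0, 0.6, 0, 0.4) and a near-zero residual; band-model data would instead leave a small, physically meaningful residual.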
Rate laws of the self-induced aggregation kinetics of Brownian particles
NASA Astrophysics Data System (ADS)
Mondal, Shrabani; Sen, Monoj Kumar; Baura, Alendu; Bag, Bidhan Chandra
2016-03-01
In this paper we have studied the self-induced aggregation kinetics of Brownian particles in the presence of both multiplicative and additive noises. In addition to the drift due to the self-aggregation process, the environment may induce a drift term in the presence of a multiplicative noise, and there is then an interplay between the two drift terms. This interplay may qualitatively account for the appearance of the different laws of the aggregation process. At low strength of white multiplicative noise, the cluster number decreases as a Gaussian function of time. If the noise strength becomes appreciably large, the variation of cluster number with time is fitted well by a mono-exponentially decaying function of time. For the additive-noise-driven case, the decrease of cluster number can be described by a power law, whereas for the colored-multiplicative-noise-driven process the cluster number decays multi-exponentially. We have also explored how the rate constant (in the case of mono-exponential cluster-number decay) depends on the strength of interference of the noises and their intensity, and how the structure factor at long times depends on the strength of the cross correlation (CC) between the additive and the multiplicative noises.
Pendulum Mass Affects the Measurement of Articular Friction Coefficient
Akelman, Matthew R.; Teeple, Erin; Machan, Jason T.; Crisco, Joseph J.; Jay, Gregory D.; Fleming, Braden C.
2012-01-01
Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton’s equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines if articular pendulum energy loss is indeed mass independent, and compares Stanton’s model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n = 4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton’s equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent. PMID:23122223
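The two decay models contrasted above can be compared on synthetic peak-amplitude data: Coulomb-type friction (Stanton) predicts a linearly decaying swing amplitude, while viscous damping predicts an exponential one. A numpy sketch with illustrative viscous-damped amplitudes:

```python
import numpy as np

n = np.arange(20, dtype=float)            # successive swing number
theta0, c = 10.0, 0.12                    # degrees, damping per swing
peaks = theta0 * np.exp(-c * n)           # synthetic viscous-damped peaks

# Linear (Stanton-style) fit: theta_n = a - b*n
A = np.vstack([np.ones_like(n), n]).T
lin = A @ np.linalg.lstsq(A, peaks, rcond=None)[0]

# Exponential (viscous-damping) fit: log(theta_n) = log(theta0) - c*n
coef = np.linalg.lstsq(A, np.log(peaks), rcond=None)[0]
expo = np.exp(A @ coef)

rms_lin = np.sqrt(np.mean((lin - peaks) ** 2))
rms_exp = np.sqrt(np.mean((expo - peaks) ** 2))
print(f"linear-fit RMS = {rms_lin:.3f} deg, exponential-fit RMS = {rms_exp:.2e} deg")
```

On viscous-damped data the exponential model fits essentially exactly while the straight line leaves systematic residuals, mirroring the 82% fit-error reduction the study reports.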
Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka
2016-01-01
Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution for each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
Wu, Yao; Dai, Xiaodong; Huang, Niu; Zhao, Lifeng
2013-06-05
In force field parameter development using ab initio potential energy surfaces (PES) as target data, an important but often neglected matter is the lack of a weighting scheme with optimal discrimination power to fit the target data. Here, we developed a novel partition function-based weighting scheme, which not only fits the target potential energies exponentially like the general Boltzmann weighting method, but also reduces the effect of fitting errors leading to overfitting. The van der Waals (vdW) parameters of benzene and propane were reparameterized by using the new weighting scheme to fit the high-level ab initio PESs probed by a water molecule in global configurational space. The molecular simulation results indicate that the newly derived parameters are capable of reproducing experimental properties in a broader range of temperatures, which supports the partition function-based weighting scheme. Our simulation results also suggest that structural properties are more sensitive to vdW parameters than partial atomic charge parameters in these systems although the electrostatic interactions are still important in energetic properties. As no prerequisite conditions are required, the partition function-based weighting method may be applied in developing any types of force field parameters. Copyright © 2013 Wiley Periodicals, Inc.
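The weighting idea can be illustrated with a toy Boltzmann-type scheme (all energies below are hypothetical, and this is a generic sketch, not the authors' exact partition-function form):

```python
import numpy as np

# Hypothetical ab initio target energies (kcal/mol) along a PES scan,
# and trial force-field energies with made-up fitting errors.
E_target = np.array([0.0, 0.4, 1.1, 2.5, 4.8, 8.0])
E_model = E_target + np.array([0.05, -0.1, 0.2, 0.3, -0.5, 1.0])

kT = 0.596  # kcal/mol at roughly 300 K

# Boltzmann-type weights, normalized by a partition-function-like sum Z:
# low-energy (physically populated) points dominate the objective.
w = np.exp(-E_target / kT)
w /= w.sum()   # divide by Z so the weights sum to 1

chi2 = np.sum(w * (E_model - E_target) ** 2)
print(chi2 < np.mean((E_model - E_target) ** 2))
```

The weighted objective discounts the large error at the repulsive high-energy point, which is the discrimination-power property the abstract emphasizes.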
The stationary non-equilibrium plasma of cosmic-ray electrons and positrons
NASA Astrophysics Data System (ADS)
Tomaschitz, Roman
2016-06-01
The statistical properties of the two-component plasma of cosmic-ray electrons and positrons measured by the AMS-02 experiment on the International Space Station and the HESS array of imaging atmospheric Cherenkov telescopes are analyzed. Stationary non-equilibrium distributions defining the relativistic electron-positron plasma are derived semi-empirically by performing spectral fits to the flux data and reconstructing the spectral number densities of the electronic and positronic components in phase space. These distributions are relativistic power-law densities with exponential cutoff, admitting an extensive entropy variable and converging to the Maxwell-Boltzmann or Fermi-Dirac distributions in the non-relativistic limit. Cosmic-ray electrons and positrons constitute a classical (low-density high-temperature) plasma due to the low fugacity in the quantized partition function. The positron fraction is assembled from the flux densities inferred from least-squares fits to the electron and positron spectra and is subjected to test by comparing with the AMS-02 flux ratio measured in the GeV interval. The calculated positron fraction extends to TeV energies, predicting a broad spectral peak at about 1 TeV followed by exponential decay.
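A hedged sketch of fitting the spectral form described above, a power law with exponential cutoff (synthetic flux values; fitting the logarithm keeps the problem well conditioned over three decades in energy):

```python
import numpy as np
from scipy.optimize import curve_fit

# Power law with exponential cutoff in log form:
# ln F(E) = ln A - alpha*ln E - E/Ec   (all parameter values synthetic).
def log_flux(E, lnA, alpha, Ec):
    return lnA - alpha * np.log(E) - E / Ec

E = np.logspace(0, 3, 40)                        # energy grid, e.g. GeV
rng = np.random.default_rng(1)
lnF = log_flux(E, np.log(50.0), 3.0, 900.0) + rng.normal(0, 0.03, E.size)

popt, _ = curve_fit(log_flux, E, lnF, p0=(1.0, 2.5, 500.0))
lnA, alpha, Ec = popt
```

The cutoff energy Ec is constrained mainly by the highest-energy points, where the exponential term departs visibly from the pure power law.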
Estimation of renal allograft half-life: fact or fiction?
Azancot, M Antonieta; Cantarell, Carme; Perelló, Manel; Torres, Irina B; Serón, Daniel; Moreso, Francesc; Arias, Manuel; Campistol, Josep M; Curto, Jordi; Hernandez, Domingo; Morales, José M; Sanchez-Fructuoso, Ana; Abraira, Victor
2011-09-01
Renal allograft half-life time (t½) is the most straightforward representation of long-term graft survival. Since some statistical models overestimate this parameter, we compare different approaches to evaluate t½. Patients with a 1-year functioning graft transplanted in Spain during 1990, 1994, 1998 and 2002 were included. Exponential, Weibull, gamma, lognormal and log-logistic models censoring the last year of follow-up were evaluated. The goodness of fit of these models was evaluated according to the Cox-Snell residuals and the Akaike's information criterion (AIC) was employed to compare these models. We included 4842 patients. Real t½ in 1990 was 14.2 years. Median t½ (95% confidence interval) in 1990 and 2002 was 15.8 (14.2-17.5) versus 52.6 (35.6-69.5) according to the exponential model (P < 0.001). No differences between 1990 and 2002 were observed when t½ was estimated with the other models. In 1990 and 2002, t½ was 14.0 (13.1-15.0) versus 18.0 (13.7-22.4) according to Weibull, 15.5 (13.9-17.1) versus 19.1 (15.6-22.6) according to gamma, 14.4 (13.3-15.6) versus 18.3 (14.2-22.3) according to the log-logistic and 15.2 (13.8-16.6) versus 18.8 (15.3-22.3) according to the lognormal models. The AIC confirmed that the exponential model had the lowest goodness of fit, while the other models yielded a similar result. The exponential model overestimates t½, especially in cohorts of patients with a short follow-up, while any of the other studied models allow a better estimation even in cohorts with short follow-up.
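Why the exponential model overestimates t½ can be sketched with synthetic censored data (Weibull event times with an increasing hazard; the shape, scale, cohort size, and follow-up length are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical graft survival times (years), Weibull with shape > 1,
# i.e. a hazard that increases with time, unlike the exponential's constant hazard.
true_shape, true_scale = 1.5, 15.0
t_event = true_scale * rng.weibull(true_shape, 5000)

# Administrative censoring after a short follow-up of 8 years.
follow_up = 8.0
observed = np.minimum(t_event, follow_up)
event = t_event <= follow_up

# Exponential MLE with right censoring: lambda = events / total time at risk.
lam = event.sum() / observed.sum()
t_half_exp = np.log(2) / lam                 # exponential median ("half-life")

true_median = true_scale * np.log(2) ** (1 / true_shape)
print(t_half_exp > true_median)              # exponential overestimates t1/2
```

With short follow-up, the exponential model only sees the early, low-hazard portion of the curve and extrapolates it forward, inflating t½ exactly as the abstract reports for the 2002 cohort.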
NASA Astrophysics Data System (ADS)
Pasari, S.; Kundu, D.; Dikshit, O.
2012-12-01
Earthquake recurrence interval is one of the important ingredients towards probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are well-established probability models for this recurrence interval estimation. However, they have certain shortcomings, so it is worth searching for alternative distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the aforementioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull distribution. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
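A small sketch of the two-parameter exponentiated (generalized) exponential distribution and its closed-form hazard, the property highlighted above (parameter values are arbitrary; the three-parameter form adds a location shift):

```python
import numpy as np

# Generalized (exponentiated) exponential distribution:
# F(x) = (1 - exp(-lam*x))**alpha for x > 0.
def ge_pdf(x, alpha, lam):
    return alpha * lam * np.exp(-lam * x) * (1 - np.exp(-lam * x)) ** (alpha - 1)

def ge_cdf(x, alpha, lam):
    return (1 - np.exp(-lam * x)) ** alpha

def ge_hazard(x, alpha, lam):
    # Closed form f/(1-F); no incomplete gamma function needed,
    # unlike the gamma-distribution hazard.
    return ge_pdf(x, alpha, lam) / (1 - ge_cdf(x, alpha, lam))

x = np.linspace(0.1, 50, 200)
h = ge_hazard(x, alpha=1.8, lam=0.1)
print(h[0] < h[-1])   # alpha > 1 gives an increasing hazard, tending to lam
```

For alpha > 1 the hazard rises monotonically toward lam, which is the behaviour that makes the model attractive for recurrence-interval work with clustering-free catalogues.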
Individual differences in long-range time representation.
Agostino, Camila S; Caetano, Marcelo S; Balci, Fuat; Claessens, Peter M E; Zana, Yossi
2017-04-01
On the basis of experimental data, long-range time representation has been proposed to follow a highly compressed power function, which has been hypothesized to explain the time inconsistency found in financial discount rate preferences. The aim of this study was to evaluate how well linear and power function models explain empirical data from individual participants tested in different procedural settings. The line paradigm was used in five different procedural variations with 35 adult participants. Data aggregated over the participants showed that fitted linear functions explained more than 98% of the variance in all procedures. A linear regression fit also outperformed a power model fit for the aggregated data. An individual-participant-based analysis showed better fits of a linear model to the data of 14 participants; better fits of a power function with an exponent β > 1 to the data of 12 participants; and better fits of a power function with β < 1 to the data of the remaining nine participants. Of the 35 volunteers, the null hypothesis β = 1 was rejected for 20. The dispersion of the individual β values was approximated well by a normal distribution. These results suggest that, on average, humans perceive long-range time intervals not in a highly compressed, biased manner, but rather in a linear pattern. However, individuals differ considerably in their subjective time scales. This contribution sheds new light on the average and individual psychophysical functions of long-range time representation, and suggests that any attribution of deviation from exponential discount rates in intertemporal choice to the compressed nature of subjective time must entail the characterization of subjective time on an individual-participant basis.
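Comparing linear and power-function fits, as in the individual-participant analysis above, can be sketched as follows (the line-paradigm data here are hypothetical):

```python
import numpy as np

# Hypothetical line-paradigm data: objective intervals vs. subjective placements.
x = np.array([1, 2, 5, 10, 20, 50.0])
y = np.array([1.1, 2.0, 4.8, 10.5, 19.2, 51.0])

# Linear model fitted in the original domain.
m, c = np.polyfit(x, y, 1)
ss_lin = np.sum((y - (m * x + c)) ** 2)

# Power model y = a*x**beta, fitted as a straight line in log-log space.
beta, ln_a = np.polyfit(np.log(x), np.log(y), 1)
ss_pow = np.sum((y - np.exp(ln_a) * x ** beta) ** 2)

print(0.9 < beta < 1.1)   # beta near 1 indicates a nearly linear time scale
```

Testing whether beta differs reliably from 1 for each participant is the per-individual hypothesis test the abstract describes.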
Time-dependent cell disintegration kinetics in lung tumors after irradiation
NASA Astrophysics Data System (ADS)
Chvetsov, Alexei V.; Palta, Jatinder J.; Nagata, Yasushi
2008-05-01
We study the time-dependent disintegration kinetics of tumor cells that did not survive radiotherapy treatment. To evaluate the cell disintegration rate after irradiation, we studied the volume changes of solitary lung tumors after stereotactic radiotherapy. The analysis is performed using two approximations: (1) tumor volume is a linear function of the total cell number in the tumor and (2) the cell disintegration rate is governed by exponential decay with constant risk, which is defined by the initial cell number and a half-life T1/2. The half-life T1/2 is determined using a least-squares fit to the clinical data on lung tumor size variation with time after stereotactic radiotherapy. We show that the tumor volume variation after stereotactic radiotherapy of solitary lung tumors can be approximated by an exponential function. A small constant component in the volume variation does not change with time; however, this component may be the residual irregular density due to radiation fibrosis and was, therefore, subtracted from the total volume variation in our computations. Using computerized fitting of the exponential function to the clinical data for selected patients, we have determined that the average half-life T1/2 of cell disintegration is 28.2 days for squamous cell carcinoma and 72.4 days for adenocarcinoma. This model is needed for simulating the tumor volume variation during radiotherapy, which may be important for time-dependent treatment planning of proton therapy that is sensitive to density variations.
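The volume model described above, exponential decay plus a constant residual component, can be fitted as follows (the volumes, half-life, and fibrosis offset below are made-up numbers, not the clinical data):

```python
import numpy as np
from scipy.optimize import curve_fit

# V(t) = V0 * exp(-ln2 * t / T_half) + C, with C the constant residual
# (e.g. radiation fibrosis) that does not resolve with time.
def volume(t, V0, T_half, C):
    return V0 * np.exp(-np.log(2) * t / T_half) + C

t_days = np.array([0, 14, 28, 56, 84, 112, 140.0])
V_obs = np.array([30.0, 21.8, 16.0, 9.0, 5.5, 3.75, 2.88])  # hypothetical cm^3

popt, _ = curve_fit(volume, t_days, V_obs, p0=(30.0, 30.0, 1.0))
V0, T_half, C = popt
```

Subtracting the fitted constant C before extracting T_half corresponds to the fibrosis correction the abstract describes.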
Exploiting the Adaptation Dynamics to Predict the Distribution of Beneficial Fitness Effects
2016-01-01
Adaptation of asexual populations is driven by beneficial mutations and therefore the dynamics of this process, besides other factors, depends on the distribution of beneficial fitness effects. It is known that on uncorrelated fitness landscapes, this distribution can only be of three types: truncated, exponential and power law. We performed extensive stochastic simulations to study the adaptation dynamics on rugged fitness landscapes, and identified two quantities that can be used to distinguish the underlying distribution of beneficial fitness effects. The first quantity studied here is the fitness difference between successive mutations that spread in the population, which is found to decrease in the case of truncated distributions, remains nearly a constant for exponentially decaying distributions and increases when the fitness distribution decays as a power law. The second quantity of interest, namely, the rate of change of fitness with time also shows quantitatively different behaviour for different beneficial fitness distributions. The patterns displayed by the two aforementioned quantities are found to hold good for both low and high mutation rates. We discuss how these patterns can be exploited to determine the distribution of beneficial fitness effects in microbial experiments. PMID:26990188
Performance of time-series methods in forecasting the demand for red blood cell transfusion.
Pereira, Arturo
2004-05-01
Planning future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care, university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the younger one to test the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)(12) model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the +/- 10 percent interval of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rate for the three methods was 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within the +/- 10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in the planning of blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
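Simple exponential smoothing, the core recursion underlying the Holt-Winters family, can be sketched in a few lines (the demand series is synthetic, not the hospital data; Holt-Winters adds trend and seasonal terms to this same recursion):

```python
import numpy as np

# Simple exponential smoothing: level_t = alpha*y_t + (1-alpha)*level_{t-1}.
def ses_forecast(y, alpha):
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level   # flat forecast for all future periods

# Hypothetical monthly RBC demand (units) fluctuating around a stable mean.
rng = np.random.default_rng(3)
demand = 1000 + rng.normal(0, 50, 60)

forecast = ses_forecast(demand, alpha=0.3)
print(abs(forecast - 1000) < 100)
```

The smoothing constant alpha trades responsiveness against noise rejection; seasonal variants would be the natural match for the monthly seasonality the ARIMA(0,1,1)(0,1,1)12 model captures.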
Bell, C; Paterson, D H; Kowalchuk, J M; Padilla, J; Cunningham, D A
2001-09-01
We compared estimates for the phase 2 time constant (tau) of oxygen uptake (VO2) during moderate- and heavy-intensity exercise, and the slow component of VO2 during heavy-intensity exercise, using previously published exponential models. Estimates for tau and the slow component were different (P < 0.05) among models. For moderate-intensity exercise, a two-component exponential model, or a mono-exponential model fitted from 20 s to 3 min, were best. For heavy-intensity exercise, a three-component model fitted throughout the entire 6 min bout of exercise, or a two-component model fitted from 20 s, were best. When the time delays for the two- and three-component models were equal, the best statistical fit was obtained; however, this model produced an inappropriately low DeltaVO2/DeltaWR (WR, work rate) for the projected phase 2 steady state, and the estimate of phase 2 tau was shortened compared with other models. The slow component was quantified as the difference between VO2 at end-exercise (6 min) and at 3 min (DeltaVO2 (6-3 min); 259 ml x min(-1)), and also using the phase 3 amplitude terms (truncated to end-exercise) from exponential fits (409-833 ml x min(-1)). Onset of the slow component was identified by the phase 3 time delay parameter as occurring at approximately 2 min (vs. the arbitrary 3 min). Using this delay, DeltaVO2 (6-2 min) was approximately 400 ml x min(-1). Use of valid, consistent methods to estimate tau and the slow component in exercise is needed to advance physiological understanding.
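A mono-exponential phase 2 model with time delay, fitted from 20 s onward as described above, can be sketched with synthetic data (baseline, amplitude, delay, and tau are illustrative values):

```python
import numpy as np
from scipy.optimize import curve_fit

# Phase 2 model: VO2(t) = baseline + A*(1 - exp(-(t - TD)/tau)),
# fitted only to data from 20 s onward (past phase 1).
def phase2(t, A, TD, tau):
    return 500.0 + A * (1.0 - np.exp(-(t - TD) / tau))

t = np.arange(20, 180, 5.0)                      # s, fit window from 20 s
rng = np.random.default_rng(6)
vo2 = phase2(t, 1500.0, 15.0, 30.0) + rng.normal(0, 20.0, t.size)

popt, _ = curve_fit(phase2, t, vo2, p0=(1200.0, 10.0, 25.0))
A, TD, tau = popt
```

Multi-component models add further delayed exponential terms of this same form, one per phase, which is why equalizing the time delays changes the recovered tau as the abstract reports.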
Talbot, Clifford B; Lagarto, João; Warren, Sean; Neil, Mark A A; French, Paul M W; Dunsby, Chris
2015-09-01
A correction is proposed to the Delta function convolution method (DFCM) for fitting a multiexponential decay model to time-resolved fluorescence decay data using a monoexponential reference fluorophore. A theoretical analysis of the discretised DFCM multiexponential decay function shows the presence of an extra exponential decay term with the same lifetime as the reference fluorophore, which we denote the residual reference component. This extra decay component arises as a result of the discretised convolution of one of the two terms in the modified model function required by the DFCM. The effect of the residual reference component becomes more pronounced when the fluorescence lifetime of the reference is longer than all of the individual components of the specimen under inspection and when the temporal sampling interval is not negligible compared to the quantity (τR(-1) - τ(-1))(-1), where τR and τ are the fluorescence lifetimes of the reference and the specimen respectively. It is shown that the unwanted residual reference component results in systematic errors when fitting simulated data and that these errors are not present when the proposed correction is applied. The correction is also verified using real data obtained from experiment.
Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C
2017-01-01
Currently, there is a lack of studies on the correct utilization of continuous distributions for dry tropical forests. Therefore, this work aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions by means of statistical tools for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal, gamma, Weibull 2P and Burr. The best fits were selected by Akaike's information criterion. Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves and positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed for both the main species and the stand. The generalization of the function fitted for the main species shows that the development of individual models is needed. The Burr function showed good flexibility in describing the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodum urundeuva, better fitting was obtained with the log-normal function.
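The model-selection step, fitting candidate distributions and ranking by AIC, might look like this (the diameters are synthetic; fixing the location at zero via `floc=0` is a stabilizing choice of this sketch, not stated in the abstract):

```python
import numpy as np
from scipy import stats

# Synthetic diameters with the reverse-J (decreasing) shape typical of
# uneven-aged stands; the true generator here is a 2-parameter Weibull.
rng = np.random.default_rng(4)
d = stats.weibull_min.rvs(0.9, scale=8.0, size=500, random_state=rng)

def aic(dist, data):
    params = dist.fit(data, floc=0)   # location fixed at 0 for fit stability
    ll = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * ll

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull2p": stats.weibull_min,
    "burr": stats.burr12,
}
scores = {name: aic(dist, d) for name, dist in candidates.items()}
best = min(scores, key=scores.get)    # lowest AIC wins
```

Ranking by AIC penalizes extra parameters, so the more flexible Burr form must earn its additional shape parameter, as in the study's comparison.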
Separability of spatiotemporal spectra of image sequences. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Eckert, Michael P.; Buchsbaum, Gershon; Watson, Andrew B.
1992-01-01
The spatiotemporal power spectrum of 14 image sequences was calculated in order to determine the degree to which the spectra are separable in space and time, and to assess the validity of the commonly used exponential correlation model found in the literature. The spectrum was expanded by a Singular Value Decomposition into a sum of separable terms, and an index of spatiotemporal separability was defined as the fraction of the signal energy that can be represented by the first (largest) separable term. All spectra were found to be highly separable with an index of separability above 0.98. The power spectra of the sequences were well fit by a separable model. The power spectrum model corresponds to a product of exponential autocorrelation functions separable in space and time.
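The separability index described above, the fraction of signal energy captured by the first SVD term, is straightforward to compute (the spectra here are toy power-law factors, not the image-sequence data):

```python
import numpy as np

# Index of separability: energy fraction of the first (largest) rank-1 term.
def separability_index(P):
    s = np.linalg.svd(P, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

# A perfectly separable spectrum: outer product of a spatial and temporal part.
fs = np.linspace(0.1, 10, 32)     # spatial frequencies
ft = np.linspace(0.1, 5, 16)      # temporal frequencies
P_sep = np.outer(1 / fs ** 2, 1 / ft ** 2)

print(separability_index(P_sep))  # rank-1 matrix: index is 1

# Mixing in a non-separable term pulls the index below 1.
P_mix = P_sep + 0.1 * np.exp(-np.abs(fs[:, None] - ft[None, :]))
print(separability_index(P_mix) < 1.0)
```

An index above 0.98, as reported for all 14 sequences, means a single product of spatial and temporal factors carries nearly all the spectral energy.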
NASA Astrophysics Data System (ADS)
Pereverzev, Andrey; Sewell, Tommy
2018-03-01
Lattice heat-current time correlation functions for insulators and semiconductors obtained using molecular dynamics (MD) simulations exhibit features of both pure exponential decay and oscillatory-exponential decay. For some materials the oscillatory terms contribute significantly to the lattice heat conductivity calculated from the correlation functions. However, the origin of the oscillatory terms is not well understood, and their contribution to the heat conductivity is accounted for by fitting them to empirical functions. Here, a translationally invariant expression for the heat current in terms of creation and annihilation operators is derived. By using this full phonon-picture definition of the heat current and applying the relaxation-time approximation we explain, at least in part, the origin of the oscillatory terms in the lattice heat-current correlation function. We discuss the relationship between the crystal Hamiltonian and the magnitude of the oscillatory terms. A solvable one-dimensional model is used to illustrate the potential importance of terms that are omitted in the commonly used phonon-picture expression for the heat current. While the derivations are fully quantum mechanical, classical-limit expressions are provided that enable direct contact with classical quantities obtainable from MD.
A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.
1981-06-01
Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on...these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when...assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that
In vivo chlorine and sodium MRI of rat brain at 21.1 T
Elumalai, Malathy; Kitchen, Jason A.; Qian, Chunqi; Gor’kov, Peter L.; Brey, William W.
2017-01-01
Object MR imaging of low-gamma nuclei at the ultrahigh magnetic field of 21.1 T provides a new opportunity for understanding a variety of biological processes. Among these, chlorine and sodium are attracting attention for their involvement in brain function and cancer development. Materials and methods MRI of 35Cl and 23Na were performed and relaxation times were measured in vivo in normal rat (n = 3) and in rat with glioma (n = 3) at 21.1 T. The concentrations of both nuclei were evaluated using the center-out back-projection method. Results The T1 relaxation curve of chlorine in normal rat head was fitted by a bi-exponential function (T1a = 4.8 ms (0.7), T1b = 24.4 ± 7 ms (0.3)) and compared with sodium (T1 = 41.4 ms). Free induction decays (FID) of chlorine and sodium in vivo were bi-exponential with similar rapidly decaying components of T2a∗=0.4 ms and T2a∗=0.53 ms, respectively. Effects of small acquisition matrix and bi-exponential FIDs were assessed for quantification of chlorine (33.2 mM) and sodium (44.4 mM) in rat brain. Conclusion The study modeled a dramatic effect of the bi-exponential decay on MRI results. The revealed increase in chlorine concentration in glioma (~1.5 times) relative to normal brain correlates with the hypothesis asserting the importance of chlorine for tumor progression. PMID:23748497
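Bi-exponential relaxation fitting of the kind reported above can be sketched with synthetic data (the lifetimes, fast fraction, and noise level are illustrative, not the measured 35Cl values):

```python
import numpy as np
from scipy.optimize import curve_fit

# Bi-exponential decay with fast fraction fa:
# S(t) = S0 * (fa*exp(-t/T2a) + (1-fa)*exp(-t/T2b)).
def fid(t, S0, fa, T2a, T2b):
    return S0 * (fa * np.exp(-t / T2a) + (1 - fa) * np.exp(-t / T2b))

t = np.linspace(0.05, 30, 120)                   # ms
rng = np.random.default_rng(5)
S = fid(t, 1.0, 0.6, 0.4, 12.0) + rng.normal(0, 0.002, t.size)

popt, _ = curve_fit(fid, t, S, p0=(1.0, 0.5, 1.0, 10.0),
                    bounds=([0, 0, 0.01, 1], [2, 1, 5, 50]))
S0, fa, T2a, T2b = popt
```

Capturing the fast component requires the earliest samples, which is why a short submillisecond T2a* interacts so strongly with acquisition timing in the study's quantification.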
Evidence of the Exponential Decay Emission in the Swift Gamma-ray Bursts
NASA Technical Reports Server (NTRS)
Sakamoto, T.; Sato, G.; Hill, J.E.; Krimm, H.A.; Yamazaki, R.; Takami, K.; Swindell, S.; Osborne, J.P.
2007-01-01
We present a systematic study of the steep decay emission of gamma-ray bursts (GRBs) observed by the Swift X-Ray Telescope (XRT). In contrast to the analysis in recent literature, instead of extrapolating the Burst Alert Telescope (BAT) data down into the XRT energy range, we extrapolated the XRT data up to the BAT energy range, 15-25 keV, to produce the BAT and XRT composite light curve. Based on our composite light curve fitting, we have confirmed the existence of an exponential decay component which smoothly connects the BAT prompt data to the XRT steep decay for several GRBs. We also find that the XRT steep decay for some of the bursts can be well fitted by a combination of a power-law with an exponential decay model. We suggest that this exponential component may be the emission from an external shock and a sign of the deceleration of the outflow during the prompt phase.
The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.
Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C
2017-06-01
The Erlang-Truncated Exponential ETE distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential EETE distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood was proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution was illustrated with an uncensored data set, and its fit was compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), Akaike information criterion (AIC), Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistics show that the EETE distribution provides a more reasonable fit than the other competing distributions.
Unfolding of Ubiquitin Studied by Picosecond Time-Resolved Fluorescence of the Tyrosine Residue
Noronha, Melinda; Lima, João C.; Bastos, Margarida; Santos, Helena; Maçanita, António L.
2004-01-01
The photophysics of the single tyrosine in bovine ubiquitin (UBQ) was studied by picosecond time-resolved fluorescence spectroscopy, as a function of pH and along thermal and chemical unfolding, with the following results: First, at room temperature (25°C) and below pH 1.5, native UBQ shows single-exponential decays. From pH 2 to 7, triple-exponential decays were observed and the three decay times were attributed to the presence of tyrosine, a tyrosine-carboxylate hydrogen-bonded complex, and excited-state tyrosinate. Second, at pH 1.5, the water-exposed tyrosine of either thermally or chemically unfolded UBQ decays as a sum of two exponentials. The double-exponential decays were interpreted and analyzed in terms of excited-state intramolecular electron transfer from the phenol to the amide moiety, occurring in one of the three rotamers of tyrosine in UBQ. The values of the rate constants indicate the presence of different unfolded states and an increase in the mobility of the tyrosine residue during unfolding. Finally, from the pre-exponential coefficients of the fluorescence decays, the unfolding equilibrium constants (KU) were calculated, as a function of temperature or denaturant concentration. Despite the presence of different unfolded states, both thermal and chemical unfolding data of UBQ could be fitted to a two-state model. The thermodynamic parameters Tm = 54.6°C, ΔHTm = 56.5 kcal/mol, and ΔCp = 890 cal/mol/K, were determined from the unfolding equilibrium constants calculated accordingly, and compared to values obtained by differential scanning calorimetry also under the assumption of a two-state transition, Tm = 57.0°C, ΔHm = 51.4 kcal/mol, and ΔCp = 730 cal/mol/K. PMID:15454455
Short-time vibrational dynamics of metaphosphate glasses
NASA Astrophysics Data System (ADS)
Kalampounias, Angelos G.
2012-02-01
In this paper we present the picosecond vibrational dynamics of a series of binary metaphosphate glasses, namely Na2O-P2O5, MO-P2O5 (M=Ba, Sr, Ca, Mg) and Al2O3-3P2O5, by means of Raman spectroscopy. We studied vibrational dephasing and vibrational frequency modulation by calculating time correlation functions of vibrational relaxation from fits in the frequency domain. The fitting method used enables one to model real line profiles intermediate between Lorentzian and Gaussian by an analytical function which has an analytical counterpart in the time domain. The symmetric stretching modes νs(PO2-) and νs(P-O-P) of the PO2- entity of PØ2O2- units and of P-O-P bridges in metaphosphate arrangements were investigated by Raman spectroscopy and used as probes of the dynamics of these glasses. The vibrational time correlation functions of both modes studied are adequately interpreted under the assumption of an exponential modulation function in the context of Kubo-Rothschild theory, and indicate that the system experiences an intermediate dynamical regime that slows with increasing ionic radius of the cation-modifier. We found that the vibrational correlation functions of all glasses studied comply with the Rothschild approach, assuming that the environmental modulation is described by a stretched exponential decay. The evolution of the dispersion parameter α with increasing ionic radius of the cation indicates a deviation from the simple-liquid model, reflecting the reduction of coherence decay in the perturbation potential as a result of local short-lived aggregates. The results are discussed in the framework of the current phenomenological status of the field.
Andrews, Ross N; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan
2018-02-01
X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) into the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables XPCS to probe dynamics in a broad array of materials, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
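The Kohlrausch fits that this record contrasts with inverse-transform analysis can be illustrated with a small sketch. The contrast, relaxation time, and exponent below are hypothetical values, and the log-log linearization is only one simple way to recover them from noise-free data:

```python
import numpy as np

def fit_kohlrausch(t, g2m1, contrast):
    """Estimate (tau, gamma) of g2(t) - 1 = contrast * exp(-2*(t/tau)**gamma)
    by linearizing: ln(-0.5*ln((g2-1)/contrast)) = gamma*ln t - gamma*ln tau."""
    y = np.log(-0.5 * np.log(g2m1 / contrast))
    gamma, intercept = np.polyfit(np.log(t), y, 1)
    tau = np.exp(-intercept / gamma)
    return tau, gamma

# Synthetic, noise-free single-mode decay (hypothetical parameters)
t = np.logspace(-1, 1, 50)
g2m1 = 0.2 * np.exp(-2.0 * (t / 1.5) ** 1.3)

tau, gamma = fit_kohlrausch(t, g2m1, 0.2)
# gamma > 1 -> compressed (hyper-diffusive), gamma < 1 -> stretched dynamics
```

A multimodal sample violates this single-(tau, gamma) form, which is exactly the case the inverse-transform approach in the abstract is designed to handle.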
Particle yields from numerical simulations
NASA Astrophysics Data System (ADS)
Homor, Marietta M.; Jakovác, Antal
2018-04-01
In this paper we use numerical field theoretical simulations to calculate particle yields. We demonstrate that in the model of local particle creation the deviation from the pure exponential distribution is natural even in equilibrium, and an approximate Tsallis-Pareto-like distribution function can be well fitted to the calculated yields, in accordance with the experimental observations. We present numerical simulations in the classical Φ4 model as well as in the SU(3) quantum Yang-Mills theory to clarify this issue.
Feasibility study on the least square method for fitting non-Gaussian noise data
NASA Astrophysics Data System (ADS)
Xu, Wei; Chen, Wen; Liang, Yingjie
2018-02-01
This study investigates the feasibility of the least squares method for fitting non-Gaussian noise data. We add different levels of two typical non-Gaussian noises, Lévy and stretched Gaussian noise, to the exact values of selected functions (linear, polynomial, and exponential), and calculate the maximum absolute and mean square errors for each case. Lévy and stretched Gaussian distributions have many applications in fractional and fractal calculus. We observe that the non-Gaussian noises are fitted less accurately than Gaussian noise, but the stretched Gaussian cases perform better than the Lévy noise cases. We stress that the least-squares method becomes inapplicable to the non-Gaussian noise cases when the noise level exceeds 5%.
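The effect described above can be illustrated with a minimal numerical sketch. Since NumPy has no Lévy-stable sampler, a Student-t distribution with one degree of freedom stands in for the heavy-tailed noise; the noise levels and the linear test function are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
y_true = 2.0 * x + 1.0  # exact linear model, slope 2

# Gaussian noise vs heavy-tailed noise (Student-t with df=1 is Cauchy-like)
y_gauss = y_true + 0.5 * rng.normal(size=x.size)
y_heavy = y_true + 0.5 * rng.standard_t(df=1, size=x.size)

slope_g, _ = np.polyfit(x, y_gauss, 1)
slope_h, _ = np.polyfit(x, y_heavy, 1)
err_g = abs(slope_g - 2.0)  # typically small
err_h = abs(slope_h - 2.0)  # can be large: outliers dominate the squared loss
```

Because least squares weights residuals quadratically, a single heavy-tailed outlier can pull the fitted slope arbitrarily far, which is the failure mode the study quantifies.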
On the Time-Dependent Analysis of Gamow Decay
ERIC Educational Resources Information Center
Durr, Detlef; Grummt, Robert; Kolb, Martin
2011-01-01
Gamow's explanation of the exponential decay law uses complex "eigenvalues" and exponentially growing "eigenfunctions". This raises the question of how Gamow's description fits into the quantum mechanical description of nature, which is based on real eigenvalues and square integrable wavefunctions. Observing that the time evolution of any…
Small-Scale, Local Area, and Transitional Millimeter Wave Propagation for 5G Communications
NASA Astrophysics Data System (ADS)
Rappaport, Theodore S.; MacCartney, George R.; Sun, Shu; Yan, Hangsong; Deng, Sijia
2017-12-01
This paper studies radio propagation mechanisms that impact handoffs, air interface design, beam steering, and MIMO for 5G mobile communication systems. Knife edge diffraction (KED) and a creeping wave linear model are shown to predict diffraction loss around typical building objects from 10 to 26 GHz, and human blockage measurements at 73 GHz are shown to fit a double knife-edge diffraction (DKED) model which incorporates antenna gains. Small-scale spatial fading of millimeter wave received signal voltage amplitude is generally Ricean-distributed for both omnidirectional and directional receive antenna patterns under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions in most cases, although the log-normal distribution fits measured data better for the omnidirectional receive antenna pattern in the NLOS environment. Small-scale spatial autocorrelations of received voltage amplitudes are shown to fit sinusoidal exponential and exponential functions for LOS and NLOS environments, respectively, with small decorrelation distances of 0.27 cm to 13.6 cm (smaller than the size of a handset) that are favorable for spatial multiplexing. Local area measurements using cluster and route scenarios show how the received signal changes as the mobile moves and transitions from LOS to NLOS locations, with reasonably stationary signal levels within clusters. Wideband mmWave power levels are shown to fade from 0.4 dB/ms to 40 dB/s, depending on travel speed and surroundings.
Probing Gamma-ray Emission of Geminga and Vela with Non-stationary Models
NASA Astrophysics Data System (ADS)
Chai, Yating; Cheng, Kwong-Sang; Takata, Jumpei
2016-06-01
It is generally believed that the high-energy emissions from isolated pulsars are produced by relativistic electrons/positrons accelerated in outer magnetospheric accelerators (outer gaps) via the curvature radiation mechanism, which has a simple exponential cut-off spectrum. However, many gamma-ray pulsars detected by the Fermi LAT (Large Area Telescope) cannot be fitted by a simple exponential cut-off spectrum; instead, a sub-exponential cut-off is more appropriate. It is proposed that realistic outer gaps are non-stationary, and that the observed spectrum is a superposition of different stationary states that are controlled by the currents injected from the inner and outer boundaries. The Vela and Geminga pulsars have the largest fluxes among all observed targets, which allows us to carry out very detailed phase-resolved spectral analysis. We divided the Vela and Geminga pulsars into 19 (the off-pulse of Vela was not included) and 33 phase bins, respectively. We find that most phase-resolved spectra still cannot be fitted by a simple exponential spectrum: in fact, a sub-exponential spectrum is necessary. We conclude that non-stationary states exist even down to very fine phase bins.
NASA Astrophysics Data System (ADS)
Fuente, David; Lizama, Carlos; Urchueguía, Javier F.; Conejero, J. Alberto
2018-01-01
Light attenuation within suspensions of photosynthetic microorganisms has been widely described by the Lambert-Beer equation. However, at depths where most of the light has been absorbed by the cells, light decay deviates from exponential behaviour and shows a lower attenuation than a purely exponential fall would predict. This discrepancy can be modelled through the Mittag-Leffler function, extending the Lambert-Beer law via a tuning parameter α that takes the attenuation process into account. In this work, we describe a fractional Lambert-Beer law to estimate light attenuation within cultures of the model organism Synechocystis sp. PCC 6803. We benchmark the light field measured inside cultures of two different Synechocystis strains, namely the wild type and the antenna mutant strain called Olive, at five different cell densities, against our in silico results. The Mittag-Leffler hyper-parameter α that best fits the data is 0.995, close to the exponential case. One of the most striking results to emerge from this work is that, unlike prior literature on the subject, it provides experimental evidence for the validity of fractional calculus in determining the light field. We show that by applying the fractional Lambert-Beer law for describing light attenuation, we are able to properly model light decay in suspensions of photosynthetic microorganisms.
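The Mittag-Leffler function at the heart of the fractional Lambert-Beer law can be evaluated with a truncated power series; for α = 1 it reduces to the ordinary exponential, recovering the classical law. The attenuation form in the comment is an assumed illustration, not the paper's exact expression:

```python
import math

def mittag_leffler(alpha, z, n_terms=80):
    """Truncated series E_alpha(z) = sum_k z**k / Gamma(alpha*k + 1).
    Adequate for moderate |z|; alpha = 1 recovers exp(z)."""
    return sum(z ** k / math.gamma(alpha * k + 1.0) for k in range(n_terms))

# Assumed fractional attenuation form for illustration:
# I(z)/I0 = E_alpha(-z) with alpha = 0.995 (the best-fit value reported)
attenuation = mittag_leffler(0.995, -1.0)
```

With α = 0.995 the curve is close to exp(-z) at shallow depth but decays more slowly at large z, matching the sub-exponential tail the abstract describes.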
Stellar Surface Brightness Profiles of Dwarf Galaxies
NASA Astrophysics Data System (ADS)
Herrmann, K. A.
2014-03-01
Radial stellar surface brightness profiles of spiral galaxies can be classified into three types: (I) single exponential, or the light falls off with one exponential out to a break radius and then falls off (II) more steeply (“truncated”), or (III) less steeply (“anti-truncated”). Why there are three different radial profile types is still a mystery, including why light falls off as an exponential at all. Profile breaks are also found in dwarf disks, but some dwarf Type IIs are flat or increasing (FI) out to a break before falling off. I have been re-examining the multi-wavelength stellar disk profiles of 141 dwarf galaxies, primarily from Hunter & Elmegreen (2004, 2006). Each dwarf has data in up to 11 wavelength bands: FUV and NUV from GALEX, UBVJHK and Hα from ground-based observations, and 3.6 and 4.5μm from Spitzer. Here I highlight some results from a semi-automatic fitting of this data set including: (1) statistics of break locations and other properties as a function of wavelength and profile type, (2) color trends and radial mass distribution as a function of profile type, and (3) the relationship of the break radius to the kinematics and density profiles of atomic hydrogen gas in the 40 dwarfs of the LITTLE THINGS subsample.
LANDSAT-D investigations in snow hydrology
NASA Technical Reports Server (NTRS)
Dozier, J.
1983-01-01
The atmospheric radiative transfer calculation program (ATRAD) and its supporting programs (setting up atmospheric profiles, making Mie tables, and building an exponential-sum-fitting table) were completed. More sophisticated treatment of aerosol scattering (including the angular phase function or asymmetry factor) and multichannel analysis of results from ATRAD are being developed. Some progress was made on a Monte Carlo program for examining two-dimensional effects, specifically a surface boundary condition that varies across a scene. The MONTE program combines ATRAD and the Monte Carlo method to produce an atmospheric point-spread function. Currently the procedure passes monochromatic tests and the results are reasonable.
[Age index and an interpretation of survivorship curves (author's transl)].
Lohmann, W
1977-01-01
Clinical investigations showed that the age dependences of physiological functions do not show -- as generally assumed -- a linear increase with age, but an exponential one. Considering this result, one can easily interpret the survivorship curve of a population (Gompertz plot). The only requirement is that the probability of death (death rate) be proportional to a function of ageing given by μ(t) = μ0 exp(αt). If survivorship curves derived from annual death statistics are fitted with suitable parameters, the resulting α-values are in agreement with clinical data.
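The survivorship curve implied by the Gompertz death rate μ(t) = μ0 exp(αt) follows by integrating the hazard: S(t) = exp(-(μ0/α)(exp(αt) - 1)). A short sketch (the parameter values below are hypothetical, chosen only to give a human-scale curve):

```python
import math

def gompertz_survival(t, mu0, alpha):
    """Survivorship S(t) for the Gompertz death rate mu(t) = mu0*exp(alpha*t):
    S(t) = exp(-integral of mu) = exp(-(mu0/alpha)*(exp(alpha*t) - 1))."""
    return math.exp(-(mu0 / alpha) * (math.exp(alpha * t) - 1.0))

# Illustrative (hypothetical) parameters: mu0 = 1e-4 per year, alpha = 0.085 per year
s80 = gompertz_survival(80.0, 1e-4, 0.085)  # fraction surviving to age 80
```

Fitting α to annual death statistics via this closed form is the procedure the abstract compares against clinical ageing data.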
Application of a linked stress release model in Corinth Gulf and Central Ionian Islands (Greece)
NASA Astrophysics Data System (ADS)
Mangira, Ourania; Vasiliadis, Georgios; Papadimitriou, Eleftheria
2017-06-01
Spatio-temporal stress changes and interactions between adjacent fault segments constitute one of the most important components of seismic hazard assessment, as they can alter the occurrence probability of strong earthquakes on these segments. The interactions between adjacent areas are investigated by means of the linked stress release model for moderate earthquakes (M ≥ 5.2) in the Corinth Gulf and the Central Ionian Islands (Greece). The study areas were divided into two subareas each, based on seismotectonic criteria. The seismicity of each subarea is investigated by means of a stochastic point process, and its behavior is determined by the conditional intensity function, which usually takes an exponential form. A conditional intensity function of Weibull form is used to identify the most appropriate among the models (simple, independent, and linked stress release model) for interpreting the earthquake generation process. The appropriateness of the models was decided after evaluation via the Akaike information criterion. Although the curves of the conditional intensity functions exhibit similar behavior, the exponential-type conditional intensity function seems to fit the data better.
A Field Study of Pixel-Scale Variability of Raindrop Size Distribution in the MidAtlantic Region
NASA Technical Reports Server (NTRS)
Tokay, Ali; D'adderio, Leo Pio; Wolff, David P.; Petersen, Walter A.
2016-01-01
The spatial variability of the parameters of the raindrop size distribution and its derivatives is investigated through a field study in which collocated Particle Size and Velocity (Parsivel2) and two-dimensional video disdrometers were operated at six sites at Wallops Flight Facility, Virginia, from December 2013 to March 2014. A three-parameter exponential function was employed to determine the spatial variability across the study domain, where the maximum separation distance was 2.3 km. The nugget parameter of the exponential function was set to 0.99, and the correlation distance d0 and shape parameter s0 were retrieved by minimizing the root-mean-square error after fitting to the correlations of physical parameters. Fits were very good for almost all 15 physical parameters. The retrieved d0 and s0 were about 4.5 km and 1.1, respectively, for rain rate (RR) when all 12 disdrometers were reporting rainfall with a rain-rate threshold of 0.1 mm h-1 for 1-min averages. The d0 decreased noticeably when one or more disdrometers were required to report rain. The d0 was considerably different for a number of parameters (e.g., mass-weighted diameter) but about the same for the others (e.g., RR) when the rainfall threshold was reset to 12 and 18 dBZ for Ka- and Ku-band reflectivity, respectively, following the expected minimum detectable signals of the Global Precipitation Measurement mission's spaceborne radars. Reducing the database by eliminating a site did not alter d0 as long as the fit was adequate. The correlations of 5-min rain accumulations were lower when disdrometer observations were simulated for rain gauges at different bucket sizes.
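The three-parameter exponential correlation model described above can be written compactly; this sketch plugs in the nugget, d0, and s0 values reported for rain rate (the functional form rho(d) = nugget * exp(-(d/d0)^s0) is our reading of the abstract):

```python
import math

def spatial_correlation(d, nugget=0.99, d0=4.5, s0=1.1):
    """Three-parameter exponential correlation model for cross-correlations
    between disdrometer sites: rho(d) = nugget * exp(-(d/d0)**s0),
    with separation d and correlation distance d0 in km."""
    return nugget * math.exp(-((d / d0) ** s0))

rho_0 = spatial_correlation(0.0)   # nugget: correlation at zero separation
rho_23 = spatial_correlation(2.3)  # at the maximum separation in the study
```

The nugget below 1.0 encodes instrument and sampling noise at zero separation, while s0 controls how sharply correlation falls off with distance.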
Statistics of Optical Coherence Tomography Data From Human Retina
de Juan, Joaquín; Ferrone, Claudia; Giannini, Daniela; Huang, David; Koch, Giorgio; Russo, Valentina; Tan, Ou; Bruni, Carlo
2010-01-01
Optical coherence tomography (OCT) has recently become one of the primary methods for noninvasive probing of the human retina. The pseudoimage formed by OCT (the so-called B-scan) varies probabilistically across pixels due to complexities in the measurement technique. Hence, sensitive automatic procedures of diagnosis using OCT may exploit statistical analysis of the spatial distribution of reflectance. In this paper, we perform a statistical study of retinal OCT data. We find that the stretched exponential probability density function can model the distribution of intensities in OCT pseudoimages well. Moreover, we show a small but significant correlation between neighboring pixels when measuring OCT intensities with pixels of about 5 µm. We then develop a simple joint probability model for the OCT data consistent with known retinal features. This model fits the stretched exponential distribution of intensities and their spatial correlation well. In normal retinas, fit parameters of this model are relatively constant along retinal layers but vary across layers. However, in retinas with diabetic retinopathy, large spikes of parameter modulation interrupt the constancy within layers, exactly where pathologies are visible. We argue that these results give hope for improvement in statistical pathology-detection methods even when the disease is in its early stages. PMID:20304733
Chowell, Gerardo; Viboud, Cécile
2016-10-01
The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing models that capture the baseline transmission characteristics in order to generate reliable epidemic forecasts. Improved models for epidemic forecasting could be achieved by identifying signature features of epidemic growth, which could inform the design of models of disease spread and reveal important characteristics of the transmission process. In particular, it is often taken for granted that the early growth phase of different growth processes in nature follows exponential growth dynamics. In the context of infectious disease spread, this assumption is often convenient for describing a transmission process with mass action kinetics using differential equations and for generating analytic expressions and estimates of the reproduction number. In this article, we carry out a simulation study to illustrate the impact of incorrectly assuming an exponential-growth model to characterize the early phase (e.g., 3-5 disease generation intervals) of an infectious disease outbreak that follows near-exponential growth dynamics. Specifically, we assess the impact on: 1) goodness of fit, 2) bias in the growth parameter, and 3) short-term epidemic forecasts. Designing transmission models and statistical approaches that more flexibly capture the profile of epidemic growth could lead to enhanced model fit, improved estimates of key transmission parameters, and more realistic epidemic forecasts.
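The contrast between exponential and sub-exponential early growth can be illustrated with the generalized-growth equation dC/dt = r C^p, where p = 1 gives exponential growth and 0 < p < 1 gives sub-exponential growth. This is our choice of illustration; the paper's simulation design may differ, and the parameters below are arbitrary:

```python
import numpy as np

def generalized_growth(C0, r, p, t):
    """Integrate the generalized-growth model dC/dt = r * C**p with
    explicit Euler steps on a uniform time grid t."""
    dt = t[1] - t[0]
    C = np.empty_like(t)
    C[0] = C0
    for i in range(1, t.size):
        C[i] = C[i - 1] + dt * r * C[i - 1] ** p
    return C

t = np.linspace(0.0, 10.0, 2001)
exp_growth = generalized_growth(1.0, 0.5, 1.0, t)  # p = 1: exponential
sub_exp = generalized_growth(1.0, 0.5, 0.7, t)     # p = 0.7: sub-exponential
```

Fitting an exponential model to the sub-exponential trajectory over a short early window produces the biased growth-rate estimates and over-projected forecasts the abstract warns about.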
Optical coherence tomography assessment of vessel wall degradation in aneurysmatic thoracic aortas
NASA Astrophysics Data System (ADS)
Real, Eusebio; Eguizabal, Alma; Pontón, Alejandro; Val-Bernal, J. Fernando; Mayorga, Marta; Revuelta, José M.; López-Higuera, José; Conde, Olga M.
2013-06-01
Optical coherence tomography images of ascending thoracic human aortas from aneurysms exhibit disorders in the smooth muscle cell structure of the media layer of the aortic vessel, as well as elastin degradation. Ex-vivo measurements of human samples provide results that correlate with the pathologist's diagnosis in aneurysmatic and control aortas. The observed disorders are studied as possible hallmarks for aneurysm diagnosis. To this end, the backscattering profile along the vessel thickness has been evaluated by fitting its decay with two different models, a third-order polynomial and an exponential. The discontinuities present in the wall of aneurysmatic aortas are slightly better identified with the exponential approach. Aneurysmatic aortic walls present an uneven reflectivity decay compared with healthy vessels. The fitting error proved to be the most favorable indicator for aneurysm diagnosis, as it provides a measure of how uniform the decay is along the vessel thickness.
Ihlen, Espen A. F.; van Schooten, Kimberley S.; Bruijn, Sjoerd M.; Pijnappels, Mirjam; van Dieën, Jaap H.
2017-01-01
Over the last decades, various measures have been introduced to assess stability during walking. All of these measures assume that gait stability may be equated with exponential stability, where dynamic stability is quantified by a Floquet multiplier or Lyapunov exponent. These constructs of dynamic stability assume that the gait dynamics are time independent and without phase transitions. In that case the temporal change in distance, d(t), between neighboring trajectories in state space is assumed to be an exponential function of time. However, results from walking models and empirical studies show that the assumptions of exponential stability break down in the vicinity of the phase transitions present in each step cycle. Here we apply a general non-exponential construct of gait stability, called fractional stability, which can define dynamic stability in the presence of phase transitions. Fractional stability employs the fractional indices, α and β, of the differential operator, which allow modeling of singularities in d(t) that cannot be captured by exponential stability. Fractional stability provided an improved fit of d(t) compared to exponential stability when applied to trunk accelerations during daily-life walking in community-dwelling older adults. Moreover, using multivariate empirical mode decomposition surrogates, we found that the singularities in d(t), which were well modeled by fractional stability, are created by phase-dependent modulation of gait. The new construct of fractional stability may represent a physiologically more valid concept of stability in the vicinity of phase transitions and may thus pave the way for a more unified concept of gait stability. PMID:28900400
Time-resolved photoluminescence in Mobil Composition of Matter-48
NASA Astrophysics Data System (ADS)
Liu, Y. L.; Lee, W. Z.; Shen, J. L.; Lee, Y. C.; Cheng, P. W.; Cheng, C. F.
2004-12-01
Dynamical properties of Mobil Composition of Matter (MCM)-48 were studied by time-resolved photoluminescence (PL). The PL intensity exhibits a clearly nonexponential profile, which can be fitted by a stretched exponential function. In the temperature range from 50 to 300 K, the PL decay lifetime becomes thermally activated with a characteristic energy of 25 meV, which is suggested to be an indication of a phonon-assisted nonradiative process. A model is proposed to explain the relaxation behavior of the PL in MCM-48.
Regression of altitude-produced cardiac hypertrophy.
NASA Technical Reports Server (NTRS)
Sizemore, D. A.; Mcintyre, T. W.; Van Liere, E. J.; Wilson, M. F.
1973-01-01
The rate of regression of cardiac hypertrophy with time has been determined in adult male albino rats. The hypertrophy was induced by intermittent exposure to simulated high altitude. The percentage hypertrophy was much greater (46%) in the right ventricle than in the left (16%). The regression could be adequately fitted to a single exponential function with a half-time of 6.73 plus or minus 0.71 days (90% CI). There was no significant difference in the rates of regression for the two ventricles.
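The reported half-time converts to a single-exponential decay constant via k = ln 2 / t½, from which the remaining fraction of excess mass at any time follows; a minimal sketch using the published value:

```python
import math

def regression_fraction(t, half_time=6.73):
    """Fraction of excess ventricular mass remaining after t days, for
    single-exponential regression with the reported half-time of 6.73 days."""
    k = math.log(2.0) / half_time  # decay constant, 1/day
    return math.exp(-k * t)

remaining = regression_fraction(6.73)  # after one half-time, exactly half remains
```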
Cross-Conjugated Nanoarchitectures
2013-08-23
compounds were further evaluated by Lippert–Mataga analysis of the fluorescence solvatochromism and by measurement of quantum yields and fluorescence lifetimes. [Extraction fragment of a photophysical data table for A(mP)2A, D(Th)2D, and A(Th)2A compounds; the recoverable footnotes note values calculated from Lippert–Mataga plots and double-exponential lifetime fits, e.g. τ1 = 21.5 ns (73%) with τ2 = 3.7 ns (27%), and τ1 = 0.85 ns.]
Theory, computation, and application of exponential splines
NASA Technical Reports Server (NTRS)
Mccartin, B. J.
1981-01-01
A generalization of the semiclassical cubic spline known in the literature as the exponential spline is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape preserving approximant is developed. A sequence of selected curve-fitting examples are presented which clearly demonstrate the advantages of exponential splines over cubic splines.
NASA Astrophysics Data System (ADS)
Weiss, J. R.; Saunders, A.; Qiu, Q.; Foster, J. H.; Gomez, D.; Bevis, M. G.; Smalley, R., Jr.; Cimbaro, S.; Lenzano, L. E.; Barón, J.; Baez, J. C.; Echalar, A.; Avery, J.; Wright, T. J.
2017-12-01
We use a large regional network of continuous GPS sites to investigate postseismic deformation following the Mw 8.8 Maule and Mw 8.1 Pisagua earthquakes in Chile. Geodetic observations of surface displacements associated with megathrust earthquakes aid our understanding of the subduction zone earthquake cycle, including postseismic processes such as afterslip and viscoelastic relaxation. The observations also help place constraints on the rheology and structure of the crust and upper mantle. We first model the data empirically and find that, while single-term logarithmic functions adequately fit the postseismic time series, they do a poor job of characterizing the rapid displacements in the days to weeks following the earthquakes. Combined exponential-logarithmic functions better capture the inferred near-field transition between afterslip and viscous relaxation; however, displacements are best fit by three-term exponential functions with characteristic decay times of 15, 250, and 1500 days. Viscoelastic modeling of the velocity field and time series following the Maule earthquake suggests that the rheology is complex but is consistent with a 100-km-thick asthenospheric channel of viscosity 10^18 Pa s sandwiched between a 40-km-thick elastic lid and a strong viscoelastic upper mantle. Variations in lid thickness of up to 40 km may be present, and in some locations rapid deformation within the first months to years following the Maule event requires an even lower effective viscosity or a significant contribution from afterslip. We investigate this further by jointly inverting the GPS data for the time evolution of afterslip and viscous flow in the mantle wedge surrounding the Maule event.
An exponential scaling law for the strain dependence of the Nb3Sn critical current density
NASA Astrophysics Data System (ADS)
Bordini, B.; Alknes, P.; Bottura, L.; Rossi, L.; Valentinis, D.
2013-07-01
The critical current density of the Nb3Sn superconductor is strongly dependent on the strain applied to the material. In order to investigate this dependence, it is a common practice to measure the critical current of Nb3Sn strands for different values of applied axial strain. In the literature, several models have been proposed to describe these experimental data in the reversible strain region. All these models are capable of fitting the measurement results in the strain region where data are collected, but tend to predict unphysical trends outside the range of data, and especially for large strain values. In this paper we present a model of a new strain function, together with the results obtained by applying the new scaling law on relevant datasets. The data analyzed consisted of the critical current measurements at 4.2 K that were carried out under applied axial strain at Durham University and the University of Geneva on different strand types. With respect to the previous models proposed, the new scaling function does not present problems at large strain values, has a lower number of fitting parameters (only two instead of three or four), and is very stable, so that, starting from few experimental points, it can estimate quite accurately the strand behavior in a strain region where there are no data. A relationship is shown between the proposed strain function and the elastic strain energy, and an analogy is drawn with the exponential form of the McMillan equation for the critical temperature.
A Simple Mechanical Experiment on Exponential Growth
ERIC Educational Resources Information Center
McGrew, Ralph
2015-01-01
With a rod, cord, pulleys, and slotted masses, students can observe and graph exponential growth in the cord tension over a factor of increase as large as several hundred. This experiment is adaptable for use either in algebra-based or calculus-based physics courses, fitting naturally with the study of sliding friction. Significant parts of the…
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1986-01-01
The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
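An exponentially fitted Euler step illustrates why such algorithms suit the induction and equilibration behavior described above: for a scalar linear ODE the scheme is exact at any step size, where a classical polynomial method would need tiny steps for stiff λ. The test problem below is our own, not from the report:

```python
import math

def exp_euler_linear(y0, lam, g, dt, n_steps):
    """Exponentially fitted Euler for y' = -lam*y + g (g constant):
    y_{n+1} = y_n*exp(-lam*dt) + (g/lam)*(1 - exp(-lam*dt)).
    This recurrence reproduces the exact solution of the linear problem
    regardless of step size, unlike polynomial-interpolant methods."""
    y = y0
    E = math.exp(-lam * dt)
    for _ in range(n_steps):
        y = y * E + (g / lam) * (1.0 - E)
    return y

# Stiff decay toward the equilibrium g/lam = 0.1, taken with large steps
y_end = exp_euler_linear(1.0, 1000.0, 100.0, 0.1, 5)
```

For nonlinear kinetics the scheme is no longer exact, but it remains accurate whenever species behave locally like exponential growth or decay, which is the report's argument for exponential fitting.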
Phonation threshold pressure across the pitch range: preliminary test of a model.
Solomon, Nancy Pearl; Ramanathan, Pradeep; Makashay, Matthew J
2007-09-01
This study sought to examine the specific relationship between phonation threshold pressure (PTP) and voice fundamental frequency (F(0)) across the pitch range. A published theoretical model of this relationship described a quadratic equation, with PTP increasing exponentially with F(0). Prospective data from eight adults with normal, untrained voices were collected. Subjects produced their quietest phonation at 10 randomly ordered pitches from 5% to 95% of their semitone pitch range at 10% intervals. Analysis included curve fitting for individual and group data, as well as comparisons to the previous model. The group data fit a quadratic function similar to that proposed previously, but the specific quadratic coefficient and constant values differed. Four of the individual subjects' data were best fit by quartic functions, two by quadratic functions, and one by a linear function. This preliminary study indicates that PTP may be minimal at a "comfortable" pitch rather than the lowest pitch tested, and that, for some individuals, PTP may be slightly elevated during the passaggio between modal and falsetto vocal registers. These data support the general form of the theoretical PTP-F(0) function for these speakers, and indicate the possibility of potential refinements to the model. Future studies with larger groups of male and female subjects across a wider age range may eventually reveal the specific nature of the function.
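A quadratic PTP-F0 relationship of the kind described can be fitted with an ordinary polynomial least-squares routine; the data below are synthetic stand-ins (hypothetical coefficients), not the study's measurements:

```python
import numpy as np

# Hypothetical data: pitch as % of semitone range, PTP in arbitrary pressure units,
# generated from an assumed quadratic trend for illustration
pitch = np.array([5, 15, 25, 35, 45, 55, 65, 75, 85, 95], dtype=float)
ptp = 0.0012 * pitch ** 2 - 0.08 * pitch + 4.0

coeffs = np.polyfit(pitch, ptp, 2)              # [quadratic, linear, constant]
ptp_min_pitch = -coeffs[1] / (2.0 * coeffs[0])  # vertex: pitch of minimum PTP
```

The vertex of the fitted parabola gives the "comfortable" pitch at which PTP is minimal, the feature the study highlights in its group data.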
Zeng, Qiang; Shi, Feina; Zhang, Jianmin; Ling, Chenhan; Dong, Fei; Jiang, Biao
2018-01-01
Purpose: To present a new modified tri-exponential model for diffusion-weighted imaging (DWI) that detects a strictly diffusion-limited compartment, and to compare it with the conventional bi- and tri-exponential models. Methods: Multi-b-value DWI with 17 b-values up to 8,000 s/mm² was performed on six volunteers. The corrected Akaike information criterion (AICc) and squared prediction error (SPE) were calculated to compare the three models. Results: The mean f0 values ranged from 11.9% to 18.7% in white matter ROIs and from 1.2% to 2.7% in gray matter ROIs. In all white matter ROIs, the AICc of the modified tri-exponential model was the lowest (p < 0.05 for five ROIs), indicating that the new model has the best fit among the three; the SPEs of the bi-exponential model were the highest (p < 0.05), suggesting the bi-exponential model is unable to predict the signal intensity at ultra-high b-values. The mean ADC_very-slow values were extremely low in white matter (1–7 × 10⁻⁶ mm²/s) but not in gray matter (251–445 × 10⁻⁶ mm²/s), indicating that the conventional tri-exponential model fails to represent a distinct compartment. Conclusions: The strictly diffusion-limited compartment may be an important component of white matter. The new model fits better than the other two models and may provide additional information. PMID:29535599
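A plausible reading of the three competing signal forms can be sketched as below (the function signatures, parameter values, and the exact placement of the constant f0 term are my assumptions about the model family, not equations reproduced from the paper):

```python
import math

def bi_exp(b, s0, f1, d1, d2):
    # Conventional bi-exponential DWI model: two decaying compartments.
    return s0 * (f1 * math.exp(-b * d1) + (1 - f1) * math.exp(-b * d2))

def tri_exp(b, s0, f1, f2, d1, d2, d3):
    # Conventional tri-exponential DWI model: three decaying compartments.
    return s0 * (f1 * math.exp(-b * d1) + f2 * math.exp(-b * d2)
                 + (1 - f1 - f2) * math.exp(-b * d3))

def modified_tri_exp(b, s0, f0, f1, d1, d2):
    # Modified tri-exponential model: a constant fraction f0 stands for a
    # strictly diffusion-limited compartment (its ADC is pinned at zero).
    return s0 * (f0 + f1 * math.exp(-b * d1) + (1 - f0 - f1) * math.exp(-b * d2))

# As b grows very large, the modified model plateaus at s0 * f0 rather than
# decaying to zero, which is what lets it capture signal at ultra-high b.
limit = modified_tri_exp(1e6, s0=1.0, f0=0.15, f1=0.5, d1=1e-3, d2=1e-4)
```

The nonzero plateau is the structural difference the AICc/SPE comparison is probing: the conventional models must send the signal to zero at high b, while the modified model need not.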
Ertas, Gokhan; Onaygil, Can; Akin, Yasin; Kaya, Handan; Aribal, Erkin
2016-12-01
To investigate the accuracy of diffusion coefficients and diffusion coefficient ratios of breast lesions and of glandular breast tissue from mono- and stretched-exponential models for quantitative diagnosis in diffusion-weighted magnetic resonance imaging (MRI). We analyzed 170 pathologically confirmed lesions (85 benign and 85 malignant) imaged using a 3.0T MR scanner. Small regions of interest (ROIs) focusing on the highest signal intensity were obtained for lesions and for glandular tissue of the contralateral breast. The apparent diffusion coefficient (ADC) and distributed diffusion coefficient (DDC) were estimated by nonlinear fitting of the mono- and stretched-exponential models, respectively. Coefficient ratios were calculated by dividing the lesion coefficient by the glandular tissue coefficient. The stretched-exponential model provided significantly better fits than the mono-exponential model (P < 0.001): 65% of the better fits for glandular tissue and 71% for lesions. High correlation was found between the models in diffusion coefficients (0.99–0.81) and in coefficient ratios (0.94). The highest diagnostic accuracy was found for the DDC ratio (area under the curve [AUC] = 0.93) compared with lesion DDC, ADC ratio, and lesion ADC (AUC = 0.91, 0.90, 0.90), but with no statistically significant difference (P > 0.05). At optimal thresholds, the DDC ratio achieves 93% sensitivity, 80% specificity, and 87% overall diagnostic accuracy, while the ADC ratio gives 89% sensitivity, 78% specificity, and 83% overall diagnostic accuracy. The stretched-exponential model fits signal intensity measurements from both lesion and glandular tissue ROIs better. Although the DDC ratio estimated using this model shows a higher diagnostic accuracy than the ADC ratio, lesion DDC, and lesion ADC, the difference is not statistically significant. J. Magn. Reson. Imaging 2016;44:1633-1641. © 2016 International Society for Magnetic Resonance in Medicine.
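The two signal models being compared have simple closed forms; a minimal sketch (the coefficient values are illustrative, and the ratio computation is a schematic stand-in for the paper's ROC-derived thresholds):

```python
import math

def mono_exp(b, s0, adc):
    # Mono-exponential model: S(b) = S0 * exp(-b * ADC).
    return s0 * math.exp(-b * adc)

def stretched_exp(b, s0, ddc, alpha):
    # Stretched-exponential model: S(b) = S0 * exp(-(b * DDC)**alpha),
    # with 0 < alpha <= 1; alpha = 1 recovers the mono-exponential case.
    return s0 * math.exp(-((b * ddc) ** alpha))

# Coefficient-ratio diagnostic: lesion DDC divided by glandular-tissue DDC.
lesion_ddc, glandular_ddc = 0.9e-3, 1.8e-3   # mm^2/s, invented values
ddc_ratio = lesion_ddc / glandular_ddc        # low ratios point toward malignancy
```

Normalizing the lesion coefficient by the patient's own glandular tissue is what the ratio measures; the study's finding is that this helps modestly but not significantly over the lesion coefficient alone.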
Manikis, Georgios C.; Marias, Kostas; Lambregts, Doenja M. J.; Nikiforaki, Katerina; van Heeswijk, Miriam M.; Bakers, Frans C. H.; Beets-Tan, Regina G. H.; Papanikolaou, Nikolaos
2017-01-01
Purpose The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential models, both Gaussian and non-Gaussian, in diffusion-weighted imaging of rectal cancer. Material and methods Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm²) at a 1.5T scanner. Four different diffusion models, mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied to whole-tumor volumes of interest. Two statistical criteria were used to assess their fitting performance: the adjusted R² and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. Results All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fits. However, both model-selection criteria indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model over an average of 53% and 33% of the tumor area, respectively. Non-Gaussian behavior was exhibited over an average of 37% of the tumor area according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models show that the MG model did not outperform every other model in all patients or over the entire tumor area. Conclusion No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained by increased tumour heterogeneity, where areas with high vascularity are fitted better by bi-exponential models, while necrotic areas mostly follow mono-exponential behavior. PMID:28863161
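AIC-based model selection of the kind used here reduces to a short computation; a sketch with invented residual sums of squares (the corrected small-sample form AICc and Akaike weights are the standard formulas, not the paper's numbers):

```python
import math

def aicc(n, k, rss):
    # Corrected Akaike Information Criterion for a least-squares fit:
    # AICc = n*ln(RSS/n) + 2k + 2k(k+1)/(n - k - 1).
    return n * math.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def akaike_weights(aic_values):
    # Relative likelihood of each candidate model given its AIC score.
    best = min(aic_values)
    rel = [math.exp(-(a - best) / 2) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

# Invented example: three models of 2, 4, and 3 parameters fitted to n = 7 b-values.
n = 7
scores = [aicc(n, k, rss) for k, rss in [(2, 0.020), (4, 0.012), (3, 0.015)]]
weights = akaike_weights(scores)
```

The penalty terms are why a more complex model with a smaller RSS can still lose: with only 7 b-values per pixel, each extra parameter is expensive, which matches the study's preference for the simplest MG model over much of the tumor area.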
Corzett, Christopher H; Goodman, Myron F; Finkel, Steven E
2013-06-01
Escherichia coli DNA polymerases (Pol) II, IV, and V serve dual roles by facilitating efficient translesion DNA synthesis while simultaneously introducing genetic variation that can promote adaptive evolution. Here we show that these alternative polymerases are induced as cells transition from exponential to long-term stationary-phase growth in the absence of induction of the SOS regulon by external agents that damage DNA. By monitoring the relative fitness of isogenic mutant strains expressing only one alternative polymerase over time, spanning hours to weeks, we establish distinct growth phase-dependent hierarchies of polymerase mutant strain competitiveness. Pol II confers a significant physiological advantage by facilitating efficient replication and creating genetic diversity during periods of rapid growth. Pol IV and Pol V make the largest contributions to evolutionary fitness during long-term stationary phase. Consistent with their roles providing both a physiological and an adaptive advantage during stationary phase, the expression patterns of all three SOS polymerases change during the transition from log phase to long-term stationary phase. Compared to the alternative polymerases, Pol III transcription dominates during mid-exponential phase; however, its abundance decreases to <20% during long-term stationary phase. Pol IV transcription dominates as cells transition out of exponential phase into stationary phase and a burst of Pol V transcription is observed as cells transition from death phase to long-term stationary phase. These changes in alternative DNA polymerase transcription occur in the absence of SOS induction by exogenous agents and indicate that cell populations require appropriate expression of all three alternative DNA polymerases during exponential, stationary, and long-term stationary phases to attain optimal fitness and undergo adaptive evolution.
Channel response to sediment release: insights from a paired analysis of dam removal
Collins, Mathias J.; Snyder, Noah P.; Boardman, Graham; Banks, William S.; Andrews, Mary; Baker, Matthew E.; Conlon, Maricate; Gellis, Allen; McClain, Serena; Miller, Andrew; Wilcock, Peter
2017-01-01
Dam removals with unmanaged sediment releases are good opportunities to learn about channel response to abruptly increased bed material supply. Understanding these events is important because they affect aquatic habitats and human uses of floodplains. A longstanding paradigm in geomorphology holds that response rates to landscape disturbance exponentially decay through time. However, a previous study of the Merrimack Village Dam (MVD) removal on the Souhegan River in New Hampshire, USA, showed that an exponential function poorly described the early geomorphic response. Erosion of impounded sediments there was two-phased. We had an opportunity to quantitatively test the two-phase response model proposed for MVD by extending the record there and comparing it with data from the Simkins Dam removal on the Patapsco River in Maryland, USA. The watershed sizes are the same order of magnitude (10² km²), and at both sites low-head dams were removed (~3–4 m) and ~65 000 m³ of sand-sized sediments were discharged to low-gradient reaches. Analyzing four years of repeat morphometry and sediment surveys at the Simkins site, as well as continuous discharge and turbidity data, we observed the two-phase erosion response described for MVD. In the early phase, approximately 50% of the impounded sediment at Simkins was eroded rapidly during modest flows. After incision to base level and widening, a second phase began when further erosion depended on floods large enough to go over bank and access impounded sediments more distant from the newly-formed channel. Fitting functional forms to the data for both sites, we found that two-phase exponential models with changing decay constants fit the erosion data better than single-phase models. Valley width influences the two-phase erosion responses upstream, but downstream responses appear more closely related to local gradient, sediment re-supply from the upstream impoundments, and base flows.
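The two-phase exponential idea, with a decay constant that changes at a breakpoint, can be written as a piecewise cumulative-erosion function (a schematic form with invented parameters, not the fitted Simkins or MVD values):

```python
import math

def two_phase_erosion(t, v_inf, k1, k2, t_break):
    # Cumulative eroded volume toward the total v_inf: fast decay constant k1
    # before t_break (incision and widening during modest flows), slower k2
    # after (erosion limited to overbank floods reaching distant sediment).
    if t <= t_break:
        return v_inf * (1.0 - math.exp(-k1 * t))
    v_at_break = v_inf * (1.0 - math.exp(-k1 * t_break))
    return v_at_break + (v_inf - v_at_break) * (1.0 - math.exp(-k2 * (t - t_break)))

# Illustrative: ~65 000 m^3 impoundment, break after half a year.
eroded_year_one = two_phase_erosion(1.0, 65000.0, 2.0, 0.3, 0.5)
```

The function is continuous at the breakpoint by construction; only the rate changes, which is the feature the paper's model comparison tests against single-phase decay.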
The Lunar Rock Size Frequency Distribution from Diviner Infrared Measurements
NASA Astrophysics Data System (ADS)
Elder, C. M.; Hayne, P. O.; Piqueux, S.; Bandfield, J.; Williams, J. P.; Ghent, R. R.; Paige, D. A.
2016-12-01
Knowledge of the rock size frequency distribution on a planetary body is important for understanding its geologic history and for selecting landing sites. The rock size frequency distribution can be estimated by counting rocks in high resolution images, but most bodies in the solar system have limited areas with adequate coverage. We propose an alternative method to derive and map rock size frequency distributions using multispectral thermal infrared data acquired at multiple times during the night. We demonstrate this new technique for the Moon using data from the Lunar Reconnaissance Orbiter (LRO) Diviner radiometer in conjunction with three-dimensional thermal modeling, leveraging the differential cooling rates of different rock sizes. We assume an exponential rock size frequency distribution, which has been shown to yield a good fit to rock populations in various locations on the Moon, Mars, and Earth [2, 3] and solve for the best radiance fits as a function of local time and wavelength. This method presents several advantages: 1) unlike other thermally derived rock abundance techniques, it is sensitive to rocks smaller than the diurnal skin depth; 2) it does not result in apparent decrease in rock abundance at night; and 3) it can be validated using images taken at the lunar surface. This method yields both the fraction of the surface covered in rocks of all sizes and the exponential factor, which defines the rate of drop-off in the exponential function at large rock sizes. We will present maps of both these parameters for the Moon, and provide a geological interpretation. In particular, this method reveals rocks in the lunar highlands that are smaller than previous thermal methods could detect. [1] Bandfield J. L. et al. (2011) JGR, 116, E00H02. [2] Golombek and Rapp (1997) JGR, 102, E2, 4117-4129. [3] Cintala, M.J. and K.M. McBride (1995) NASA Technical Memorandum 104804.
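The exponential size-frequency form referenced above ([2]) is commonly expressed as a cumulative fractional area; a sketch (parameter values are invented, and the drop-off constant q is treated as a free constant here rather than the abundance-dependent function used by Golombek and Rapp):

```python
import math

def cfa(diameter, k, q):
    # Cumulative fractional area covered by rocks of diameter >= `diameter`:
    # F(D) = k * exp(-q * D). k is the total rock cover fraction (the value at
    # D = 0); q sets the rate of drop-off at large rock sizes.
    return k * math.exp(-q * diameter)

# Illustrative: 5% total rock cover, drop-off constant q = 2.5 per meter.
total_cover = cfa(0.0, k=0.05, q=2.5)   # all rock sizes: F(0) = k
large_rocks = cfa(1.0, k=0.05, q=2.5)   # only rocks >= 1 m across
```

These two quantities, the cover fraction k and the exponential factor q, are exactly the two parameters the Diviner method maps.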
δ-exceedance records and random adaptive walks
NASA Astrophysics Data System (ADS)
Park, Su-Chan; Krug, Joachim
2016-08-01
We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y_k > Y_{k-1} − δ_{k-1}, with a deterministic sequence δ_k > 0 called the handicap. For constant δ_k ≡ δ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase where the mean record value increases indefinitely and a stationary phase where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and decreasing and increasing sequences δ_k, focusing in particular on the case when δ_k matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first-order transition emerges when δ_k is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ_k corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
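The δ-exceedance record condition is easy to simulate; a minimal sketch (constant handicap only, run here on a hand-picked sequence rather than i.i.d. exponential draws):

```python
def delta_exceedance_records(values, delta):
    # Record values of a delta-exceedance process with constant handicap:
    # entry x becomes the next record if x > (last record) - delta.
    # delta = 0 recovers the ordinary record process.
    records = [values[0]]          # the first entry is always a record
    for x in values[1:]:
        if x > records[-1] - delta:
            records.append(x)
    return records
```

With a positive handicap, later entries can set records even when they fall below the previous record value; that is what allows a finite fraction of all entries to be records in the stationary phase.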
Bælum, Jacob; Prestat, Emmanuel; David, Maude M.; Strobel, Bjarne W.
2012-01-01
Mineralization potentials, rates, and kinetics of the three phenoxy acid (PA) herbicides, 2,4-dichlorophenoxyacetic acid (2,4-D), 4-chloro-2-methylphenoxyacetic acid (MCPA), and 2-(4-chloro-2-methylphenoxy)propanoic acid (MCPP), were investigated and compared in 15 soils collected from five continents. The mineralization patterns were fitted by zero/linear or exponential growth forms of the three-half-order models and by logarithmic (log), first-order, or zero-order kinetic models. Prior and subsequent to the mineralization event, tfdA genes were quantified using real-time PCR to estimate the genetic potential for degrading PA in the soils. In 25 of the 45 mineralization scenarios, ∼60% mineralization was observed within 118 days. Elevated concentrations of tfdA in the range 1 × 10⁵ to 5 × 10⁷ gene copies g⁻¹ of soil were observed in soils where mineralization could be described by using growth-linked kinetic models. A clear trend was observed that the mineralization rates of the three PAs occurred in the order 2,4-D > MCPA > MCPP, and a correlation was observed between rapid mineralization and soils exposed to PA previously. Finally, for 2,4-D mineralization, all seven mineralization patterns which were best fitted by the exponential model yielded a higher tfdA gene potential after mineralization had occurred than the three mineralization patterns best fitted by the Lin model. PMID:22635998
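Two of the simpler kinetic forms named above can be sketched as cumulative-mineralization curves (the three-half-order models actually fitted in the paper are more elaborate; the parameters below are invented):

```python
import math

def first_order(t, p_max, k):
    # First-order kinetics: cumulative % mineralized approaches the
    # mineralization potential p_max exponentially with rate constant k.
    return p_max * (1.0 - math.exp(-k * t))

def zero_order(t, rate, p_max):
    # Zero-order (linear) kinetics: constant rate, capped at p_max.
    return min(rate * t, p_max)

# Evaluate both at day 0, day 30, and the study's 118-day endpoint.
curve_exp = [first_order(t, 60.0, 0.05) for t in (0, 30, 118)]
curve_lin = [zero_order(t, 0.6, 60.0) for t in (0, 30, 118)]
```

The qualitative distinction the tfdA result turns on is growth-linked (accelerating, exponential-type) versus non-growth (linear-type) curves, not the particular constants.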
The resolved star formation history of M51a through successive Bayesian marginalization
NASA Astrophysics Data System (ADS)
Martínez-García, Eric E.; Bruzual, Gustavo; Magris C., Gladis; González-Lópezlira, Rosa A.
2018-02-01
We have obtained the time and space-resolved star formation history (SFH) of M51a (NGC 5194) by fitting Galaxy Evolution Explorer (GALEX), Sloan Digital Sky Survey and near-infrared pixel-by-pixel photometry to a comprehensive library of stellar population synthesis models drawn from the Synthetic Spectral Atlas of Galaxies (SSAG). We fit for each space-resolved element (pixel) an independent model where the SFH is averaged in 137 age bins, each one 100 Myr wide. We used the Bayesian Successive Priors (BSP) algorithm to mitigate the bias in the present-day spatial mass distribution. We test BSP with different prior probability distribution functions (PDFs); this exercise suggests that the best prior PDF is the one concordant with the spatial distribution of the stellar mass as inferred from the near-infrared images. We also demonstrate that varying the implicit prior PDF of the SFH in SSAG does not affect the results. By summing the contributions to the global star formation rate of each pixel, at each age bin, we have assembled the resolved SFH of the whole galaxy. According to these results, the star formation rate of M51a was exponentially increasing for the first 10 Gyr after the big bang, and then turned into an exponentially decreasing function until the present day. Superimposed, we find a main burst of star formation at t ≈ 11.9 Gyr after the big bang.
NASA Astrophysics Data System (ADS)
Medich, David Christopher
1997-09-01
The biokinetics of Iodophenylpentadecanoic acid (123I-IPPA) during a chronic period of myocardial infarction were determined and compared to 201Tl. IPPA was assessed as a perfusion and metabolic tracer in the scintigraphic diagnosis of coronary artery disease. The myocardial clearance kinetics were measured by placing a series of thermoluminescent dosimeters (TLDs) on normal and infarcted tissue to measure the local myocardial activity content over time. The arterial blood pool activity was fit to a bi-exponential function for 201Tl and a tri-exponential function for 123I-IPPA to estimate the left ventricle contribution to TLD response. At equilibrium, the blood pool contribution was estimated experimentally to be less than 5% of the total TLD response. The method was unable to resolve the initial uptake of the imaging agent due in part to the 2 minute TLD response integration time and in part to the 30 second lag time for the first TLD placement. A noticeable disparity was observed between the tracer concentrations of IPPA in normal and ischemic tissue of approximately 2:1. The fitting parameters (representing the biokinetic eigenvalue rate constants) were related to the fundamental rate constants of a recycling biokinetic model. The myocardial IPPA content within normal tissue was elevated after approximately 130 minutes post injection. This phenomenon was observed in all but one (950215) of the IPPA TLD kinetics curves.
Stellar Surface Brightness Profiles of Dwarf Galaxies
NASA Astrophysics Data System (ADS)
Herrmann, Kimberly A.; LITTLE THINGS Team
2012-01-01
Radial stellar surface brightness profiles of spiral galaxies can be classified into three types: (I) single exponential, (II) truncated: the light falls off with one exponential out to a break radius and then falls off more steeply, and (III) anti-truncated: the light falls off with one exponential out to a break radius and then falls off less steeply. Stellar surface brightness profile breaks are also found in dwarf disk galaxies, but with an additional category: (FI) flat-inside: the light is roughly constant or increasing and then falls off beyond a break. We have been re-examining the multi-wavelength stellar disk profiles of 141 dwarf galaxies, primarily from Hunter & Elmegreen (2006, 2004). Each dwarf has data in up to 11 wavelength bands: FUV and NUV from GALEX, UBVJHK and H-alpha from ground-based observations, and 3.6 and 4.5 microns from Spitzer. In this talk, I will highlight results from a semi-automatic fitting of this data set, including: (1) statistics of break locations and other properties as a function of wavelength and profile type, (2) color trends and radial mass distribution as a function of profile type, and (3) the relationship of the break radius to the kinematics and density profiles of atomic hydrogen gas in the 41 dwarfs of the LITTLE THINGS subsample. We gratefully acknowledge funding for this research from the National Science Foundation (AST-0707563).
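The three classic profile types reduce to a broken-exponential form with different scale lengths inside and outside the break radius; a sketch in magnitude units (parameter values are invented, and the dwarfs' additional flat-inside FI category would need a different inner form):

```python
import math

MAG_PER_EFOLD = 2.5 / math.log(10.0)   # ~1.0857 mag per exponential scale length

def broken_exp_profile(r, mu0, h_in, h_out, r_break):
    # Surface brightness (mag/arcsec^2) of a broken-exponential disk:
    # scale length h_in inside r_break, h_out outside. Type I: h_out == h_in;
    # Type II (truncated): h_out < h_in; Type III (anti-truncated): h_out > h_in.
    if r <= r_break:
        return mu0 + MAG_PER_EFOLD * r / h_in
    mu_break = mu0 + MAG_PER_EFOLD * r_break / h_in
    return mu_break + MAG_PER_EFOLD * (r - r_break) / h_out
```

Because magnitudes increase outward, a steeper light fall-off beyond the break (Type II) appears as a larger slope in this parameterization; fitting such profiles band by band is essentially what the semi-automatic pipeline described above does.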
Locomotor function after long-duration space flight: effects and motor learning during recovery.
Mulavara, Ajitkumar P; Feiveson, Alan H; Fiedler, James; Cohen, Helen; Peters, Brian T; Miller, Chris; Brady, Rachel; Bloomberg, Jacob J
2010-05-01
Astronauts returning from space flight and performing Earth-bound activities must rapidly transition from the microgravity-adapted sensorimotor state to that of Earth's gravity. The goal of the current study was to assess locomotor dysfunction and recovery of function after long-duration space flight using a test of functional mobility. Eighteen International Space Station crewmembers with an average flight duration of 185 days performed the functional mobility test (FMT) pre-flight and post-flight. To perform the FMT, subjects walked at a self-selected pace through an obstacle course consisting of several pylons and obstacles set up on a base of 10-cm-thick, medium-density foam for a total of six trials per test session. The primary outcome measure was the time to complete the course (TCC, in seconds). To assess the long-term recovery trend of locomotor function after return from space flight, a multilevel exponential recovery model was fitted to the log-transformed TCC data. All crewmembers exhibited altered locomotor function after space flight, with a median 48% increase in the TCC. From the fitted model we calculated that a typical subject would recover to 95% of his/her pre-flight level at approximately 15 days post-flight. In addition, to assess the early motor learning responses after returning from space flight, we modeled performance over the six trials during the first post-flight session by a similar multilevel exponential relation. We found a significant positive correlation between measures of long-term recovery and early motor learning (P < 0.001) obtained from the respective models. We concluded that two types of recovery processes influence an astronaut's ability to re-adapt to Earth's gravity environment. Early motor learning helps astronauts make rapid modifications in their motor control strategies during the first hours after landing.
Further, this early motor learning appears to reinforce the adaptive realignment, facilitating re-adaptation to Earth's 1-g environment on return from space flight.
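An exponential recovery model on the log-transformed completion time implies a closed-form answer to "when has a subject recovered to 95%?"; a sketch with an invented decay rate (the paper's fitted coefficients are not reproduced here):

```python
import math

def log_tcc(t, alpha, beta, lam):
    # Exponential recovery model on log-transformed completion time:
    # log(TCC) = alpha + beta * exp(-lam * t), with t in days post-flight.
    return alpha + beta * math.exp(-lam * t)

def days_to_recover(fraction_remaining, lam):
    # Time until the elevated component beta*exp(-lam*t) has decayed to
    # `fraction_remaining` of its initial size: t = -ln(fraction)/lam.
    return -math.log(fraction_remaining) / lam

# Illustrative rate: lam = 0.2 per day places 95% recovery near 15 days,
# the same ballpark as the figure quoted in the abstract.
t95 = days_to_recover(0.05, lam=0.2)
```

The 95% criterion is just a threshold on the decaying exponential term, so the recovery day scales inversely with the fitted rate constant.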
VizieR Online Data Catalog: Vela Junior (RX J0852.0-4622) HESS image (HESS+, 2018)
NASA Astrophysics Data System (ADS)
H. E. S. S. Collaboration; Abdalla, H.; Abramowski, A.; Aharonian, F.; Ait Benkhali, F.; Akhperjanian, A. G.; Andersson, T.; Anguener, E. O.; Arakawa, M.; Arrieta, M.; Aubert, P.; Backes, M.; Balzer, A.; Barnard, M.; Becherini, Y.; Becker Tjus, J.; Berge, D.; Bernhard, S.; Bernloehr, K.; Blackwell, R.; Boettcher, M.; Boisson, C.; Bolmont, J.; Bordas, P.; Bregeon, J.; Brun, F.; Brun, P.; Bryan, M.; Buechele, M.; Bulik, T.; Capasso, M.; Carr, J.; Casanova, S.; Cerruti, M.; Chakraborty, N.; Chalme-Calvet, R.; Chaves, R. C. G.; Chen, A.; Chevalier, J.; Chretien, M.; Coffaro, M.; Colafrancesco, S.; Cologna, G.; Condon, B.; Conrad, J.; Cui, Y.; Davids, I. D.; Decock, J.; Degrange, B.; Deil, C.; Devin, J.; Dewilt, P.; Dirson, L.; Djannati-Atai, A.; Domainko, W.; Donath, A.; Drury, L. O'c.; Dutson, K.; Dyks, J.; Edwards, T.; Egberts, K.; Eger, P.; Ernenwein, J.-P.; Eschbach, S.; Farnier, C.; Fegan, S.; Fernandes, M. V.; Fiasson, A.; Fontaine, G.; Foerster, A.; Funk, S.; Fuessling, M.; Gabici, S.; Gajdus, M.; Gallant, Y. A.; Garrigoux, T.; Giavitto, G.; Giebels, B.; Glicenstein, J. F.; Gottschall, D.; Goyal, A.; Grondin, M.-H.; Hahn, J.; Haupt, M.; Hawkes, J.; Heinzelmann, G.; Henri, G.; Hermann, G.; Hervet, O.; Hinton, J. A.; Hofmann, W.; Hoischen, C.; Holler, M.; Horns, D.; Ivascenko, A.; Iwasaki, H.; Jacholkowska, A.; Jamrozy, M.; Janiak, M.; Jankowsky, D.; Jankowsky, F.; Jingo, M.; Jogler, T.; Jouvin, L.; Jung-Richardt, I.; Kastendieck, M. A.; Katarzynski, K.; Katsuragawa, M.; Katz, U.; Kerszberg, D.; Khangulyan, D.; Khelifi, B.; Kieffer, M.; King, J.; Klepser, S.; Klochkov, D.; Kluzniak, W.; Kolitzus, D.; Komin, Nu.; Kosack, K.; Krakau, S.; Kraus, M.; Krueger, P. P.; Laffon, H.; Lamanna, G.; Lau, J.; Lees, J.-P.; Lefaucheur, J.; Lefranc, V.; Lemiere, A.; Lemoine-Goumard, M.; Lenain, J.-P.; Leser, E.; Lohse, T.; Lorentz, M.; Liu, R.; Lopez-Coto, R.; Lypova, I.; Marandon, V.; Marcowith, A.; Mariaud, C.; Marx, R.; Maurin, G.; Maxted, N.; Mayer, M.; Meintjes, P. 
J.; Meyer, M.; Mitchell, A. M. W.; Moderski, R.; Mohamed, M.; Mohrmann, L.; Mora, K.; Moulin, E.; Murach, T.; Nakashima, S.; de Naurois, M.; Niederwanger, F.; Niemiec J.; Oakes, L.; O'Brien, P.; Odaka, H.; Oettl, S.; Ohm, S.; Ostrowski, M.; Oya, I.; Padovani, M.; Panter, M.; Parsons, R. D.; Paz Arribas, M.; Pekeur, N. W.; Pelletier, G.; Perennes, C.; Petrucci, P.-O.; Peyaud, B.; Piel, Q.; Pita, S.; Poon, H.; Prokhorov, D.; Prokoph, H.; Puehlhofer, G.; Punch, M.; Quirrenbach, A.; Raab, S.; Reimer, A.; Reimer, O.; Renaud, M.; de Los Reyes, R.; Richter, S.; Rieger, F.; Romoli, C.; Rowell, G.; Rudak, B.; Rulten, C. B.; Sahakian, V.; Saito, S.; Salek, D.; Sanchez, D. A.; Santangelo, A.; Sasaki, M.; Schlickeiser, R.; Schuessler, F.; Schulz, A.; Schwanke, U.; Schwemmer, S.; Seglar-Arroyo, M.; Settimo, M.; Seyffert, A. S.; Shafi, N.; Shilon, I.; Simoni, R.; Sol, H.; Spanier, F.; Spengler, G.; Spies, F.; Stawarz, L.; Steenkamp, R.; Stegmann, C.; Stycz, K.; Sushch, I.; Takahashi, T.; Tavernet, J.-P.; Tavernier, T.; Taylor, A. M.; Terrier, R.; Tibaldo, L.; Tiziani, D.; Tluczykont, M.; Trichard, C.; Tsuji, N.; Tuffs, R.; Uchiyama, Y.; van der, Walt D. J.; van Eldik, C.; van Rensburg, C.; van Soelen, B.; Vasileiadis, G.; Veh, J.; Venter, C.; Viana, A.; Vincent, P.; Vink, J.; Voisin, F.; Voelk, H. J.; Vuillaume, T.; Wadiasingh, Z.; Wagner, S. J.; Wagner, P.; Wagner, R. M.; White, R.; Wierzcholska, A.; Willmann, P.; Woernlein, A.; Wouters, D.; Yang, R.; Zabalza, V.; Zaborov, D.; Zacharias, M.; Zanin, R.; Zdziarski, A. A.; Zech, A.; Zefi, F.; Ziegler, A.; Zywucka, N.
2018-03-01
skymap.fit: H.E.S.S. excess skymap in FITS format of the region comprising Vela Junior and its surroundings. The excess map has been corrected for the gradient of exposure and smoothed with a Gaussian function of width 0.08° to match the analysis point spread function, matching the procedure applied to derive the maps in Fig. 1. sp_stat.txt: H.E.S.S. spectral points and fit parameters for Vela Junior (H.E.S.S. data points in Fig. 3 and Tab. A.2 and H.E.S.S. spectral fit parameters in Tab. 4). The errors in this file represent statistical uncertainties at the 1 sigma confidence level. The covariance matrix of the fit is also included in the format: c_11 c_12 c_13 / c_21 c_22 c_23 / c_31 c_32 c_33, where the subindices represent the following parameters of the power-law with exponential cut-off (ECPL) formula in Tab. 2: 1: flux normalization (Phi0); 2: spectral index (Gamma); 3: inverse of the cut-off energy (lambda = 1/E_cut). The units for the covariance matrix are the same as for the fit parameters. Notice that, while the fit parameters section of the file shows E_cut as a parameter, the fit was done in lambda = 1/E_cut; hence the covariance matrix shows the values for lambda in TeV⁻¹. sp_syst.txt: H.E.S.S. spectral points and fit parameters for Vela Junior (H.E.S.S. data points in Fig. 3 and Tab. A.2 and H.E.S.S. spectral fit parameters in Tab. 4). The errors in this file represent systematic uncertainties at the 1 sigma confidence level. The integral fluxes for several energy ranges are also included. (4 data files).
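The ECPL spectral shape referred to above has a simple form; a sketch (the parameter values are invented, not the published Vela Junior fit, and the reference-energy normalization used in real H.E.S.S. fits is omitted):

```python
import math

def ecpl(e_tev, phi0, gamma, lam):
    # Power law with exponential cut-off:
    # dN/dE = Phi0 * E**(-Gamma) * exp(-lambda * E),
    # with E in TeV and lambda = 1/E_cut in TeV^-1.
    return phi0 * e_tev ** (-gamma) * math.exp(-lam * e_tev)

# Illustrative evaluation at 1 TeV and 10 TeV.
flux_1tev = ecpl(1.0, 1e-11, 2.0, 1.0 / 6.7)
flux_10tev = ecpl(10.0, 1e-11, 2.0, 1.0 / 6.7)
```

Fitting in lambda = 1/E_cut rather than E_cut, as the data files note, keeps the parameterization well behaved when the cut-off energy runs toward infinity (lambda → 0).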
Shot model parameters for Cygnus X-1 through phase portrait fitting
NASA Technical Reports Server (NTRS)
Lochner, James C.; Swank, J. H.; Szymkowiak, A. E.
1991-01-01
Shot models for systems having an approximately 1/f power density spectrum are developed by utilizing a distribution of shot durations. Parameters of the distribution are determined by fitting the power spectrum either with analytic forms for the spectrum of a shot model with a given shot profile, or with the spectrum derived from numerical realizations of trial shot models. The shot fraction is specified by fitting the phase portrait, which is a plot of intensity at a given time versus intensity at a delayed time and in principle is sensitive to different shot profiles. These techniques have been extensively applied to the X-ray variability of Cygnus X-1, using HEAO 1 A-2 and an Exosat ME observation. The power spectra suggest models having characteristic shot durations lasting from milliseconds to a few seconds, while the phase portrait fits give shot fractions of about 50 percent. Best fits to the portraits are obtained if the amplitude of the shot is a power-law function of the duration of the shot. These fits prefer shots having a symmetric exponential rise and decay. Results are interpreted in terms of a distribution of magnetic flares in the accretion disk.
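A distribution of shot durations flattens the single-shot Lorentzian into a broad, roughly 1/f spectrum; a sketch (the duration grid and weighting scheme are invented; only the Lorentzian form for an exponential shot is taken as given):

```python
import math

def shot_power_spectrum(f, durations, weights):
    # Shot-noise power spectrum: each exponential shot of duration tau
    # contributes a Lorentzian tau**2 / (1 + (2*pi*f*tau)**2), scaled by a
    # weight standing for shot rate times squared amplitude.
    return sum(w * tau ** 2 / (1.0 + (2.0 * math.pi * f * tau) ** 2)
               for tau, w in zip(durations, weights))

# Durations spanning milliseconds to seconds, weighted so that shot amplitude
# behaves as a power law of duration, broadening the summed spectrum.
taus = [0.001 * 2 ** i for i in range(12)]      # 1 ms .. ~2 s
wts = [1.0 / tau for tau in taus]
spectrum = [shot_power_spectrum(f, taus, wts) for f in (0.1, 1.0, 10.0)]
```

A single duration gives a flat spectrum below its break frequency and f⁻² above; superposing many durations is what produces the intermediate slope the Cygnus X-1 power spectra require.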
A Relevance Vector Machine-Based Approach with Application to Oil Sand Pump Prognostics
Hu, Jinfei; Tse, Peter W.
2013-01-01
Oil sand pumps are widely used in the mining industry for the delivery of mixtures of abrasive solids and liquids. Because they operate under highly adverse conditions, these pumps usually experience significant wear. Consequently, equipment owners are quite often forced to invest substantially in system maintenance to avoid unscheduled downtime. In this study, an approach combining relevance vector machines (RVMs) with a sum of two exponential functions was developed to predict the remaining useful life (RUL) of field pump impellers. To handle field vibration data, a novel feature extracting process was proposed to arrive at a feature varying with the development of damage in the pump impellers. A case study involving two field datasets demonstrated the effectiveness of the developed method. Compared with standalone exponential fitting, the proposed RVM-based model was much better able to predict the remaining useful life of pump impellers. PMID:24051527
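The sum-of-two-exponentials trend can be extrapolated to a failure threshold to obtain a remaining-useful-life estimate; a sketch (the trend parameters, threshold, and crossing search are invented for illustration and do not reproduce the paper's RVM machinery):

```python
import math

def degradation(t, a, b, c, d):
    # Sum-of-two-exponentials health-indicator trend: y(t) = a*e^(b*t) + c*e^(d*t).
    return a * math.exp(b * t) + c * math.exp(d * t)

def remaining_useful_life(t_now, threshold, params, dt=1.0, t_max=10000.0):
    # RUL = time until the extrapolated trend first reaches the failure threshold.
    t = t_now
    while degradation(t, *params) < threshold:
        t += dt
        if t > t_max:
            return None           # no predicted crossing within the horizon
    return t - t_now

# Illustrative single-exponential special case (c = 0), threshold 2.0.
rul = remaining_useful_life(0.0, 2.0, (1.0, 0.01, 0.0, 0.0))
```

In the paper the exponential parameters are themselves updated by the RVM as new vibration features arrive; the fixed parameters above stand in for one such snapshot.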
Edge Extraction by an Exponential Function Considering X-ray Transmission Characteristics
NASA Astrophysics Data System (ADS)
Kim, Jong Hyeong; Youp Synn, Sang; Cho, Sung Man; Jong Joo, Won
2011-04-01
3-D radiographic methodology has been in the spotlight for quality inspection of mass-produced products and in-service inspection of aging products. To locate a target object in 3-D space, its characteristic contours, such as edge lengths, edge angles, and vertices, are very important. Even for a product with simple geometry, it is very difficult to obtain clear shape contours from a single radiographic image. The image contains scattering noise at the edges and ambiguity arising from X-ray absorption within the body. This article suggests a concise method to extract whole edges from a single X-ray image. At the edge point of the object, the intensity of the X-ray decays exponentially as the X-ray penetrates the object. Exploiting this decay property, edges are extracted by least-squares fitting controlled by the coefficient of determination.
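The core step described in this abstract, fitting an exponential decay and judging the fit by the coefficient of determination, can be sketched as follows. This is a minimal illustration on synthetic, noise-free data; the attenuation coefficient and intensity values are hypothetical, not taken from the paper.

```python
import numpy as np

def fit_exponential_decay(x, intensity):
    """Fit I(x) = I0 * exp(-mu * x) by linear least squares on log(I),
    returning (I0, mu, r_squared)."""
    y = np.log(intensity)
    slope, intercept = np.polyfit(x, y, 1)   # log-linear fit
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot               # coefficient of determination
    return np.exp(intercept), -slope, r2

# Synthetic transmission profile: I0 = 100, mu = 0.3 per mm (illustrative)
x = np.linspace(0.0, 10.0, 50)
I = 100.0 * np.exp(-0.3 * x)
I0, mu, r2 = fit_exponential_decay(x, I)
```

An R² threshold close to 1 would then accept a candidate edge region as genuinely exponential, which is the spirit of the "control of Coefficient of Determination" mentioned above.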
Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook
2015-01-01
Discrete survival data are routinely encountered in many fields of study including behavior science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model by treating discrete survival times as continuous survival times, and the model comparison criteria, AIC and BIC, in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in detail to further demonstrate the proposed methodology. PMID:25772374
NASA Astrophysics Data System (ADS)
Guarnieri, R.; Padilha, L.; Guarnieri, F.; Echer, E.; Makita, K.; Pinheiro, D.; Schuch, A.; Boeira, L.; Schuch, N.
Ultraviolet radiation type B (UV-B, 280-315 nm) is well known for its damaging effects on life on Earth, including the possibility of causing skin cancer in humans. However, atmospheric ozone has absorption bands in this spectral region, reducing its incidence on Earth's surface. The ozone amount is therefore one of the parameters, besides clouds, aerosols, solar zenith angle, altitude, and albedo, that determine the UV-B radiation intensity reaching the Earth's surface. The total ozone column, in Dobson Units, determined by the TOMS spectrometer on board a NASA satellite, and UV-B radiation measurements obtained by a UV-B radiometer model MS-210W (Eko Instruments) were correlated. The measurements were obtained at the Observatório Espacial do Sul - Instituto Nacional de Pesquisas Espaciais (OES/CRSPE/INPE-MCT), coordinates Lat. 29.44°S, Long. 53.82°W. The correlations were made using UV-B measurements at fixed solar zenith angles, and only days with clear sky were selected in the period from July 1999 to December 2001. Moreover, the mathematical behavior of the correlation at different angles was examined, and correlation coefficients were determined by linear and first-order exponential fits. Both fits yielded high correlation coefficients, and the difference between the linear and exponential fits can be considered small.
Apparent power-law distributions in animal movements can arise from intraspecific interactions
Breed, Greg A.; Severns, Paul M.; Edwards, Andrew M.
2015-01-01
Lévy flights have gained prominence for analysis of animal movement. In a Lévy flight, step-lengths are drawn from a heavy-tailed distribution such as a power law (PL), and a large number of empirical demonstrations have been published. Others, however, have suggested that animal movement is ill fit by PL distributions or contend a state-switching process better explains apparent Lévy flight movement patterns. We used a mix of direct behavioural observations and GPS tracking to understand step-length patterns in females of two related butterflies. We initially found movement in one species (Euphydryas editha taylori) was best fit by a bounded PL, evidence of a Lévy flight, while the other (Euphydryas phaeton) was best fit by an exponential distribution. Subsequent analyses introduced additional candidate models and used behavioural observations to sort steps based on intraspecific interactions (interactions were rare in E. phaeton but common in E. e. taylori). These analyses showed a mixed-exponential is favoured over the bounded PL for E. e. taylori and that when step-lengths were sorted into states based on the influence of harassing conspecific males, both states were best fit by simple exponential distributions. The direct behavioural observations allowed us to infer the underlying behavioural mechanism is a state-switching process driven by intraspecific interactions rather than a Lévy flight. PMID:25519992
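The model-selection step described above, comparing an exponential against a (bounded) power-law candidate for step lengths, can be sketched with closed-form maximum-likelihood fits and AIC. The data below are synthetic exponential steps, not the butterfly data; the fixed lower bound xmin is an assumption of this sketch.

```python
import numpy as np

def aic_exponential(x):
    """AIC for an exponential fit; MLE rate is 1/mean."""
    lam = 1.0 / x.mean()
    loglik = len(x) * np.log(lam) - lam * x.sum()
    return 2 * 1 - 2 * loglik

def aic_power_law(x, xmin):
    """AIC for a continuous power law with fixed lower bound xmin
    (Hill-type MLE for the exponent)."""
    n = len(x)
    alpha = 1.0 + n / np.sum(np.log(x / xmin))
    loglik = n * np.log((alpha - 1.0) / xmin) - alpha * np.sum(np.log(x / xmin))
    return 2 * 1 - 2 * loglik

rng = np.random.default_rng(0)
steps = rng.exponential(scale=5.0, size=2000) + 1.0  # synthetic steps, all >= 1
aic_exp = aic_exponential(steps)
aic_pl = aic_power_law(steps, xmin=1.0)
# For exponential-tailed data the exponential model should win (lower AIC)
```

In the study above the same comparison, extended with mixed-exponential and bounded power-law candidates, is what reversed the initial Lévy-flight conclusion.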
Human mobility in space from three modes of public transportation
NASA Astrophysics Data System (ADS)
Jiang, Shixiong; Guan, Wei; Zhang, Wenyi; Chen, Xu; Yang, Liu
2017-10-01
Human mobility patterns have drawn much attention from researchers for decades, given their importance for urban planning and traffic management. In this study, taxi GPS trajectories and smart-card transaction data for subway and bus trips in Beijing are utilized to model human mobility in space. The original datasets are cleaned and processed to obtain the displacement of each trip from its origin and destination locations. Then, the Akaike information criterion is adopted to select the best-fitting distribution for each mode from the candidate ones. The results indicate that displacements of taxi trips follow the exponential distribution. The exponential distribution also fits displacements of bus trips well; however, the exponents of the two modes are significantly different. Displacements of subway trips behave differently and are well fitted by the gamma distribution. Evidently, human mobility differs across modes. To explore overall human mobility, the three datasets are combined into a fused dataset according to the annual ridership proportions. The fused displacements follow a power-law distribution with an exponential cutoff. Combining different transportation modes to model citywide human mobility in this way is novel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan
2018-02-01
X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables XPCS to probe the dynamics in a broad array of materials, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. This paper proposes an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, it is demonstrated that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
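A CONTIN-style inverse Laplace transform of the kind described above can be sketched as a regularized non-negative least-squares inversion. The bimodal decay below is synthetic (relaxation times 1 and 10 in arbitrary units), and the regularization weight is an illustrative choice, not the paper's.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic bimodal correlation decay: g(t) = 0.6*exp(-t/1) + 0.4*exp(-t/10)
t = np.linspace(0.05, 50.0, 200)
g = 0.6 * np.exp(-t / 1.0) + 0.4 * np.exp(-t / 10.0)

# Grid of candidate relaxation times (log-spaced); kernel K[i, j] = exp(-t_i / tau_j)
tau = np.logspace(-1, 2, 60)
K = np.exp(-t[:, None] / tau[None, :])

# Tikhonov-regularized non-negative inversion:
#   minimize ||K w - g||^2 + alpha * ||w||^2  subject to  w >= 0
alpha = 1e-3
A = np.vstack([K, np.sqrt(alpha) * np.eye(len(tau))])
b = np.concatenate([g, np.zeros(len(tau))])
w, _ = nnls(A, b)
# w is the recovered weight distribution over relaxation times; for this
# input it concentrates near tau = 1 and tau = 10 rather than a single mode
```

A single stretched-exponential fit would average over these two modes; the inverse-transform view keeps them separate, which is the point made in the abstract.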
Measurements of exciton diffusion by degenerate four-wave mixing in CdS1-xSex
NASA Astrophysics Data System (ADS)
Schwab, H.; Pantke, K.-H.; Hvam, J. M.; Klingshirn, C.
1992-09-01
We performed transient-grating experiments to study the diffusion of excitons in CdS1-xSex mixed crystals. The decay of the initially created exciton density grating is well described for t<=1 ns by a stretched-exponential function. For later times this decay changes over to a behavior that is well fitted by a simple exponential function. During resonant excitation of the localized states, we find the diffusion coefficient (D) to be considerably smaller than in the binary compounds CdSe and CdS. At 4.2 K, D is below our experimental resolution, which is about 0.025 cm²/s. With increasing lattice temperature (T_lattice) the diffusion coefficient increases. It was therefore possible to prove, in a diffusion experiment, that at T_lattice<=5 K the excitons are localized, while the exciton-phonon interaction leads to a delocalization and thus to the onset of diffusion. It was possible to deduce the diffusion coefficient of the extended excitons as well as the energetic position of the mobility edge.
K-S Test for Goodness of Fit and Waiting Times for Fatal Plane Accidents
ERIC Educational Resources Information Center
Gwanyama, Philip Wagala
2005-01-01
The Kolmogorov-Smirnov (K-S) test for goodness of fit was developed by Kolmogorov in 1933 [1] and Smirnov in 1939 [2]. Its procedures are suitable for testing the goodness of fit of a data set for most probability distributions regardless of sample size [3-5]. These procedures, modified for the exponential distribution by Lilliefors [5] and…
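A K-S test of waiting times against an exponential distribution can be sketched as below. The waiting times are synthetic (a hypothetical 30-day mean), and note the caveat that motivates the Lilliefors modification mentioned in the abstract: when the scale is estimated from the same data, the standard K-S p-value is conservative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
waits = rng.exponential(scale=30.0, size=200)  # hypothetical waiting times (days)

# K-S statistic against an exponential with the MLE scale. Because the scale
# is estimated from the same sample, the standard p-value is too conservative;
# Lilliefors' modified critical values correct for this.
d, p = stats.kstest(waits, 'expon', args=(0, waits.mean()))
```

Here the data really are exponential, so the statistic d is small and the (conservative) p-value does not reject the exponential hypothesis.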
NASA Astrophysics Data System (ADS)
Wlodarczyk, Jakub; Kierdaszuk, Borys
2005-08-01
Decays of tyrosine fluorescence in protein-ligand complexes are described by a model of a continuous distribution of fluorescence lifetimes. The resulting analytical power-like decay function provides good fits to highly complex fluorescence kinetics. Moreover, it is a manifestation of the so-called Tsallis q-exponential function, which is suitable for describing systems with long-range interactions, memory effects, and fluctuations of the characteristic fluorescence lifetime. The proposed decay functions were applied to the analysis of fluorescence decays of tyrosine in a protein, the enzyme purine nucleoside phosphorylase from E. coli (the product of the deoD gene), free in aqueous solution and in a complex with formycin A (an inhibitor) and orthophosphate (a co-substrate). The power-like function provides new information about enzyme-ligand complex formation based on the physically justified heterogeneity parameter, which is directly related to the lifetime distribution. A measure of the heterogeneity parameter in the enzyme systems is provided by the variance of the fluorescence lifetime distribution. The possible number of deactivation channels and the excited-state mean lifetime can be easily derived without a priori knowledge of the complexity of the studied system. Moreover, the proposed model is simpler than the traditional multi-exponential one and better describes the heterogeneous nature of the studied systems.
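The q-exponential decay mentioned above can be sketched as a three-parameter fit. The decay below is synthetic (I0 = 1, tau = 2, q = 1.3 are illustrative values, not the enzyme data); for q → 1 the function reduces to a simple exponential, and q > 1 encodes the lifetime heterogeneity.

```python
import numpy as np
from scipy.optimize import curve_fit

def q_exp_decay(t, I0, tau, q):
    """Tsallis q-exponential decay for q > 1:
    I(t) = I0 * [1 + (q-1) t/tau]^(-1/(q-1)); q -> 1 recovers exp(-t/tau)."""
    return I0 * (1.0 + (q - 1.0) * t / tau) ** (-1.0 / (q - 1.0))

t = np.linspace(0.0, 20.0, 100)
I = q_exp_decay(t, 1.0, 2.0, 1.3)          # synthetic heterogeneous decay

p, _ = curve_fit(q_exp_decay, t, I, p0=(1.0, 1.5, 1.2))
I0_hat, tau_hat, q_hat = p
```

The fitted q plays the role of the heterogeneity parameter: the further it sits above 1, the broader the implied lifetime distribution.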
Exponential model for option prices: Application to the Brazilian market
NASA Astrophysics Data System (ADS)
Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.
2016-03-01
In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.
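The baseline model the paper compares against is the standard Black-Scholes call price, which can be sketched as below (the exponential-return pricing model itself is not reproduced here; the inputs S, K, T, r, sigma are illustrative values).

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Standard Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Illustrative at-the-money call: spot 100, strike 100, 3 months to expiry
price = bs_call(S=100.0, K=100.0, T=0.25, r=0.1, sigma=0.2)
```

The empirical finding above is that near expiration an exponential (rather than lognormal) return distribution reprices such contracts closer to the observed Ibovespa option quotes.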
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least-squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve-fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix is derived and applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
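The procedure described above, a log-linear fit for nominal estimates followed by iterated Taylor-series corrections, is essentially Gauss-Newton iteration, which can be sketched as follows. The decay data are synthetic (a = 5, b = -0.7 are illustrative).

```python
import numpy as np

def gauss_newton_exp(t, y, p0, n_iter=20):
    """Fit y = a * exp(b * t) by Gauss-Newton: linearize about the current
    estimate via a first-order Taylor expansion and solve for the correction."""
    a, b = p0
    for _ in range(n_iter):
        r = y - a * np.exp(b * t)                      # residuals
        J = np.column_stack([np.exp(b * t),            # d/da
                             a * t * np.exp(b * t)])   # d/db
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)  # correction vector
        a, b = a + delta[0], b + delta[1]
    return a, b

# Decay-type data: y = 5 * exp(-0.7 t)
t = np.linspace(0.0, 5.0, 40)
y = 5.0 * np.exp(-0.7 * t)
# Nominal initial estimate from a log-linear fit, as in the abstract
slope, intercept = np.polyfit(t, np.log(y), 1)
a_hat, b_hat = gauss_newton_exp(t, y, (np.exp(intercept), slope))
```

In practice the loop would terminate when the correction norm falls below a predetermined criterion rather than after a fixed iteration count.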
Sanchez-Salas, Rafael; Olivier, Fabien; Prapotnich, Dominique; Dancausa, José; Fhima, Mehdi; David, Stéphane; Secin, Fernando P; Ingels, Alexandre; Barret, Eric; Galiano, Marc; Rozet, François; Cathelineau, Xavier
2016-01-01
Prostate-specific antigen (PSA) doubling time relies on an exponential kinetic pattern. This pattern has never been validated in the setting of intermittent androgen deprivation (IAD). The objective was to analyze the prognostic significance for prostate cancer of recurrent patterns in PSA kinetics in patients undergoing IAD. A retrospective study was conducted on 377 patients treated with IAD. The on-treatment period (ONTP) consisted of gonadotropin-releasing hormone agonist injections combined with an oral androgen receptor antagonist. The off-treatment period (OFTP) began when PSA was lower than 4 ng/ml. The ONTP resumed when PSA was higher than 20 ng/ml. PSA values of each OFTP were fitted with three basic patterns: exponential (PSA(t) = λ·e^(αt)), linear (PSA(t) = a·t), and power law (PSA(t) = a·t^c). Univariate and multivariate Cox regression models analyzed predictive factors for oncologic outcomes. Only 45% of the analyzed OFTPs were exponential. Linear and power-law PSA kinetics represented 7.5% and 7.7%, respectively. The remaining fraction of analyzed OFTPs (40%) exhibited complex kinetics. Exponential PSA kinetics during the first OFTP was significantly associated with worse oncologic outcome. The estimated 10-year cancer-specific survival (CSS) was 46% for exponential versus 80% for nonexponential PSA kinetic patterns. The corresponding 10-year probability of castration-resistant prostate cancer (CRPC) was 69% and 31% for the two patterns, respectively. Limitations include the retrospective design and mixed indications for IAD. PSA kinetics fitted the exponential pattern in approximately half of the OFTPs. An exponential PSA kinetic in the first OFTP was associated with a shorter time to CRPC and worse CSS.
NASA Astrophysics Data System (ADS)
Sazuka, Naoya
2007-03-01
We analyze waiting times for price changes in a foreign currency exchange rate. Recent empirical studies of high-frequency financial data support the view that trades in financial markets do not follow a Poisson process and that the waiting times between trades are not exponentially distributed. Here we show that our data are well approximated by a Weibull distribution rather than an exponential distribution in the non-asymptotic regime. Moreover, we quantitatively evaluate how far the empirical data are from an exponential distribution using a Weibull fit. Finally, we discuss a transition between a Weibull law and a power law in the long-time asymptotic regime.
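The Weibull-versus-exponential diagnostic used above can be sketched by fitting the Weibull shape parameter: shape m = 1 recovers the exponential, while m < 1 signals the heavier short-time clustering reported for tick data. The sample below is synthetic (shape 0.6, scale 10 are illustrative).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic waiting times with Weibull shape 0.6 (more short waits than an
# exponential would produce), scale 10 in arbitrary time units
waits = rng.weibull(0.6, size=3000) * 10.0

m, loc, scale = stats.weibull_min.fit(waits, floc=0)  # location fixed at 0
# m well below 1 quantifies the departure from the exponential case (m = 1)
```

The fitted shape parameter thus serves as a one-number measure of "how far" the empirical distribution is from exponential, which is the role the Weibull fit plays in the abstract.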
The integral line-beam method for gamma skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.; Bassett, M.S.
1991-03-01
This paper presents a refinement of a simplified method, based on line-beam response functions, for performing skyshine calculations for shielded and collimated gamma-ray sources. New coefficients for an empirical fit to the line-beam response function are provided, and a prescription for making the response function continuous in energy and emission direction is introduced. For a shielded source, exponential attenuation and a buildup factor correction for scattered photons in the shield are used. Results of the new integral line-beam method of calculation are compared to a variety of benchmark experimental data and calculations and are found to give generally excellent agreement at a small fraction of the computational expense required by other skyshine methods.
NASA Technical Reports Server (NTRS)
Sibonga, J. D.; Evans, H. J.; Sung, H. G.; Spector, E. R.; Lang, T. F.; Oganov, V. S.; Bakulin, A. V.; Shackelford, L. C.; LeBlanc, A. D.
2007-01-01
The loss of bone mineral in NASA astronauts during spaceflight has been investigated throughout the more than 40 years of space travel. Consequently, it is a medical requirement at NASA Johnson Space Center (JSC) that changes in bone mass be monitored in crew members by measuring bone mineral density (BMD) with dual-energy x-ray absorptiometry (DXA) before and after flight on astronauts who serve on long-duration missions (4-6 months). We evaluated this repository of medical data to track whether there is recovery of bone mineral that was lost during spaceflight. Our analysis was supplemented by BMD data from cosmonauts (by convention, space travelers formally employed by the Russian Aviation and Space Agency or by the former Soviet Union) who had also flown on long-duration missions. Data from a total of 45 individual crew members -- a small number of whom flew on more than one mission -- were used in this analysis. Changes in BMD (between 56 different sets of pre- and postflight measurements) were plotted as a function of time (days after landing). Plotted BMD changes were fitted to an exponential mathematical function that estimated: i) BMD change on landing day (day 0) and ii) the number of days after landing when 50% of the lost bone would be recovered ("50% recovery time") in the lumbar spine, trochanter, pelvis, femoral neck and calcaneus. In sum, averaged losses of bone mineral after long-duration spaceflight ranged between 2-9% across all sites, with our recovery model predicting a 50% restoration of bone loss for all sites within 9 months.
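The recovery model described above can be sketched as an exponential relaxation of the BMD deficit, with the "50% recovery time" falling out analytically as tau·ln 2. The measurement values below are hypothetical, loosely shaped like the 2-9% losses quoted, not the actual crew data.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, L0, tau):
    """BMD change (%) at t days after landing: deficit L0 relaxing
    exponentially toward zero. 50% recovery time = tau * ln(2)."""
    return L0 * np.exp(-t / tau)

# Hypothetical % BMD changes at days after landing (illustrative only)
days = np.array([5.0, 90.0, 180.0, 360.0, 720.0])
bmd_change = np.array([-7.84, -5.58, -3.90, -1.90, -0.45])

(L0, tau), _ = curve_fit(recovery, days, bmd_change, p0=(-8.0, 250.0))
half_recovery_days = tau * np.log(2)
```

Fitting L0 gives the estimated landing-day deficit (item i in the abstract), and tau·ln 2 gives the 50% recovery time (item ii) without any further curve evaluation.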
NASA Astrophysics Data System (ADS)
Varotsos, Costas A.; Efstathiou, Maria N.
2017-05-01
A substantial weakness of several climate studies on long-range dependence is that they conclude long-term memory of climate conditions without considering it necessary to establish power-law scaling and to reject a simple exponential decay of the autocorrelation function. We herewith show one paradigmatic case in which strong long-range dependence could be wrongly inferred from incomplete data analysis. We first apply the DFA method to the solar and volcanic forcing time series over the tropical Pacific during the past 1000 years; the results show that a statistically significant straight-line fit to the fluctuation function in a log-log representation is obtained with slope higher than 0.5, which may wrongly be taken as an indication of persistent long-range correlations in the time series. We argue that long-range dependence cannot be concluded from this straight-line fit alone; it requires the fulfilment of two additional prerequisites, i.e., rejecting an exponential decay of the autocorrelation function and establishing the power-law scaling. In fact, investigation of the validity of these prerequisites showed that a DFA exponent higher than 0.5 does not justify the existence of persistent long-range correlations in the temporal evolution of the solar and volcanic forcing during the last millennium. In other words, we show that empirical analyses based on these two prerequisites must not be considered a panacea for a direct proof of scaling, but only as evidence that the scaling hypothesis is plausible. We also discuss the scaling behaviour of the solar and volcanic forcing data based on the Haar tool, which has recently proved its ability to reliably detect the existence of scaling in climate series.
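The DFA procedure at issue above can be sketched as follows (first-order DFA on synthetic white noise, for which the exponent should be near 0.5). As the abstract stresses, the log-log slope alone does not establish scaling; this sketch only reproduces the slope estimate itself.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis: return F(n) per window size n."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        n_win = len(y) // n
        f2 = []
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)     # local linear detrending
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    return np.array(F)

rng = np.random.default_rng(7)
x = rng.normal(size=4096)                    # white noise: expected exponent ~0.5
scales = np.array([8, 16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # log-log slope
```

An alpha above 0.5 from such a fit is, per the abstract, only suggestive: one must still reject exponential autocorrelation decay and verify power-law scaling before claiming long-range dependence.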
Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G
2013-10-01
Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the used data. To estimate the values of the adjustable parameters, an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness, the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented in MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions.
Function fit parameters and their standard errors estimated using SAAM numerical and NUKFIT showed differences of <1%. The differences for the time-integrated activity coefficients were also <1% (standard error between 0.4% and 3%). In general, the application of the software is user-friendly and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool, with its underlying methodology, can be employed to objectively and reproducibly estimate the time-integrated activity coefficient and its standard error for most time-activity data in molecular radiotherapy.
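The core computation described above, fitting a sum of exponentials to time-activity data and integrating it analytically, can be sketched as follows. The bi-exponential and its parameters are illustrative, not NUKFIT's; the key point is that the integral from 0 to infinity is just a1/l1 + a2/l2, so no numerical quadrature is needed.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, l1, a2, l2):
    """Sum of two decaying exponentials (fractional activity vs. time)."""
    return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)

# Synthetic noise-free time-activity samples (times in hours, illustrative)
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 24.0, 48.0, 96.0, 144.0])
A = biexp(t, 0.6, 0.30, 0.4, 0.02)

p, _ = curve_fit(biexp, t, A, p0=(0.5, 0.2, 0.5, 0.05), maxfev=10000)
a1, l1, a2, l2 = p
# Time-integrated activity coefficient: analytic integral over [0, inf)
tia = a1 / l1 + a2 / l2
```

In the real algorithm the function order (mono-, bi-, tri-exponential) would be chosen by the corrected Akaike information criterion, and the standard error of tia propagated from the parameter covariance, both omitted from this sketch.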
Gutiérrez-Juárez, G; Vargas-Luna, M; Córdova, T; Varela, J B; Bernal-Alvarado, J J; Sosa, M
2002-08-01
A photoacoustic technique is used for studying the absorption of topically applied substances in human skin. The proposed method utilizes a double-chamber PA cell. The absorption was determined through measurement of the thermal effusivity of the binary substance-skin system. The theoretical model assumes that the effective thermal effusivity of the binary system corresponds to that of a two-phase system. Experimental applications of the method employed different substances of topical application on different parts of the body of a volunteer. The method is demonstrated to be an easily used, non-invasive technique for dermatology research. The relative concentrations as a function of time of substances such as ketoconazole and sunscreen were determined by fitting a sigmoidal function to the data, while an exponential function provided the best fit for the data for nitrofurazone, Vaseline, and VapoRub. The time constants associated with the rates of absorption were found to vary between 10 and 58 min, depending on the substance and the part of the body.
GLASS VISCOSITY AS A FUNCTION OF TEMPERATURE AND COMPOSITION: A MODEL BASED ON ADAM-GIBBS EQUATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hrma, Pavel R.
2008-07-01
Within the temperature range and composition region of processing and product forming, the viscosity of commercial and waste glasses spans over 12 orders of magnitude. This paper shows that a generalized Adam-Gibbs relationship reasonably approximates the real behavior of glasses with four temperature-independent parameters, of which two are linear functions of the composition vector. The equation is subjected to two constraints: one requiring that the viscosity-temperature relationship approach the Arrhenius function at high temperatures with a composition-independent pre-exponential factor, and the other that the viscosity value be independent of composition at the glass-transition temperature. Several sets of constant coefficients were obtained by fitting the generalized Adam-Gibbs equation to data of two glass families: float glass and Hanford waste glass. Other equations (the Vogel-Fulcher-Tammann equation, original and modified, the Avramov equation, and the Douglass-Doremus equation) were fitted to the float glass data series and compared with the Adam-Gibbs equation, showing that the Adam-Gibbs equation appears to be an excellent approximation of real glasses even when compared with other candidate constitutive relations.
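One of the candidate relations named above, the Vogel-Fulcher-Tammann equation, can be sketched as a three-parameter fit to viscosity-temperature data. The curve below is synthetic (A, B, T0 values chosen to look broadly float-glass-like, not taken from the paper).

```python
import numpy as np
from scipy.optimize import curve_fit

def vft(T, A, B, T0):
    """Vogel-Fulcher-Tammann relation: log10(eta) = A + B / (T - T0)."""
    return A + B / (T - T0)

# Synthetic viscosity curve over a processing-relevant temperature range (K)
T = np.linspace(800.0, 1700.0, 20)
log_eta = vft(T, -2.5, 4500.0, 250.0)      # illustrative parameters

p, _ = curve_fit(vft, T, log_eta, p0=(-3.0, 4000.0, 200.0))
A, B, T0 = p
```

The Adam-Gibbs form compared in the paper differs in how its parameters depend on composition, but the fitting workflow (choose the constitutive relation, fit its coefficients to log-viscosity data) is the same.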
Dehghani, Nima; Hatsopoulos, Nicholas G.; Haga, Zach D.; Parker, Rebecca A.; Greger, Bradley; Halgren, Eric; Cash, Sydney S.; Destexhe, Alain
2012-01-01
Self-organized critical states are found in many natural systems, from earthquakes to forest fires; they have also been observed in neural systems, particularly in neuronal cultures. However, the presence of critical states in the awake brain remains controversial. Here, we compared avalanche analyses performed on different in vivo preparations during wakefulness, slow-wave sleep, and REM sleep, using high-density electrode arrays in cat motor cortex (96 electrodes), monkey motor and premotor cortex, and human temporal cortex (96 electrodes) in epileptic patients. In neuronal avalanches defined from units (up to 160 single units), the size of avalanches never clearly scaled as a power law, but rather scaled exponentially or displayed intermediate scaling. We also analyzed the dynamics of local field potentials (LFPs), and in particular LFP negative peaks (nLFPs), among the different electrodes (up to 96 sites in temporal cortex or up to 128 sites in adjacent motor and premotor cortices). In this case, the avalanches defined from nLFPs displayed power-law scaling in double-logarithmic representations, as reported previously in monkey. However, avalanches defined from positive LFP (pLFP) peaks, which are less directly related to neuronal firing, also displayed apparent power-law scaling. Closer examination of this scaling using the more reliable cumulative distribution function (CDF) and other rigorous statistical measures did not confirm power-law scaling. The same pattern was seen for cats, monkeys, and humans, as well as for the different brain states of wakefulness and sleep. We also tested alternative distributions. Multiple exponential fitting yielded optimal fits of the avalanche dynamics with bi-exponential distributions. Collectively, these results show no clear evidence for power-law scaling or self-organized critical states in the awake and sleeping brain of mammals, from cat to man. PMID:22934053
Non-Gaussian behavior in jamming / unjamming transition in dense granular materials
NASA Astrophysics Data System (ADS)
Atman, A. P. F.; Kolb, E.; Combe, G.; Paiva, H. A.; Martins, G. H. B.
2013-06-01
Experiments on the penetration of a cylindrical intruder into a two-dimensional dense, disordered granular medium were reported recently, showing the jamming/unjamming transition. In the present work, we perform molecular dynamics simulations with the same geometry in order to assess both kinematic and static features of the jamming/unjamming transition. We study the statistics of the particle velocities in the neighborhood of the intruder to show that both experiments and simulations present the same qualitative behavior. We observe that the probability density functions (PDFs) of velocities deviate from Gaussian depending on the packing fraction of the granular assembly. In order to quantify these deviations we fit the PDFs with a q-Gaussian (Tsallis) function. The q-value can be an indication of the presence of long-range correlations in the system. We compare the fitted PDFs with those obtained using a stretched exponential, and sketch some conclusions concerning the nature of the correlations in a confined granular flow.
Lindley frailty model for a class of compound Poisson processes
NASA Astrophysics Data System (ADS)
Kadilar, Gamze Özel; Ata, Nihal
2013-10-01
The Lindley distribution has gained importance in survival analysis owing to its similarity to the exponential distribution and its allowance for different shapes of the hazard function. Frailty models provide an alternative to the proportional hazards model in which misspecified or omitted covariates are described by an unobservable random variable. Although the frailty distribution is generally assumed to be continuous, it is appropriate to consider discrete frailty distributions in some circumstances. In this paper, frailty models with a discrete compound Poisson process for Lindley-distributed failure times are introduced. Survival functions are derived and maximum likelihood estimation procedures for the parameters are studied. Then, the fit of the models to an earthquake data set from Turkey is examined.
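The contrast drawn above between the Lindley and exponential distributions can be made concrete with the standard Lindley density and survival function: unlike the exponential's constant hazard, the Lindley hazard f/S increases in x. The theta value below is illustrative.

```python
import numpy as np

def lindley_pdf(x, theta):
    """Lindley density: f(x) = theta^2/(theta+1) * (1 + x) * exp(-theta x)."""
    return theta**2 / (theta + 1.0) * (1.0 + x) * np.exp(-theta * x)

def lindley_survival(x, theta):
    """Lindley survival: S(x) = (1 + theta + theta x)/(theta + 1) * exp(-theta x)."""
    return (1.0 + theta + theta * x) / (theta + 1.0) * np.exp(-theta * x)

# Hazard h(x) = f(x)/S(x) = theta^2 (1 + x) / (1 + theta + theta x):
# strictly increasing, whereas the exponential hazard is flat
x = np.linspace(0.0, 10.0, 200)
h = lindley_pdf(x, 0.5) / lindley_survival(x, 0.5)
```

This increasing hazard is precisely the extra flexibility, relative to the exponential, that makes the Lindley distribution attractive for failure-time modeling.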
NASA Astrophysics Data System (ADS)
Pal, P.; Ghosh, A.
2018-04-01
Dielectric spectra covering the frequency range 0.01 Hz - 2 MHz for a PMMA-LiClO4 based polymer electrolyte embedded with different concentrations of ethylene carbonate (x = 0, 20 and 40 wt%) have been analyzed using the Havriliak-Negami formalism. The reciprocal temperature dependence of the inverse relaxation time obtained from the analysis of the dielectric spectra follows Vogel-Tammann-Fulcher behaviour. The shape parameters obtained from this analysis change with ethylene carbonate concentration. From fits of the experimental results using the Kohlrausch-Williams-Watts function, we have obtained the stretching exponent β, which indicates that the relaxation is highly non-exponential. The decay function obtained from the electric modulus data is highly asymmetric.
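The Kohlrausch-Williams-Watts fit can be sketched by linearizing the stretched exponential; the parameter values here are illustrative, and with noisy data a weighted or fully nonlinear fit would be preferable to this log-log trick:

```python
import numpy as np

def kww(t, tau, beta):
    """Kohlrausch-Williams-Watts stretched-exponential decay."""
    return np.exp(-(t / tau) ** beta)

# Noiseless synthetic decay with tau = 3.0 and stretching exponent beta = 0.6
t = np.logspace(-2, 2, 50)
phi = kww(t, tau=3.0, beta=0.6)

# Linearization: ln(-ln phi) = beta*ln(t) - beta*ln(tau), a straight line
slope, intercept = np.polyfit(np.log(t), np.log(-np.log(phi)), 1)
beta_fit = float(slope)
tau_fit = float(np.exp(-intercept / slope))
```

A fitted β well below 1, as reported above, is the standard signature of highly non-exponential relaxation.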
Response effort discounts the subjective value of rewards.
Nishiyama, Ryoji
2014-09-01
Factors associated with obtaining a reward, such as a temporal delay in receiving the reward, can influence the subjective value of the reward. Cognitive as well as physical response effort is also known to influence choice behaviors. The present study used hypothetical situations to assess whether response effort affects the subjective value of rewards. The results demonstrated that increasing response effort increases the amount of money that participants are willing to forgo to avoid engaging in work. An exponential as well as hyperbolic function provided a good fit for such discounting. The findings suggest that response effort discounts the subjective value of a reward as a function of its amount. Copyright © 2014 Elsevier B.V. All rights reserved.
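The comparison of hyperbolic and exponential discounting fits can be sketched as follows; the reward amount, effort levels, discount rate, and grid search are all hypothetical stand-ins for whatever estimator the study used:

```python
import numpy as np

def hyperbolic(cost, amount, k):
    """Hyperbolic discounting: V = A / (1 + k*cost)."""
    return amount / (1.0 + k * cost)

def exponential(cost, amount, k):
    """Exponential discounting: V = A * exp(-k*cost)."""
    return amount * np.exp(-k * cost)

amount = 1000.0                                  # hypothetical reward
effort = np.array([1.0, 5.0, 10.0, 20.0, 40.0])  # hypothetical effort levels
value = hyperbolic(effort, amount, k=0.08)       # pretend subjective values

# One-parameter grid search for the discount rate under each model
ks = np.linspace(0.01, 0.5, 491)
sse_h = np.array([np.sum((hyperbolic(effort, amount, k) - value) ** 2) for k in ks])
sse_e = np.array([np.sum((exponential(effort, amount, k) - value) ** 2) for k in ks])
k_hyp = float(ks[np.argmin(sse_h)])
```

Because the synthetic values were generated hyperbolically, the hyperbolic model attains the lower residual sum of squares; with real choice data, as the abstract notes, both forms can fit well.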
Extracting the exponential behaviors in the market data
NASA Astrophysics Data System (ADS)
Watanabe, Kota; Takayasu, Hideki; Takayasu, Misako
2007-08-01
We introduce a mathematical criterion defining bubbles and crashes in financial market price fluctuations by considering an exponential fit of the given data. By applying this criterion we can automatically extract the periods in which bubbles and crashes are identified. From stock market data of the so-called Internet bubble, it is found that the characteristic length of a bubble period is about 100 days.
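A minimal sketch of exponential-trend extraction by sliding-window log-linear fitting, in the spirit of the criterion above; the synthetic price series, window length, and rate threshold are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, window = 400, 50

# Synthetic log-price: slow drift, then a "bubble" with fast exponential growth
drift = np.where(np.arange(n) < 200, 0.0005, 0.01)
log_price = np.cumsum(drift + 0.002 * rng.standard_normal(n))

# Sliding-window linear fit of the log-price; the slope is the exponential rate
steps = np.arange(window)
rate = np.array([np.polyfit(steps, log_price[i:i + window], 1)[0]
                 for i in range(n - window)])
bubble = rate > 0.005   # flag windows growing faster than the threshold
```

An exponential rise in price is a straight line in log-price, so a threshold on the fitted slope turns "is this a bubble?" into a simple per-window test.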
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as progression for >15% volume increase, regression for >15% volume decrease, and stabilization for volumes within ±15% of baseline at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% over the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized showed a temporary volume increase of 1.07% and 41.5%, respectively, at 4 months after CK SRS (P = 0.017). The tumor volume estimated using the exponential fitting equation showed a high positive correlation with the actual volume calculated from magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs was seen in 62.5% of the patients after CK SRS and was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to the relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled.
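A three-point exponential model of the kind mentioned above can be solved in closed form when the three volumes are equally spaced in time. This construction and its parameter values are illustrative, not the authors' algorithm:

```python
import math

def three_point_exponential(v0, v1, v2, T):
    """Recover v_inf and rate lam for V(t) = v_inf + (v0 - v_inf)*exp(-lam*t)
    from volumes at t = 0, T and 2T (a hypothetical three-point scheme).
    Uses the identities r = (v1-v2)/(v0-v1) = exp(-lam*T) and
    v_inf = (v0*v2 - v1^2)/(v0 + v2 - 2*v1)."""
    r = (v1 - v2) / (v0 - v1)
    v_inf = (v0 * v2 - v1 ** 2) / (v0 + v2 - 2.0 * v1)
    lam = -math.log(r) / T
    return v_inf, lam

# Synthetic tumour volumes decaying toward 4.0 cm^3 with lam = 0.05 per month
volume = lambda t: 4.0 + 6.0 * math.exp(-0.05 * t)
v_inf, lam = three_point_exponential(volume(0.0), volume(10.0), volume(20.0), T=10.0)
```

The asymptote v_inf is the interesting clinical quantity here: it is the volume the fitted decay predicts the tumor will settle toward.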
NASA Astrophysics Data System (ADS)
Ackerman, T. R.; Pizzuto, J. E.
2016-12-01
Sediment may be stored briefly or for long periods in alluvial deposits adjacent to rivers. The duration of sediment storage may affect diagenesis, and controls the timing of sediment delivery, affecting the propagation of upland sediment signals caused by tectonics, climate change, and land use, and the efficacy of watershed management strategies designed to reduce sediment loading to estuaries and reservoirs. Understanding the functional form of storage time distributions can help to extrapolate from limited field observations and improve forecasts of sediment loading. We simulate stratigraphy adjacent to a modeled river where meander migration is driven by channel curvature. The basal unit is built immediately as the channel migrates away, analogous to a point bar; rules for overbank (flood) deposition create thicker deposits at low elevations and near the channel, forming topographic features analogous to natural levees, scroll bars, and terraces. Deposit age is tracked everywhere throughout the simulation, and the storage time is recorded when the channel returns and erodes the sediment at each pixel. 210 ky of simulated run time is sufficient for the channel to migrate 10,500 channel widths, but only the final 90 ky are analyzed. Storage time survivor functions are well fit by exponential functions until 500 years (point bar) or 600 years (overbank) representing the youngest 50% of eroded sediment. Then (until an age of 12 ky, representing the next 48% (point bar) or 45% (overbank) of eroding sediment), the distributions are well fit by heavy tailed power functions with slopes of -1 (point bar) and -0.75 (overbank). After 12 ky (6% of model run time) the remainder of the storage time distributions become exponential (light tailed). Point bar sediment has the greatest chance (6%) of eroding at 120 years, as the river reworks recently deposited point bars. 
Overbank sediment has an 8% chance of eroding after 1 time step, a chance that declines by half after 3 time steps. The high probability of eroding young overbank deposits occurs as the river reworks recently formed natural levees. These results show that depositional environment affects river floodplain storage times shorter than a few centuries, and suggest that a power law distribution with a truncated tail may be the most reasonable functional fit.
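An empirical survivor function and its log-log tail slope, of the kind used above to identify the heavy-tailed regime, can be sketched with synthetic Pareto "storage times" whose true slope is -1; the sample size and fitting window are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
# Heavy-tailed synthetic storage times: P(age > x) = 1/x for x >= 1
ages = np.sort(rng.pareto(1.0, 20000) + 1.0)

# Empirical survivor function P(age > x) at the sorted sample points
survivor = 1.0 - np.arange(1, ages.size + 1) / ages.size

# Tail slope on log-log axes (drop the last points where survivor -> 0)
mask = (ages > 10.0) & (survivor > 1e-3)
slope, _ = np.polyfit(np.log(ages[mask]), np.log(survivor[mask]), 1)
```

A straight line on log-log axes indicates the power-law (heavy-tailed) regime, whereas the exponential segments described above would curve downward on the same axes.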
OMFIT Tokamak Profile Data Fitting and Physics Analysis
Logan, N. C.; Grierson, B. A.; Haskey, S. R.; ...
2018-01-22
Here, One Modeling Framework for Integrated Tasks (OMFIT) has been used to develop a consistent tool for interfacing with, mapping, visualizing, and fitting tokamak profile measurements. OMFIT is used to integrate the many diverse diagnostics on multiple tokamak devices into a regular data structure, consistently applying spatial and temporal treatments to each channel of data. Tokamak data are fundamentally time dependent and are treated so from the start, with front-loaded and logic-based manipulations such as filtering based on the identification of edge-localized modes (ELMs) that commonly scatter data. Fitting is general in its approach, and tailorable in its application in order to address physics constraints and handle the multiple spatial and temporal scales involved. Although community-standard one-dimensional fitting is supported, including scale-length fitting and fitting polynomial-exponential blends to capture the H-mode pedestal, OMFITprofiles includes two-dimensional (2-D) fitting using bivariate splines or radial basis functions. These 2-D fits produce regular evolutions in time, removing jitter that has historically been smoothed ad hoc in transport applications. Profiles interface directly with a wide variety of models within the OMFIT framework, providing the inputs for TRANSP, kinetic-EFIT 2-D equilibrium, and GPEC three-dimensional equilibrium calculations. The OMFITprofiles tool's rapid and comprehensive analysis of dynamic plasma profiles thus provides the critical link between raw tokamak data and the simulations necessary for physics understanding.
Extending unified-theory-of-reinforcement neural networks to steady-state operant behavior.
Calvin, Olivia L; McDowell, J J
2016-06-01
The unified theory of reinforcement has been used to develop models of behavior over the last 20 years (Donahoe et al., 1993). Previous research has focused on the theory's concordance with the respondent behavior of humans and animals. In this experiment, neural networks were developed from the theory to extend the unified theory of reinforcement to operant behavior on single-alternative variable-interval schedules. This area of operant research was selected because previously developed neural networks could be applied to it without significant alteration. Previous research with humans and animals indicates that the pattern of their steady-state behavior is hyperbolic when plotted against the obtained rate of reinforcement (Herrnstein, 1970). A genetic algorithm was used in the first part of the experiment to determine parameter values for the neural networks, because values that were used in previous research did not result in a hyperbolic pattern of behavior. After finding these parameters, hyperbolic and other similar functions were fitted to the behavior produced by the neural networks. The form of the neural network's behavior was best described by an exponentiated hyperbola (McDowell, 1986; McLean and White, 1983; Wearden, 1981), which was derived from the generalized matching law (Baum, 1974). In post-hoc analyses the addition of a baseline rate of behavior significantly improved the fit of the exponentiated hyperbola and removed systematic residuals. The form of this function was consistent with human and animal behavior, but the estimated parameter values were not. Copyright © 2016 Elsevier B.V. All rights reserved.
When growth models are not universal: evidence from marine invertebrates
Hirst, Andrew G.; Forster, Jack
2013-01-01
The accumulation of body mass, as growth, is fundamental to all organisms. Being able to understand which model(s) best describe this growth trajectory, both empirically and ultimately mechanistically, is an important challenge. A variety of equations have been proposed to describe growth during ontogeny. Recently, the West, Brown and Enquist (WBE) equation, formulated as part of the metabolic theory of ecology, has been proposed as a universal model of growth. This equation has the advantage of having a biological basis, but its ability to describe invertebrate growth patterns has not been well tested against other, simpler models. In this study, we collected data for 58 species of marine invertebrate from 15 different taxa. The data were fitted to three growth models (power, exponential and WBE), and their abilities were examined using an information-theoretic approach. Using the Akaike information criterion, we found changes in mass through time to be best fit by an exponential equation (in approx. 73% of cases). The WBE model predominantly overestimates body size in early ontogeny and underestimates it in later ontogeny; it was the best fit in approximately 14% of cases. The exponential model described growth well in nine taxa, whereas the WBE described growth well in one of the 15 taxa, the Amphipoda. Although the WBE has the advantage of being developed with an underlying proximate mechanism, it provides a poor fit to the majority of marine invertebrates examined here, including species with determinate and indeterminate growth types. In the original formulation of the WBE model, it was tested almost exclusively against vertebrates, to which it fitted well; the model does not, however, appear to be universal given its poor ability to describe growth in benthic or pelagic marine invertebrates. PMID:23945691
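An information-theoretic comparison of growth models of the sort described can be sketched with least-squares AIC; the synthetic data, noise level, and the restriction to two-parameter models are assumptions made for illustration:

```python
import numpy as np

def aic(sse, n, k):
    """Akaike information criterion for a least-squares fit with k parameters."""
    return n * np.log(sse / n) + 2 * k

rng = np.random.default_rng(2)
t = np.linspace(1.0, 10.0, 30)
# Synthetic exponentially growing mass with multiplicative noise
mass = 0.5 * np.exp(0.3 * t) * np.exp(0.05 * rng.standard_normal(t.size))

# Exponential model: ln m = ln a + b*t ; power model: ln m = ln a + b*ln t
b_e, lna_e = np.polyfit(t, np.log(mass), 1)
b_p, lna_p = np.polyfit(np.log(t), np.log(mass), 1)
sse_e = np.sum((np.exp(lna_e + b_e * t) - mass) ** 2)
sse_p = np.sum((np.exp(lna_p + b_p * np.log(t)) - mass) ** 2)
aic_e = aic(sse_e, t.size, 2)
aic_p = aic(sse_p, t.size, 2)
```

Since both candidates have two parameters, the AIC ranking here reduces to comparing residuals; AIC earns its keep when models of different complexity (such as the WBE equation) enter the comparison.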
Large and small-scale structures and the dust energy balance problem in spiral galaxies
NASA Astrophysics Data System (ADS)
Saftly, W.; Baes, M.; De Geyter, G.; Camps, P.; Renaud, F.; Guedes, J.; De Looze, I.
2015-04-01
The interstellar dust content in galaxies can be traced in extinction at optical wavelengths, or in emission in the far-infrared. Several studies have found that radiative transfer models that successfully explain the optical extinction in edge-on spiral galaxies generally underestimate the observed FIR/submm fluxes by a factor of about three. In order to investigate this so-called dust energy balance problem, we use two Milky Way-like galaxies produced by high-resolution hydrodynamical simulations. We create mock optical edge-on views of these simulated galaxies (using the radiative transfer code SKIRT), and we then fit the parameters of a basic spiral galaxy model to these images (using the fitting code FitSKIRT). The basic model includes smooth axisymmetric distributions: a Sérsic bulge and an exponential disc for the stars, and a second exponential disc for the dust. We find that the dust mass recovered by the fitted models is about three times smaller than the known dust mass of the hydrodynamical input models. This factor is in agreement with previous energy balance studies of real edge-on spiral galaxies. On the other hand, fitting the same basic model to less complex input models (e.g. a smooth exponential disc with a spiral perturbation or with random clumps) does recover the dust mass of the input model almost perfectly. Thus it seems that the complex asymmetries and the inhomogeneous structure of real and hydrodynamically simulated galaxies are much more efficient at hiding dust than the rather contrived geometries in typical quasi-analytical models. This effect may help explain the discrepancy between the dust emission predicted by radiative transfer models and the observed emission in energy balance studies of edge-on spiral galaxies.
The luminosity function of quasars
NASA Technical Reports Server (NTRS)
Pei, Yichuan C.
1995-01-01
We propose a new evolutionary model for the optical luminosity function of quasars. Our analytical model is derived from fits to the empirical luminosity function estimated by Hartwick and Schade and by Warren, Hewett, and Osmer on the basis of more than 1200 quasars over the redshift range 0 ≲ z ≲ 4.5. We find that the evolution of quasars over this entire redshift range can be well fitted by a Gaussian distribution, while the shape of the luminosity function can be well fitted by either a double power law or an exponential L^(1/4) law. The predicted number counts of quasars, as a function of either apparent magnitude or redshift, are fully consistent with the observed ones. Our model indicates that the evolution of quasars reaches its maximum at z ≈ 2.8 and declines at higher redshifts. An extrapolation of the evolution to z ≳ 4.5 implies that quasars may have started their cosmic fireworks at z_f ≈ 5.2-5.5. Forthcoming surveys of quasars at these redshifts will be critical to constrain the epoch of quasar formation. All the results we derived are based on observed quasars and are therefore subject to the bias of obscuration by dust in damped Ly alpha systems. Future surveys of these absorption systems at z ≳ 3 will also be important if the formation epoch of quasars is to be known unambiguously.
Bishai, David; Opuni, Marjorie
2009-01-01
Background Time trends in infant mortality for the 20th century show a curvilinear pattern that most demographers have assumed to be approximately exponential. Virtually all cross-country comparisons and time series analyses of infant mortality have studied the logarithm of infant mortality to account for the curvilinear time trend. However, there is no evidence that the log transform is the best fit for infant mortality time trends. Methods We use maximum likelihood methods to determine the best transformation to fit time trends in infant mortality reduction in the 20th century and to assess the importance of the proper transformation in identifying the relationship between infant mortality and gross domestic product (GDP) per capita. We apply the Box-Cox transform to infant mortality rate (IMR) time series from 18 countries to identify the best-fitting value of λ for each country and for the pooled sample. For each country, we test the value of λ against the null that λ = 0 (logarithmic model) and against the null that λ = 1 (linear model). We then demonstrate the importance of selecting the proper transformation by comparing regressions of ln(IMR) on same-year GDP per capita against Box-Cox-transformed models. Results Based on chi-squared test statistics, infant mortality decline is best described as an exponential decline only for the United States. For the remaining 17 countries we study, IMR decline is best modelled neither as a logarithmic nor as a linear process. Imposing a logarithmic transform on IMR can lead to bias in fitting the relationship between IMR and GDP per capita. Conclusion The assumption that IMR declines are exponential is enshrined in the Preston curve and in nearly all cross-country as well as time series analyses of IMR data since Preston's 1975 paper, but this assumption is seldom correct. Statistical analyses of IMR trends should assess the robustness of findings to transformations other than the log transform. PMID:19698144
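Box-Cox selection of λ by maximum likelihood can be sketched as follows. The linear-trend model, the synthetic series, and the λ grid are illustrative; an exponentially declining series makes λ = 0 (the log model) win, mirroring the United States case above:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform; lam = 0 is the log, lam = 1 is (up to shift) linear."""
    y = np.asarray(y, dtype=float)
    if abs(lam) < 1e-12:
        return np.log(y)
    return (y ** lam - 1.0) / lam

def profile_loglik(y, lam):
    """Gaussian profile log-likelihood of a linear time trend in the
    transformed series, including the transform's Jacobian term."""
    n = y.size
    t = np.arange(n, dtype=float)
    z = box_cox(y, lam)
    resid = z - np.polyval(np.polyfit(t, z, 1), t)
    sigma2 = np.sum(resid ** 2) / n
    return -0.5 * n * np.log(sigma2) + (lam - 1.0) * np.sum(np.log(y))

# IMR-like series declining exponentially: the log model (lam = 0) should win
y = 100.0 * np.exp(-0.04 * np.arange(50))
lams = np.linspace(-1.0, 2.0, 61)
best_lam = float(lams[np.argmax([profile_loglik(y, lam) for lam in lams])])
```

In practice one would test λ = 0 and λ = 1 against the profile-likelihood maximum with a likelihood-ratio statistic, which is exactly the chi-squared comparison the abstract describes.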
Obstructive sleep apnea alters sleep stage transition dynamics.
Bianchi, Matt T; Cash, Sydney S; Mietus, Joseph; Peng, Chung-Kang; Thomas, Robert
2010-06-28
Enhanced characterization of sleep architecture, compared with routine polysomnographic metrics such as stage percentages and sleep efficiency, may improve the predictive phenotyping of fragmented sleep. One approach involves using stage transition analysis to characterize sleep continuity. We analyzed hypnograms from Sleep Heart Health Study (SHHS) participants using the following stage designations: wake after sleep onset (WASO), non-rapid eye movement (NREM) sleep, and REM sleep. We show that individual patient hypnograms contain an insufficient number of bouts to adequately describe the transition kinetics, necessitating pooling of data. We compared a control group of individuals free of medications, obstructive sleep apnea (OSA), medical co-morbidities, or sleepiness (n = 374) with mild (n = 496) or severe OSA (n = 338) groups. WASO, REM sleep, and NREM sleep bout durations exhibited multi-exponential temporal dynamics. The presence of OSA accelerated the "decay" rate of NREM and REM sleep bouts, resulting in instability manifesting as shorter bouts and an increased number of stage transitions. For WASO bouts, previously attributed to a power-law process, a multi-exponential decay described the data well. Simulations demonstrated that a multi-exponential process can mimic a power-law distribution. OSA alters sleep architecture dynamics by decreasing the temporal stability of NREM and REM sleep bouts. Multi-exponential fitting is superior to routine mono-exponential fitting, and may thus provide improved predictive metrics of sleep continuity. However, because a single night of sleep contains insufficient transitions to characterize these dynamics, extended monitoring of sleep, probably at home, would be necessary for individualized clinical application.
NASA Astrophysics Data System (ADS)
Park, Jong-Hyeok; Kim, Ki-Beom; Chang, Heon-Young
2014-08-01
Time series of drought indices have so far been considered mostly in view of the temporal and spatial distributions of a drought index. Here we investigate the statistical properties of the daily Effective Drought Index (EDI) itself for Seoul, Busan, Daegu, and Mokpo over the 100-year period from 1913 to 2012. We have found that in both dry and wet seasons the distribution of EDI values follows a Gaussian function. In the dry season the Gaussian is characteristically broader than in the wet season. The total number of drought days during the period analyzed is related to the mean value and, more importantly, to the standard deviation. We have also found that the number of occasions in which the EDI values of several consecutive days are all less than a threshold follows an exponential distribution. The slope of the best fit becomes steeper not only as the critical EDI value becomes more negative but also as the number of consecutive days increases. The slope of the exponential distribution also becomes steeper as the number of cities in which EDI is simultaneously less than the critical EDI increases. Finally, we conclude by pointing out implications of our findings.
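Counting occasions on which several consecutive days all fall below a threshold, as in the EDI analysis above, reduces to a run-length scan; the sample values and the threshold here are made up:

```python
def occasions_below(values, threshold, days):
    """Count occasions on which `days` consecutive values are all below
    `threshold` (each day that extends a qualifying run counts once)."""
    count = run = 0
    for v in values:
        run = run + 1 if v < threshold else 0
        if run >= days:
            count += 1
    return count

# Hypothetical daily EDI values; -1.5 as an illustrative drought threshold
edi = [-2.1, -1.8, -2.5, 0.3, -1.6, -1.7, -0.2]
dry_spells_2d = occasions_below(edi, threshold=-1.5, days=2)
```

Tabulating these counts over a range of `days` values gives exactly the run-length distribution whose exponential slope the study analyzes.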
Profiler - A Fast and Versatile New Program for Decomposing Galaxy Light Profiles
NASA Astrophysics Data System (ADS)
Ciambur, Bogdan C.
2016-12-01
I introduce Profiler, a user-friendly program designed to analyse the radial surface brightness profiles of galaxies. With an intuitive graphical user interface, Profiler can accurately model galaxies of a broad range of morphological types, with various parametric functions routinely employed in the field (Sérsic, core-Sérsic, exponential, Gaussian, Moffat, and Ferrers). In addition to these, Profiler can employ the broken exponential model for disc truncations or anti-truncations, and two special cases of the edge-on disc model: along the disc's major or minor axis. The convolution of (circular or elliptical) models with the point spread function is performed in 2D, and offers a choice between Gaussian, Moffat or a user-provided profile for the point spread function. Profiler is optimised to work with galaxy light profiles obtained from isophotal measurements, which allow for radial gradients in the geometric parameters of the isophotes, and are thus often better at capturing the total light than 2D image-fitting programs. Additionally, the 1D approach is generally less computationally expensive and more stable. I demonstrate Profiler's features by decomposing three case-study galaxies: the cored elliptical galaxy NGC 3348, the nucleated dwarf Seyfert I galaxy Pox 52, and NGC 2549, a double-barred galaxy with an edge-on, truncated disc.
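The Sérsic function at the heart of such decompositions can be sketched directly. Here b_n is taken from the common asymptotic approximation, and the bulge and disc parameters are illustrative, not a fit to any of the galaxies above:

```python
import numpy as np

def sersic(R, I_e, R_e, n):
    """Sersic surface-brightness profile. b_n uses the common asymptotic
    approximation b_n ~ 2n - 1/3, adequate for n >~ 0.5."""
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((R / R_e) ** (1.0 / n) - 1.0))

R = np.linspace(0.1, 20.0, 200)
bulge = sersic(R, I_e=1.0, R_e=3.0, n=4.0)   # de Vaucouleurs-like bulge
disc = sersic(R, I_e=0.5, R_e=8.0, n=1.0)    # n = 1 is an exponential disc
profile = bulge + disc                        # simple bulge + disc decomposition
```

By construction the profile passes through I_e at the effective radius R_e, which is what makes (I_e, R_e, n) a convenient parameterization for 1D decomposition codes.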
Jbabdi, Saad; Sotiropoulos, Stamatios N; Savio, Alexander M; Graña, Manuel; Behrens, Timothy EJ
2012-01-01
In this article, we highlight an issue that arises when using multiple b-values in a model-based analysis of diffusion MR data for tractography. The non-mono-exponential decay, commonly observed in experimental data, is shown to induce over-fitting in the distribution of fibre orientations when not considered in the model. Extra fibre orientations perpendicular to the main orientation arise to compensate for the slower apparent signal decay at higher b-values. We propose a simple extension to the ball and stick model based on a continuous Gamma distribution of diffusivities, which significantly improves the fitting and reduces the over-fitting. Using in-vivo experimental data, we show that this model outperforms a simpler, noise floor model, especially at the interfaces between brain tissues, suggesting that partial volume effects are a major cause of the observed non-mono-exponential decay. This model may be helpful for future data acquisition strategies that may attempt to combine multiple shells to improve estimates of fibre orientations in white matter and near the cortex. PMID:22334356
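The Gamma-distribution extension has a convenient closed form: averaging exp(-b*D) over a Gamma density of diffusivities gives (1 + b*scale)^(-shape). A sketch with assumed parameter values, cross-checked against a numerical average (the shape/scale values are illustrative, not from the paper):

```python
import numpy as np

def gamma_signal(b, shape, scale):
    """Closed-form diffusion signal E[exp(-b*D)] when the diffusivity D
    follows a Gamma(shape, scale) distribution."""
    return (1.0 + b * scale) ** (-shape)

# Cross-check the closed form against a numerical average over the density
shape, scale = 2.5, 0.9e-3          # hypothetical shape / scale (mm^2/s)
D = np.linspace(1e-8, 0.02, 400001)
dD = D[1] - D[0]
pdf = D ** (shape - 1.0) * np.exp(-D / scale)
pdf /= np.sum(pdf) * dD             # normalize numerically
b = 1000.0                          # b-value in s/mm^2
numeric = float(np.sum(np.exp(-b * D) * pdf) * dD)
```

This is why the extended model captures non-mono-exponential decay with only one extra parameter: the Laplace transform of the Gamma density is a power law in b rather than a single exponential.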
Ocean feature recognition using genetic algorithms with fuzzy fitness functions (GA/F3)
NASA Technical Reports Server (NTRS)
Ankenbrandt, C. A.; Buckles, B. P.; Petry, F. E.; Lybanon, M.
1990-01-01
A model for genetic algorithms with semantic nets is derived in which the relationships between concepts are depicted as a semantic net. An organism represents the manner in which objects in a scene are attached to concepts in the net. Predicates between object pairs are continuous-valued truth functions in the form of an inverse exponential function (e^(-β|x|)). 1:n relationships are combined via the fuzzy OR (max(...)). Finally, predicates between pairs of concepts are resolved by taking the average of the combined predicate values of the objects attached to the concept at the tail of the arc representing the predicate in the semantic net. The method is illustrated by applying it to the identification of oceanic features in the North Atlantic.
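The inverse-exponential predicate and its fuzzy combination rules can be sketched directly; β and the toy truth values below are illustrative:

```python
import math

def predicate(x, beta=0.5):
    """Continuous truth value of a relation between two objects, as an
    inverse exponential of the mismatch x: exp(-beta*|x|). beta is arbitrary."""
    return math.exp(-beta * abs(x))

def fuzzy_or(truth_values):
    """Combine 1:n relationships with the fuzzy OR (max)."""
    return max(truth_values)

def concept_predicate(per_object_truths):
    """Resolve a predicate between two concepts: average the fuzzy-OR-combined
    truth values of the objects attached to the tail concept."""
    return sum(fuzzy_or(t) for t in per_object_truths) / len(per_object_truths)
```

The resulting concept-level value in [0, 1] is what the genetic algorithm can use as (part of) the fitness of an organism, i.e. of one candidate object-to-concept assignment.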
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S P; Bhatia, Kunwar S; Wang, Yi-Xiang J; Ahuja, Anil T; King, Ann D
2014-01-01
To technically investigate the non-Gaussian diffusion behaviour of head and neck diffusion-weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM), and a statistical model, in patients with nasopharyngeal carcinoma (NPC). After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm(2). DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models for the primary tumor, metastatic nodes, spinal cord, and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Diffusion in NPC exhibited non-Gaussian behavior over the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC both in magnitude and in histogram distribution. Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential as a complementary tool for NPC characterization.
Wilczyński, Sławomir; Pilawa, Barbara; Koprowski, Robert; Wróbel, Zygmunt; Ptaszkiewicz, Marta; Swakoń, Jan; Olko, Paweł
2014-03-01
The aim of this work was to determine the concentrations and properties of free radicals in piperacillin, ampicillin, and crystalline penicillin after gamma irradiation. The radicals were studied by electron paramagnetic resonance (EPR) spectroscopy using an X-band spectrometer (9.3 GHz). Gamma irradiation was performed at a dose of 25 kGy. One- and two-exponential functions were fitted to the experimental data in order to assess the influence of the antibiotics' storage time on the measured EPR lines. After gamma irradiation, complex EPR lines were recorded, confirming the presence of a large number of free radicals formed during the irradiation. For all tested antibiotics, the concentrations of free radicals and the parameters of the EPR spectra changed with storage time. The results obtained demonstrate that the concentration of free radicals and other spectroscopic parameters can be used to select the optimal parameters of radiation sterilization of β-lactam antibiotics. The most important parameters are the constants τ (τ1(A,I) and τ2(A,I)) and K (K0(A,I), K1(A,I), K2(A,I)) of the exponential functions that describe the free-radical decay during sample storage.
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of the Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the k-th derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
The Investigation of Optimal Discrete Approximations for Real Time Flight Simulations
NASA Technical Reports Server (NTRS)
Parrish, E. A.; Mcvey, E. S.; Cook, G.; Henderson, K. C.
1976-01-01
The results are presented of an investigation of discrete approximations for real-time flight simulation. Major topics discussed include: (1) consideration of the particular problem of approximation of continuous autopilots by digital autopilots; (2) use of Bode plots and synthesis of transfer functions by asymptotic fits in a warped frequency domain; (3) an investigation of the various substitution formulas, including the effects of nonlinearities; (4) use of the Padé approximation to the matrix exponential arising from the discrete state equations; and (5) an analytical integration of the state equation using interpolated input.
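The substitution-formula comparison can be illustrated on a single stable pole: the (1,1) Padé approximant to the matrix exponential (the bilinear/Tustin map, here in scalar form) tracks the exact discrete pole exp(aT) far better than forward Euler. The pole location and sampling period are arbitrary:

```python
import math

def euler_pole(aT):
    """Forward-Euler substitution: z = 1 + aT."""
    return 1.0 + aT

def pade11_pole(aT):
    """(1,1) Pade approximant to exp(aT): z = (1 + aT/2)/(1 - aT/2)."""
    return (1.0 + 0.5 * aT) / (1.0 - 0.5 * aT)

a, T = -2.0, 0.05        # continuous pole at s = -2, sampled at 20 Hz
exact = math.exp(a * T)  # exact discrete pole exp(aT)
err_euler = abs(euler_pole(a * T) - exact)
err_pade = abs(pade11_pole(a * T) - exact)
```

For a matrix state equation the same formula reads (I - AT/2)^(-1)(I + AT/2), and, unlike forward Euler, it preserves stability of the continuous system for any sampling period.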
Hard-sphere-like dynamics in highly concentrated alpha-crystallin suspensions
Vodnala, Preeti; Karunaratne, Nuwan; Lurio, Laurence; ...
2018-02-02
The dynamics of concentrated suspensions of the eye-lens protein alpha crystallin have been measured using x-ray photon correlation spectroscopy. Measurements were made at wave vectors corresponding to the first peak in the hard-sphere structure factor and volume fractions close to the critical volume fraction for the glass transition. Langevin dynamics simulations were also performed in parallel to the experiments. The intermediate scattering function f(q,τ) could be fit using a stretched exponential decay for both experiments and numerical simulations. The measured relaxation times show good agreement with simulations for polydisperse hard-sphere colloids.
Hard-sphere-like dynamics in highly concentrated alpha-crystallin suspensions
NASA Astrophysics Data System (ADS)
Vodnala, Preeti; Karunaratne, Nuwan; Lurio, Laurence; Thurston, George M.; Vega, Michael; Gaillard, Elizabeth; Narayanan, Suresh; Sandy, Alec; Zhang, Qingteng; Dufresne, Eric M.; Foffi, Giuseppe; Grybos, Pawel; Kmon, Piotr; Maj, Piotr; Szczygiel, Robert
2018-02-01
The dynamics of concentrated suspensions of the eye-lens protein alpha crystallin have been measured using x-ray photon correlation spectroscopy. Measurements were made at wave vectors corresponding to the first peak in the hard-sphere structure factor and volume fractions close to the critical volume fraction for the glass transition. Langevin dynamics simulations were also performed in parallel to the experiments. The intermediate scattering function f(q,τ) could be fit using a stretched exponential decay for both experiments and numerical simulations. The measured relaxation times show good agreement with simulations for polydisperse hard-sphere colloids.
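A minimal sketch of the stretched-exponential (Kohlrausch-Williams-Watts) fit described above, using synthetic data in place of the measured correlation functions; the parameter values are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(tau, tau0, beta):
    # KWW form: f(q, tau) = exp(-(tau/tau0)**beta)
    # beta < 1 stretches the decay relative to a simple exponential
    return np.exp(-(tau / tau0) ** beta)

# Synthetic, noiseless correlation data on a logarithmic delay grid
tau = np.logspace(-3, 2, 60)
f_data = stretched_exp(tau, 0.5, 0.7)

popt, _ = curve_fit(stretched_exp, tau, f_data, p0=(1.0, 1.0))
tau0_fit, beta_fit = popt
```

With noisy data, weighting the fit by the correlation-function uncertainties and constraining beta to (0, 1] is usually advisable.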
Empirical study on a directed and weighted bus transport network in China
NASA Astrophysics Data System (ADS)
Feng, Shumin; Hu, Baoyu; Nie, Cen; Shen, Xianghao
2016-01-01
Bus transport networks are directed complex networks that consist of routes, stations, and passenger flow. In this study, the concept of a duplication factor is introduced to analyze the differences between uplinks and downlinks for the bus transport network of Harbin (BTN-H). Further, a new representation model for BTNs, named directed-space P, is proposed. Two empirical characteristics of BTN-H are reported in this paper. First, the cumulative distributions of weighted degree, degree, number of routes that connect to each station, and node weight (peak-hour trips at a station) uniformly follow the exponential law. Meanwhile, the node weight shows positive correlations with the corresponding weighted degree, degree, and number of routes that connect to a station. Second, a new richness parameter is defined for each node from its node weight, and the connectivity, weighted connectivity, average shortest path length, and efficiency between rich nodes can be fitted by composite exponential functions, demonstrating the rich-club phenomenon.
Hargrove, James L; Heinz, Grete; Heinz, Otto
2008-01-01
Background This study evaluated whether the changes in several anthropometric and functional measures during caloric restriction combined with walking and treadmill exercise would fit a simple model of approach to steady state (a plateau) that can be solved using spreadsheet software (Microsoft Excel®). We hypothesized that transitions in waist girth and several body compartments would fit a simple exponential model that approaches a stable steady-state. Methods The model (an equation) was applied to outcomes reported in the Minnesota starvation experiment using Microsoft Excel's Solver® function to derive rate parameters (k) and projected steady state values. However, data for most end-points were available only at t = 0, 12 and 24 weeks of caloric restriction. Therefore, we derived 2 new equations that enable model solutions to be calculated from 3 equally spaced data points. Results For the group of male subjects in the Minnesota study, body mass declined with a first order rate constant of about 0.079 wk-1. The fractional rate of loss of fat free mass, which includes components that remained almost constant during starvation, was 0.064 wk-1, compared to a rate of loss of fat mass of 0.103 wk-1. The rate of loss of abdominal fat, as exemplified by the change in the waist girth, was 0.213 wk-1. On average, 0.77 kg was lost per cm of waist girth. Other girths showed rates of loss between 0.085 and 0.131 wk-1. Resting energy expenditure (REE) declined at 0.131 wk-1. Changes in heart volume, hand strength, work capacity and N excretion showed rates of loss in the same range. The group of 32 subjects was close to steady state or had already reached steady state for the variables under consideration at the end of semi-starvation. 
Conclusion When energy intake is changed to new, relatively constant levels, while physical activity is maintained, changes in several anthropometric and physiological measures can be modeled as an exponential approach to steady state using software that is widely available. The 3 point method for parameter estimation provides a criterion for testing whether change in a variable can be usefully modelled with exponential kinetics within the time range for which data are available. PMID:18840293
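The three-equally-spaced-points solution described above can be written in closed form: for y(t) = y_ss + (y0 - y_ss) e^(-kt), the ratio of successive increments equals e^(-k Δt), which yields both k and the steady-state value. A sketch of that calculation (the numbers below are illustrative, not the Minnesota data):

```python
import math

def three_point_exponential(y0, y1, y2, dt):
    """Estimate steady state and rate constant from three equally spaced
    observations of y(t) = y_ss + (y0 - y_ss) * exp(-k t).

    Assumes a monotone approach, i.e. r = (y2-y1)/(y1-y0) lies in (0, 1).
    """
    r = (y2 - y1) / (y1 - y0)                    # common ratio = exp(-k dt)
    y_ss = (y0 * y2 - y1 ** 2) / (y0 + y2 - 2 * y1)
    k = -math.log(r) / dt
    return y_ss, k

# Body-mass-like decline sampled at t = 0, 12, 24 weeks (invented values)
y0, y_ss_true, k_true = 70.0, 55.0, 0.079
ys = [y_ss_true + (y0 - y_ss_true) * math.exp(-k_true * t) for t in (0, 12, 24)]
y_ss, k = three_point_exponential(*ys, dt=12.0)
```

The closed form follows because (y1 - y_ss)^2 = (y0 - y_ss)(y2 - y_ss) when the three points lie on one exponential; in practice it also serves as the paper's criterion for whether exponential kinetics usefully describe the data.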
Time-Dependent Testing Evaluation and Modeling for Rubber Stopper Seal Performance.
Zeng, Qingyu; Zhao, Xia
2018-01-01
Sufficient rubber stopper sealing performance throughout the entire sealed product life cycle is essential for maintaining container closure integrity in the parenteral packaging industry. However, prior publications have lacked systematic considerations for the time-dependent influence on sealing performance that results from the viscoelastic characteristics of the rubber stoppers. In this paper, we report results of an effort to study these effects by applying both compression stress relaxation testing and residual seal force testing for time-dependent experimental data collection. These experiments were followed by modeling fit calculations based on the Maxwell-Wiechert theory modified with the Kohlrausch-Williams-Watts stretched exponential function, resulting in a nonlinear, time-dependent sealing force model. By employing both testing evaluations and modeling calculations, an in-depth understanding of the time-dependent effects on rubber stopper sealing force was developed. Both testing and modeling data show good consistency, demonstrating that the sealing force decays exponentially over time and eventually levels off because of the viscoelastic nature of the rubber stoppers. The nonlinearity of stress relaxation derives from the viscoelastic characteristics of the rubber stoppers coupled with the large stopper compression deformation into restrained geometry conditions. The modeling fit with capability to handle actual testing data can be employed as a tool to calculate the compression stress relaxation and residual seal force throughout the entire sealed product life cycle. In addition to being time-dependent, stress relaxation is also experimentally shown to be temperature-dependent. 
The present work provides a new, integrated methodology framework and some fresh insights to the parenteral packaging industry for practically and proactively considering, designing, setting up, controlling, and managing stopper sealing performance throughout the entire sealed product life cycle. LAY ABSTRACT: Historical publications in the parenteral packaging industry have lacked systematic consideration of the time-dependent influence on sealing performance that results from the viscoelastic characteristics of the rubber stoppers. This study applied compression stress relaxation testing and residual seal force testing for time-dependent experimental data collection. These experiments were followed by modeling fit calculations based on the Maxwell-Wiechert theory modified with the Kohlrausch-Williams-Watts stretched exponential function, resulting in a nonlinear, time-dependent sealing force model. Experimental and modeling data show good consistency, demonstrating that sealing force decays exponentially over time and eventually levels off. The nonlinearity of stress relaxation derives from the viscoelastic characteristics of the rubber stoppers coupled with the large stopper compression deformation into restrained geometry conditions. In addition to being time-dependent, stress relaxation is also experimentally shown to be temperature-dependent. The present work provides a new, integrated methodology framework and some fresh insights to the industry for practically and proactively considering, designing, setting up, controlling, and managing stopper sealing performance throughout the entire sealed product life cycle. © PDA, Inc. 2018.
Equivalences between nonuniform exponential dichotomy and admissibility
NASA Astrophysics Data System (ADS)
Zhou, Linfeng; Lu, Kening; Zhang, Weinian
2017-01-01
Relationship between exponential dichotomies and admissibility of function classes is a significant problem for hyperbolic dynamical systems. It was proved that a nonuniform exponential dichotomy implies several admissible pairs of function classes and conversely some admissible pairs were found to imply a nonuniform exponential dichotomy. In this paper we find an appropriate admissible pair of classes of Lyapunov bounded functions which is equivalent to the existence of nonuniform exponential dichotomy on half-lines R± separately, on both half-lines R± simultaneously, and on the whole line R. Additionally, the maximal admissibility is proved in the case on both half-lines R± simultaneously.
Distribution of fixed beneficial mutations and the rate of adaptation in asexual populations
Good, Benjamin H.; Rouzine, Igor M.; Balick, Daniel J.; Hallatschek, Oskar; Desai, Michael M.
2012-01-01
When large asexual populations adapt, competition between simultaneously segregating mutations slows the rate of adaptation and restricts the set of mutations that eventually fix. This phenomenon of interference arises from competition between mutations of different strengths as well as competition between mutations that arise on different fitness backgrounds. Previous work has explored each of these effects in isolation, but the way they combine to influence the dynamics of adaptation remains largely unknown. Here, we describe a theoretical model to treat both aspects of interference in large populations. We calculate the rate of adaptation and the distribution of fixed mutational effects accumulated by the population. We focus particular attention on the case when the effects of beneficial mutations are exponentially distributed, as well as on a more general class of exponential-like distributions. In both cases, we show that the rate of adaptation and the influence of genetic background on the fixation of new mutants is equivalent to an effective model with a single selection coefficient and rescaled mutation rate, and we explicitly calculate these effective parameters. We find that the effective selection coefficient exactly coincides with the most common fixed mutational effect. This equivalence leads to an intuitive picture of the relative importance of different types of interference effects, which can shift dramatically as a function of the population size, mutation rate, and the underlying distribution of fitness effects. PMID:22371564
Ciambella, J; Paolone, A; Vidoli, S
2014-09-01
We report on the experimental identification of viscoelastic constitutive models for frequencies ranging from 0 to 10 Hz. Dynamic moduli data are fitted for several materials of interest to medical applications: liver tissue (Chatelin et al., 2011), bioadhesive gel (Andrews et al., 2005), spleen tissue (Nicolle et al., 2012) and synthetic elastomer (Osanaiye, 1996). These materials represent a rather wide class of soft viscoelastic materials that are usually subjected to low-frequency deformations. We also provide prescriptions for the correct extrapolation of the material behavior at higher frequencies. Indeed, while experimental tests are more easily carried out at low frequency, the identified viscoelastic models are often used outside the frequency range of the actual test. We consider two different classes of models according to their relaxation function: Debye models, whose kernel decays exponentially fast, and fractional models, including Cole-Cole, Davidson-Cole, Nutting and Havriliak-Negami, characterized by a slower decay rate of the material memory. Candidate constitutive models are hence rated according to the accuracy of the identification and to their robustness under extrapolation. It is shown that all kernels whose decay rate is too fast lead to a poor fit and high errors when the material behavior is extrapolated to broader frequency ranges. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Allen, Linda J. S.
2016-09-01
Dr. Chowell and colleagues emphasize the importance of considering a variety of modeling approaches to characterize the growth of an epidemic during the early stages [1]. A fit of data from the 2009 H1N1 influenza pandemic and the 2014-2015 Ebola outbreak to models indicates sub-exponential growth, in contrast to the classic, homogeneous-mixing SIR model with exponential growth. With incidence rate βSI / N and S approximately equal to the total population size N, the number of new infections in an SIR epidemic model grows exponentially as in the differential equation,
Revill, Ann L; Fuglevand, Andrew J
2017-01-01
Motor neurons are the output neurons of the central nervous system and are responsible for controlling muscle contraction. When initially activated during voluntary contraction, firing rates of motor neurons increase steeply but then level out at modest rates. Activation of an intrinsic source of excitatory current at recruitment onset may underlie the initial steep increase in firing rate in motor neurons. We attempted to disable this intrinsic excitatory current by artificially activating an inhibitory reflex. When motor neuron activity was recorded while the inhibitory reflex was engaged, firing rates no longer increased steeply, suggesting that the intrinsic excitatory current was probably responsible for the initial sharp rise in motor neuron firing rate. During graded isometric contractions, motor unit (MU) firing rates increase steeply upon recruitment but then level off at modest rates even though muscle force continues to increase. The mechanisms underlying such firing behaviour are not known although activation of persistent inward currents (PICs) might be involved. PICs are intrinsic, voltage-dependent currents that activate strongly when motor neurons (MNs) are first recruited. Such activation might cause a sharp escalation in depolarizing current and underlie the steep initial rise in MU firing rate. Because PICs can be disabled with synaptic inhibition, we hypothesized that artificial activation of an inhibitory pathway might curb this initial steep rise in firing rate. To test this, human subjects performed slow triangular ramp contractions of the ankle dorsiflexors in the absence and presence of tonic synaptic inhibition delivered to tibialis anterior (TA) MNs by sural nerve stimulation. Firing rate profiles (expressed as a function of contraction force) of TA MUs recorded during these tasks were compared for control and stimulation conditions. 
Under control conditions, during the ascending phase of the triangular contractions, 93% of the firing rate profiles were best fitted by rising exponential functions. With stimulation, however, firing rate profiles were best fitted with linear functions or with less steeply rising exponentials. Firing rate profiles for the descending phases of the contractions were best fitted with linear functions for both control and stimulation conditions. These results seem consistent with the idea that PICs contribute to non-linear firing rate profiles during ascending but not descending phases of contractions. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
NASA Technical Reports Server (NTRS)
Baker, J. R. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Least squares techniques were applied for parameter estimation of functions to predict winter wheat phenological stage with daily maximum temperature, minimum temperature, daylength, and precipitation as independent variables. After parameter estimation, tests were conducted using independent data. It may generally be concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. The Robertson triquadratic form, in general use for spring wheat, yielded good results, but special techniques and care are required. In most instances, equations with nonlinear effects were found to yield erratic results when utilized with averaged daily environmental values as independent variables.
Fluorescence and afterglow of Ca2Sn2Al2O9:Mn2+
NASA Astrophysics Data System (ADS)
Takemoto, Minoru; Iseki, Takahiro
2018-03-01
By using a polymerized complex method, we synthesized manganese (Mn)-doped Ca2Sn2Al2O9, which exhibits yellow fluorescence and afterglow at room temperature when excited by UV radiation. The material emits a broad, featureless fluorescence band centered at 564 nm, which we attribute to the presence of Mn2+ ions. The afterglow decay is well fit by a power-law function, rather than an exponential function. In addition, thermoluminescence analyses demonstrate that two different types of electron traps form in this material. Based on experimental results, we conclude that the fluorescence and afterglow both result from thermally assisted tunneling, in which trapped electrons are thermally excited to higher-level traps and subsequently tunnel to recombination centers.
Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed
NASA Astrophysics Data System (ADS)
Walsh, Alex J.; Beier, Hope T.
2016-03-01
Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging using laser scanning microscopes. However, TCSPC is inherently slow, making it ill-suited to capturing rapid events: at most one photon is recorded per laser pulse, which imposes long acquisition times and requires low fluorescence emission rates to avoid biasing measurements towards short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument response deconvolution and fluorescence lifetime exponential decay estimation. Instrument response deconvolution and fluorescence exponential decay estimation can be performed in several ways, including iterative least squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques in estimating double exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low-photon-count data. Such a technique reduces the required number of photons for accurate component estimation when lifetime values are known, such as for commercial fluorescent dyes and FRET experiments, and improves imaging speed 10-fold.
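Temporal binning of the kind evaluated here amounts to summing adjacent channels of the decay histogram, trading time resolution for counts per bin. A minimal sketch (the 252-channel histogram and the factor-of-6 reduction to 42 bins are illustrative choices, not the paper's acquisition settings):

```python
import numpy as np

def rebin_decay(counts, n_out):
    """Sum adjacent time bins of a TCSPC decay histogram.

    len(counts) must be divisible by n_out; total photon count is
    conserved, so only time resolution is traded for counts per bin.
    """
    counts = np.asarray(counts)
    assert counts.size % n_out == 0, "channel count must divide evenly"
    return counts.reshape(n_out, -1).sum(axis=1)

# 252 original channels -> 42 coarser bins (groups of 6 adjacent channels)
rng = np.random.default_rng(1)
decay = rng.poisson(lam=5.0, size=252)
coarse = rebin_decay(decay, 42)
```

Spatial binning works the same way across neighboring pixels; both raise the effective photon count per fitted decay without extending acquisition time.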
Flash spectroscopy of purple membrane.
Xie, A H; Nagle, J F; Lozier, R H
1987-01-01
Flash spectroscopy data were obtained for purple membrane fragments at pH 5, 7, and 9 for seven temperatures from 5 degrees to 35 degrees C, at the magic angle for actinic versus measuring beam polarizations, at fifteen wavelengths from 380 to 700 nm, and for about five decades of time from 1 microsecond to completion of the photocycle. Signal-to-noise ratios are as high as 500. Systematic errors involving beam geometries, light scattering, absorption flattening, photoselection, temperature fluctuations, partial dark adaptation of the sample, unwanted actinic effects, and cooperativity were eliminated, compensated for, or are shown to be irrelevant for the conclusions. Using nonlinear least squares techniques, all data at one temperature and one pH were fitted to sums of exponential decays, which is the form required if the system obeys conventional first-order kinetics. The rate constants obtained have well behaved Arrhenius plots. Analysis of the residual errors of the fitting shows that seven exponentials are required to fit the data to the accuracy of the noise level. PMID:3580488
Wyllie, David J A; Béhé, Philippe; Colquhoun, David
1998-01-01
We have expressed recombinant NR1a/NR2A and NR1a/NR2D N-methyl-D-aspartate (NMDA) receptor channels in Xenopus oocytes and made recordings of single-channel and macroscopic currents in outside-out membrane patches. For each receptor type we measured (a) the individual single-channel activations evoked by low glutamate concentrations in steady-state recordings, and (b) the macroscopic responses elicited by brief concentration jumps with high agonist concentrations, and we explore the relationship between these two sorts of observation. Low concentration (5–100 nM) steady-state recordings of NR1a/NR2A and NR1a/NR2D single-channel activity generated shut-time distributions that were best fitted with a mixture of five and six exponential components, respectively. Individual activations of either receptor type were resolved as bursts of openings, which we refer to as ‘super-clusters’. During a single activation, NR1a/NR2A receptors were open for 36 % of the time, but NR1a/NR2D receptors were open for only 4 % of the time. For both, distributions of super-cluster durations were best fitted with a mixture of six exponential components. Their overall mean durations were 35.8 and 1602 ms, respectively. Steady-state super-clusters were aligned on their first openings and averaged. The average was well fitted by a sum of exponentials with time constants taken from fits to super-cluster length distributions. It is shown that this is what would be expected for a channel that shows simple Markovian behaviour. The current through NR1a/NR2A channels following a concentration jump from zero to 1 mM glutamate for 1 ms was well fitted by three exponential components with time constants of 13 ms (rising phase), 70 ms and 350 ms (decaying phase). Similar concentration jumps on NR1a/NR2D channels were well fitted by two exponentials with means of 45 ms (rising phase) and 4408 ms (decaying phase) components. 
During prolonged exposure to glutamate, NR1a/NR2A channels desensitized with a time constant of 649 ms, while NR1a/NR2D channels exhibited no apparent desensitization. We show that under certain conditions, the time constants for the macroscopic jump response should be the same as those for the distribution of super-cluster lengths, though the resolution of the latter is so much greater that it cannot be expected that all the components will be resolvable in a macroscopic current. Good agreement was found for jumps on NR1a/NR2D receptors, and for some jump experiments on NR1a/NR2A. However, the latter were rather variable and some were slower than predicted. Slow decays were associated with patches that had large currents. PMID:9625862
NASA Astrophysics Data System (ADS)
Kuai, Zi-Xiang; Liu, Wan-Yu; Zhu, Yue-Min
2017-11-01
The aim of this work was to investigate the effect of multiple perfusion components on the pseudo-diffusion coefficient D* in the bi-exponential intravoxel incoherent motion (IVIM) model. Simulations were first performed to examine how the presence of multiple perfusion components influences D*. The real data of livers (n = 31), spleens (n = 31) and kidneys (n = 31) of 31 volunteers was then acquired using DWI for in vivo study and the number of perfusion components in these tissues was determined together with their perfusion fraction and D*, using an adaptive multi-exponential IVIM model. Finally, the bi-exponential model was applied to the real data and the mean, standard variance and coefficient of variation of D* as well as the fitting residual were calculated over the 31 volunteers for each of the three tissues and compared between them. The results of both the simulations and the in vivo study showed that, for the bi-exponential IVIM model, both the variance of D* and the fitting residual tended to increase when the number of perfusion components was increased or when the difference between perfusion components became large. In addition, it was found that the kidney presented the fewest perfusion components among the three tissues. The present study demonstrated that multi-component perfusion is a main factor that causes high variance of D* and the bi-exponential model should be used only when the tissues under investigation have few perfusion components, for example the kidney.
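A hedged sketch of the bi-exponential IVIM fit discussed above: the signal model is S(b)/S0 = f exp(-b D*) + (1 - f) exp(-b D). The b-values and tissue parameters below are invented for illustration and do not come from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, D_star, D):
    # Bi-exponential IVIM: perfusion fraction f decays with the fast
    # pseudo-diffusion coefficient D*, the rest with tissue diffusion D
    return f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D)

# Illustrative b-values (s/mm^2) and a noiseless synthetic signal
b = np.array([0, 10, 20, 40, 80, 150, 300, 500, 800], dtype=float)
f_true, Ds_true, D_true = 0.25, 0.05, 0.0012
signal = ivim(b, f_true, Ds_true, D_true)

# Bounded fit keeps the parameters physical (0 <= f <= 1, D* > D > 0)
popt, _ = curve_fit(ivim, b, signal, p0=(0.1, 0.01, 0.001),
                    bounds=([0, 0, 0], [1, 1, 0.01]))
```

In practice this fit is ill-conditioned at clinical noise levels, which is consistent with the paper's finding of high D* variance whenever perfusion is not close to single-component.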
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Abstract Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as: progression for a >15% volume increase, regression for a >15% decrease, and stabilization for a change within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% in the 36-month (range 18–87 months) follow-up period (mean volume change of −43.3%). Volume regression (mean decrease of −50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of −3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI) as tested by Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients, and it was not predictive of eventual volume regression or progression. 
A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled. PMID:28121913
NASA Astrophysics Data System (ADS)
Hu, Li; Zhao, Nanjing; Liu, Wenqing; Meng, Deshuo; Fang, Li; Wang, Yin; Yu, Yang; Ma, Mingjun
2015-08-01
Heavy metals in water can be deposited on graphite flakes, which can be used as an enrichment method for laser-induced breakdown spectroscopy (LIBS) and is studied in this paper. The graphite samples were prepared with an automatic device composed of a loading and unloading module, a quantitative solution-adding module, a rapid heating and drying module and a precise rotating module. The experimental results showed that the sample preparation method had no significant effect on sample distribution and that the LIBS signal accumulated over 20 pulses was stable and repeatable. With an increasing amount of sample solution on the graphite flake, the peak intensity at Cu I 324.75 nm followed an exponential function with a correlation coefficient of 0.9963, while the background intensity remained unchanged. The limit of detection (LOD) was calculated through linear fitting of the peak intensity versus the concentration. The LOD decreased rapidly with an increasing amount of sample solution until the amount exceeded 20 mL; the correlation coefficient of the exponential function fit was 0.991. The LOD of Pb, Ni, Cd, Cr and Zn after evaporating different amounts of sample solution on the graphite flakes was measured, and the variation of their LOD with sample solution amount was similar to that for Cu. The experimental data and conclusions could provide a reference for automatic sample preparation and in situ detection of heavy metals. Supported by the National Natural Science Foundation of China (No. 60908018), the National High Technology Research and Development Program of China (No. 2013AA065502) and the Anhui Province Outstanding Youth Science Fund of China (No. 1108085J19).
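The LOD calculation described above, a slope from a linear calibration fit combined with the common 3σ (IUPAC) criterion, can be sketched as follows; the calibration numbers are invented for illustration:

```python
import numpy as np

def limit_of_detection(conc, intensity, blank_sd):
    """LOD from a linear calibration curve: LOD = 3 * sigma_blank / slope.

    The 3-sigma criterion is the common IUPAC convention; blank_sd is
    the standard deviation of the blank (background) signal.
    """
    slope, _intercept = np.polyfit(conc, intensity, 1)
    return 3.0 * blank_sd / slope

# Invented calibration points: peak intensity vs concentration (mg/L)
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
intensity = 120.0 * conc + 30.0        # ideal linear response
lod = limit_of_detection(conc, intensity, blank_sd=6.0)
```

Because LOD scales inversely with the calibration slope, the enrichment step described above (evaporating more sample solution, which raises the peak intensity per unit concentration) directly lowers the LOD.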
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burm, A.G.; Van Kleef, J.W.; Vermeulen, N.P.
1988-10-01
The pharmacokinetics of lidocaine and bupivacaine following subarachnoid administration were studied in 12 surgical patients using a stable isotope method. After subarachnoid administration of the agent to be evaluated, a deuterium-labelled analogue was administered intravenously. Blood samples were collected for 24 h. Plasma concentrations of the unlabelled and the deuterium-labelled local anesthetics were determined using a combination of capillary gas chromatography and mass fragmentography. Bi-exponential functions were fitted to the plasma concentration-time data of the deuterium-labelled local anesthetics. The progression of the absorption was evaluated using deconvolution. Mono- and bi-exponential functions were then fitted to the fraction absorbed versus time data. The distribution and elimination half-lives of the deuterium-labelled analogues were 25 +/- 13 min (mean +/- SD) and 121 +/- 31 min for lidocaine and 19 +/- 10 min and 131 +/- 33 min for bupivacaine. The volumes of the central compartment and steady-state volumes of distribution were: lidocaine 57 +/- 10 l and 105 +/- 25 l, bupivacaine 25 +/- 6 l and 63 +/- 22 l. Total plasma clearance values averaged 0.97 +/- 0.21 l/min for lidocaine and 0.56 +/- 0.14 l/min for bupivacaine. The absorption of lidocaine could be described by a single first order absorption process, characterized by a half-life of 71 +/- 17 min in five out of six patients. The absorption of bupivacaine could be described adequately assuming two parallel first order absorption processes in all six patients. The half-lives, characterizing the fast and slow absorption processes of bupivacaine, were 50 +/- 27 min and 408 +/- 275 min, respectively. The fractions of the dose, absorbed in the fast and slow processes, were 0.35 +/- 0.17 and 0.61 +/- 0.16, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campione, Salvatore; Warne, Larry K.; Sainath, Kamalesh
In this report we overview the fundamental concepts for a pair of techniques which together greatly hasten computational predictions of electromagnetic pulse (EMP) excitation of finite-length dissipative conductors over a ground plane. In a time-domain, transmission line (TL) model implementation, predictions are computationally bottlenecked time-wise, either for late-time predictions (about the 100 ns-10000 ns range) or predictions concerning EMP excitation of long TLs (on the order of kilometers or more). This is because the method requires a temporal convolution to account for the losses in the ground. Addressing this to facilitate practical simulation of EMP excitation of TLs, we first apply a technique to extract an (approximate) complex exponential function basis-fit to the ground/Earth's impedance function, followed by incorporating this into a recursion-based convolution acceleration technique. Because the recursion-based method only requires the evaluation of the most recent voltage history data (versus the entire history in a "brute-force" convolution evaluation), we achieve the necessary time speed-ups across a variety of TL/Earth geometry/material scenarios.
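The speed-up rests on the fact that convolution with an exponential kernel admits an exact one-step recursion. A minimal sketch for a single real exponential term (the report uses a basis of complex exponentials fitted to the ground impedance, but the recursion has the same shape per term):

```python
import numpy as np

def brute_convolution(v, a, c, dt):
    # Direct O(N^2) evaluation: y_n = sum_{m<=n} c*exp(-a*(n-m)*dt) * v_m * dt
    n = len(v)
    y = np.zeros(n)
    for i in range(n):
        for m in range(i + 1):
            y[i] += c * np.exp(-a * (i - m) * dt) * v[m] * dt
    return y

def recursive_convolution(v, a, c, dt):
    # O(N) evaluation using only the most recent history value:
    # y_n = exp(-a*dt) * y_{n-1} + c*dt*v_n
    decay = np.exp(-a * dt)
    y = np.zeros(len(v))
    y[0] = c * dt * v[0]
    for i in range(1, len(v)):
        y[i] = decay * y[i - 1] + c * dt * v[i]
    return y

rng = np.random.default_rng(1)
v = rng.standard_normal(200)          # stand-in for the incident-field drive
y_direct = brute_convolution(v, a=2.0, c=1.0, dt=0.05)
y_fast = recursive_convolution(v, a=2.0, c=1.0, dt=0.05)
```

The two evaluations agree to machine precision, but the recursive form never revisits the full voltage history, which is the source of the late-time speed-up.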
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.
2002-01-01
The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant stress rate and preload testing at ambient and elevated temperatures. The data fit to the relation of strength versus the log of the stress rate was very reasonable for most of the materials. Also, the preloading technique was determined equally applicable to the case of slow-crack-growth (SCG) parameter n greater than 30 for both the power-law and exponential formulations. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important SCG parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.
Abe, Sumiyoshi
2002-10-01
The q-exponential distributions, which are generalizations of the Zipf-Mandelbrot power-law distribution, are frequently encountered in complex systems at their stationary states. From the viewpoint of the principle of maximum entropy, they can apparently be derived from three different generalized entropies: the Rényi entropy, the Tsallis entropy, and the normalized Tsallis entropy. Accordingly, mere fittings of observed data by the q-exponential distributions do not lead to identification of the correct physical entropy. Here, stabilities of these entropies, i.e., their behaviors under arbitrary small deformation of a distribution, are examined. It is shown that, among the three, the Tsallis entropy is stable and can provide an entropic basis for the q-exponential distributions, whereas the others are unstable and cannot represent any experimentally observable quantities.
NASA Astrophysics Data System (ADS)
Sousa, Felipe L. N.; Mojica-Sánchez, Lizeth C.; Gavazza, Sávia; Florencio, Lourdinha; Vaz, Elaine C. R.; Santa-Cruz, Petrus A.
2016-04-01
This work presents ‘intelligent papers’ obtained by functional inks printed on cellulose sheets by DoD inkjet technology and their performance as a photonic device for UV-radiation dosimetry. The dosimeter operation is based on the photodegradation of the active part of a photonic ink, btfa (4,4,4-trifluoro-1-phenyl-1,3-butanedione) ligands in a Eu(III) complex, as a function of the UV dose (J cm⁻²), and the one-way device is read by the luminescence quenching of the (⁵D₀ → ⁷F₂) Eu³⁺ transition after UV exposure of the printed paper. The printed dosimeter presented an exponential behavior, measured here up to 10 J cm⁻² for UV-A, UV-B and UV-C, and it was shown that the number of jetted layers could fit the dosimeter sensitivity.
NASA Astrophysics Data System (ADS)
Dhariwal, Rohit; Bragg, Andrew D.
2018-03-01
In this paper, we consider how the statistical moments of the separation between two fluid particles grow with time when their separation lies in the dissipation range of turbulence. In this range, the fluid velocity field varies smoothly and the relative velocity of two fluid particles depends linearly upon their separation. While this may suggest that the rate at which fluid particles separate is exponential in time, this is not guaranteed because the strain rate governing their separation is a strongly fluctuating quantity in turbulence. Indeed, Afik and Steinberg [Nat. Commun. 8, 468 (2017), 10.1038/s41467-017-00389-8] argue that there is no convincing evidence that the moments of the separation between fluid particles grow exponentially with time in the dissipation range of turbulence. Motivated by this, we use direct numerical simulations (DNS) to compute the moments of particle separation over very long periods of time in a statistically stationary, isotropic turbulent flow to see if we ever observe evidence for exponential separation. Our results show that if the initial separation between the particles is infinitesimal, the moments of the particle separation first grow as power laws in time, but we then observe convincing evidence that at sufficiently long times the moments do grow exponentially. However, this exponential growth is only observed after extremely long times ≳200 τη , where τη is the Kolmogorov time scale. This is due to fluctuations in the strain rate about its mean value measured along the particle trajectories, the effect of which on the moments of the particle separation persists for very long times. We also consider the backward-in-time (BIT) moments of the particle separation, and observe that they too grow exponentially in the long-time regime.
However, a dramatic consequence of the exponential separation is that at long times the difference between the rate of the particle separation forward in time (FIT) and BIT grows exponentially in time, leading to incredibly strong irreversibility in the dispersion. This is in striking contrast to the irreversibility of their relative dispersion in the inertial range, where the difference between FIT and BIT is constant in time according to Richardson's phenomenology.
Effects of Economy Type and Nicotine on the Essential Value of Food in Rats
Cassidy, Rachel N; Dallery, Jesse
2012-01-01
The exponential demand equation proposed by Hursh and Silberberg (2008) provides an estimate of the essential value of a good as a function of price. The model predicts that essential value should remain constant across changes in the magnitude of a reinforcer, but may change as a function of motivational operations. In Experiment 1, rats' demand for food across a sequence of fixed-ratio schedules was assessed during open and closed economy conditions and across one- and two-pellet per reinforcer delivery conditions. The exponential equation was fitted to the relation between fixed-ratio size and the logarithm of the absolute number of reinforcers. Estimates of the rate of change in elasticity of food, the proposed measure of essential value, were compared across conditions. Essential value was equivalent across magnitudes during the closed economy, but showed a slight decrease across magnitudes during the open economy. Experiment 2 explored the behavioral mechanisms of nicotine's effects on consumption with the results from Experiment 1 serving as a within-subject frame of reference. The same subjects were administered nicotine via subcutaneously implanted osmotic minipumps at a dose of 3 mg/kg/day and exposed to both the one- and two-pellet conditions under a closed economy. Although nicotine produced large decreases in demand, essential value was not significantly changed. The data from the present experiments provide further evidence for the adequacy of the exponential demand equation as a tool for quantifying the rate of change in elasticity of a good and for assessing behavioral mechanisms of drug action. PMID:22389525
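The Hursh and Silberberg (2008) exponential demand equation, log10 Q = log10 Q0 + k(e^(-α·Q0·C) − 1), can be fitted as sketched below. The prices, consumption values, and the range constant k = 2.5 are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def demand(C, logQ0, alpha, k=2.5):
    # Hursh & Silberberg (2008): log10 Q = log10 Q0 + k*(exp(-alpha*Q0*C) - 1);
    # alpha (rate of change in elasticity) indexes essential value, and k is a
    # range constant (2.5 here is an illustrative choice, held fixed in the fit)
    Q0 = 10.0 ** logQ0
    return logQ0 + k * (np.exp(-alpha * Q0 * C) - 1.0)

# Hypothetical fixed-ratio prices and log10 reinforcers earned: synthetic data
# generated from known parameters, not the rats' actual consumption
price = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
logQ = demand(price, 2.0, 3e-4)

# With a 2-element p0, curve_fit estimates only logQ0 and alpha; k keeps its default
popt, _ = curve_fit(demand, price, logQ, p0=[1.5, 1e-3])
logQ0_hat, alpha_hat = popt
```

A smaller fitted α indicates slower growth in elasticity with price, i.e., higher essential value of the reinforcer.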
Wang, M D; Fan, W H; Qiu, W S; Zhang, Z L; Mo, Y N; Qiu, F
2014-06-01
We present here an exponential transformation of the Abbreviated Injury Scale (AIS), called the Exponential Injury Severity Score (EISS), which significantly outperforms the venerable but dated New Injury Severity Score (NISS) and Injury Severity Score (ISS) as a predictor of mortality. The EISS is defined by transforming each AIS severity score (1-6) into 3 raised to the power of the AIS value minus 2, and then summing the three most severe injuries (i.e., highest AIS), regardless of body region. EISS values were calculated for every patient in two large independent data sets: 3,911 and 4,129 patients treated during a 6-year period at Class A tertiary hospitals in China. The power of the EISS to predict mortality was then compared with previously calculated NISS values for the same patients in each of the two data sets. We found that the EISS is more predictive of survival [Zhejiang: area under the receiver operating characteristic curve (AUC): NISS = 0.932, EISS = 0.949, P = 0.0115; Liaoning: AUC: NISS = 0.924, EISS = 0.942, P = 0.0139]. Moreover, the EISS provides a better fit throughout its entire range of prediction (Hosmer-Lemeshow statistic for Zhejiang: NISS = 21.86, P = 0.0027, EISS = 13.52, P = 0.0604; Liaoning: NISS = 23.27, P = 0.0015, EISS = 15.55, P = 0.0164). The EISS may be used as the standard summary measure of human trauma.
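A literal reading of this definition can be sketched as follows; the AIS lists in the example are hypothetical, not patient data:

```python
def eiss(ais_scores):
    """Exponential Injury Severity Score: map each AIS severity score s (1-6)
    to 3**(s - 2), then sum the three most severe injuries regardless of
    body region (a literal reading of the definition in the abstract)."""
    transformed = sorted((3 ** (s - 2) for s in ais_scores), reverse=True)
    return sum(transformed[:3])

# Hypothetical patient with injuries of AIS 5, 3, 3, and 1:
# the three most severe map to 27, 3, and 3, giving EISS = 33
score = eiss([5, 3, 3, 1])
```

The exponential transform weights critical injuries (AIS 5-6) far more heavily than the quadratic one used by the ISS/NISS, which is the source of the improved mortality prediction reported.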
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S. P.; Bhatia, Kunwar S.; Wang, Yi-Xiang J.; Ahuja, Anil T.; King, Ann D.
2014-01-01
Purpose: To technically investigate the non-Gaussian diffusion of head and neck diffusion weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and the statistical model, in patients with nasopharyngeal carcinoma (NPC). Materials and Methods: After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm2. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models for primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Results: Diffusion in NPC exhibited non-Gaussian behavior over the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC in both magnitude and histogram distribution. Conclusion: Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential to be used as a complementary tool for NPC characterization. PMID:24466318
Goodness of fit of probability distributions for sightings as species approach extinction.
Vogel, Richard M; Hosking, Jonathan R M; Elphick, Chris S; Roberts, David L; Reed, J Michael
2009-04-01
Estimating the probability that a species is extinct and the timing of extinctions is useful in biological fields ranging from paleoecology to conservation biology. Various statistical methods have been introduced to infer the time of extinction and extinction probability from a series of individual sightings. There is little evidence, however, as to which of these models provide adequate fit to actual sighting records. We use L-moment diagrams and probability plot correlation coefficient (PPCC) hypothesis tests to evaluate the goodness of fit of various probabilistic models to sighting data collected for a set of North American and Hawaiian bird populations that have either gone extinct, or are suspected of having gone extinct, during the past 150 years. For our data, the uniform, truncated exponential, and generalized Pareto models performed moderately well, but the Weibull model performed poorly. Of the acceptable models, the uniform distribution performed best based on PPCC goodness of fit comparisons and sequential Bonferroni-type tests. Further analyses using field significance tests suggest that although the uniform distribution is the best of those considered, additional work remains to evaluate the truncated exponential model more fully. The methods we present here provide a framework for evaluating subsequent models.
Study of static and dynamic magnetic properties of Fe nanoparticles composited with activated carbon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Satyendra Prakash, E-mail: sppal85@gmail.com; Department of Physical Sciences, Indian Institute of Science Education and Research, Mohali, Knowledge city, Sector81, SAS Nagar, Manauli-140306, Punjab; Kaur, Guratinder
2016-05-23
Nanocomposite of Fe nanoparticles with activated carbon has been synthesized to alter the magnetic spin-spin interaction and hence study the dilution effect on the static and dynamic magnetic properties of the Fe nanoparticle system. Transmission electron microscopic (TEM) imaging shows spherical Fe nanoparticles dispersed in a carbon matrix with a 13.8 nm particle size. Temperature dependent magnetization measurement does not show any blocking temperature at all, right up to room temperature. The magnetic hysteresis curve, taken at 300 K, shows a small value of the coercivity, and this small hysteresis indicates the presence of an energy barrier and inherent magnetization dynamics. Langevin function fitting of the hysteresis curve gives a particle size similar to that obtained from the TEM analysis. Magnetic relaxation data, taken at a temperature of 100 K, were fitted with a combination of two exponentially decaying functions. This diluted nanoparticle system, whose particle sizes lie in the superparamagnetic limit, behaves like a dilute ensemble of superspins with a large value of the magnetic anisotropy barrier.
Neutron light output response and resolution functions in EJ-309 liquid scintillation detectors
Enqvist, Andreas; Lawrence, Christopher C.; Wieger, Brian M.; ...
2013-03-26
Here, the neutron light output response functions and detector resolution functions were measured at Ohio University's tandem Van de Graaff generator for three cylindrical EJ-309 liquid scintillator cells, having dimensions (diameter by height) 12.7-by-12.7, 7.6-by-7.6, and 7.6-by-5.1 cm. A 7.44 MeV deuteron beam was used on an Al-27 target, generating a continuous spectrum over the energy range from a few hundred keV to over 10 MeV. The light output response functions are determined using an exponential fit. Detector resolution functions are obtained for the 12.7-by-12.7 and 7.6-by-7.6 cm detectors. It is demonstrated that the dependence on detector size is important for the light output response functions, but not to the same extent for the resolution function, even when photomultiplier tubes, detector material, and other detector characteristics are carefully matched.
PREdator: a python based GUI for data analysis, evaluation and fitting
2014-01-01
The analysis of a series of experimental data is an essential procedure in virtually every field of research. The information contained in the data is extracted by fitting the experimental data to a mathematical model. The type of the mathematical model (linear, exponential, logarithmic, etc.) reflects the physical laws that underlie the experimental data. Here, we aim to provide a readily accessible, user-friendly python script for data analysis, evaluation and fitting. PREdator is presented using the example of NMR paramagnetic relaxation enhancement analysis.
Modeling Rabbit Responses to Single and Multiple Aerosol ...
Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple dose dataset to predict the probability of death through specifying dose-response functions and the time between exposure and the time-to-death (TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
NASA Astrophysics Data System (ADS)
Małoszewski, P.; Zuber, A.
1982-06-01
Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which more adequately represents the real systems than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fitting than other simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fitting as the multiparameter finite state mixing-cell models. It has been shown that in the case of a constant tracer input a prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in the cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems, a serious mistake may arise from neglecting the different bicarbonate contents in the particular water components.
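As an illustration, the exponential-piston flow model combines a piston-flow delay with an exponential tail of transit times. The sketch below assumes the standard form g(t) = (η/τ)·exp(−ηt/τ + η − 1) for t ≥ τ(1 − 1/η) and zero otherwise, with illustrative parameter values:

```python
import numpy as np

def epm_transit_time(t, tau, eta):
    # Exponential-piston flow model: an exponential distribution of transit
    # times preceded by a piston-flow delay. eta >= 1; eta = 1 recovers the
    # pure exponential model. (Standard form; parameters here are illustrative.)
    t = np.asarray(t, dtype=float)
    t0 = tau * (1.0 - 1.0 / eta)  # piston-flow delay
    g = (eta / tau) * np.exp(-eta * t / tau + eta - 1.0)
    return np.where(t >= t0, g, 0.0)

# Check normalization and that the mean transit time equals tau
t = np.linspace(0.0, 100.0, 200001)
g = epm_transit_time(t, tau=10.0, eta=1.5)
dt = t[1] - t[0]
total = g.sum() * dt          # ~1: a proper probability density
mean = (t * g).sum() * dt     # ~tau: the turnover time
```

The mean transit time equals τ regardless of η, so η acts purely as the extra "structure" parameter the abstract refers to.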
Rigby, Robert A; Stasinopoulos, D Mikis
2004-10-15
The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y(nu) having a shifted and scaled (truncated) standard power exponential distribution with parameter tau. The distribution has four parameters and is denoted BCPE (mu,sigma,nu,tau). The parameters, mu, sigma, nu and tau, may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood, with respect to mu, sigma, nu and tau, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation provides a generalization of the LMS method of the centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.
Model of flare lightcurve profile observed in soft X-rays
NASA Astrophysics Data System (ADS)
Gryciuk, Magdalena; Siarkowski, Marek; Gburek, Szymon; Podgorski, Piotr; Sylwester, Janusz; Kepa, Anna; Mrozek, Tomasz
We propose a new model for the description of solar flare lightcurve profiles observed in soft X-rays. The method assumes that single-peaked 'regular' flares seen in lightcurves can be fitted with an elementary time profile that is a convolution of Gaussian and exponential functions. More complex, multi-peaked flares can be decomposed as a sum of elementary profiles. During the flare lightcurve fitting process a linear background is determined as well. In our study we allow the background over the event to change linearly with time. The presented approach was originally dedicated to the soft X-ray small flares recorded by the Polish spectrophotometer SphinX during the phase of very deep solar minimum of activity between the 23rd and 24th Solar Cycles. However, the method can and will be used to interpret the lightcurves obtained by other soft X-ray broad-band spectrometers at times of both low and higher solar activity. In the paper we introduce the model and present examples of fits to SphinX and GOES 1-8 Å channel observations as well.
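The convolution of a Gaussian with an exponential decay has a well-known closed form, the exponentially modified Gaussian, which makes the elementary profile cheap to evaluate. A sketch with illustrative parameters (not the SphinX fit values):

```python
import numpy as np
from scipy.special import erfc

def elementary_profile(t, A, mu, sigma, tau):
    # Closed form of the convolution of a Gaussian (center mu, width sigma)
    # with a normalized exponential decay (time constant tau): the
    # exponentially modified Gaussian, scaled so the time integral equals A.
    lam = 1.0 / tau
    amp = A * lam / 2.0
    gauss_exp = np.exp(lam / 2.0 * (2.0 * mu + lam * sigma**2 - 2.0 * t))
    return amp * gauss_exp * erfc((mu + lam * sigma**2 - t) / (np.sqrt(2.0) * sigma))

# Illustrative single-peak profile: fast Gaussian rise, slow exponential decay
t = np.linspace(-50.0, 250.0, 30001)
f = elementary_profile(t, A=1.0, mu=10.0, sigma=2.0, tau=15.0)
```

A multi-peaked flare would then be modeled as a sum of such profiles plus the linear background term described in the abstract.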
[Experimental study and correction of the absorption and enhancement effect between Ti, V and Fe].
Tuo, Xian-Guo; Mu, Ke-Liang; Li, Zhe; Wang, Hong-Hui; Luo, Hui; Yang, Jian-Bo
2009-11-01
The absorption and enhancement effects in X-ray fluorescence analysis of the Ti, V and Fe elements were studied in the present paper. Three artificial binary systems of Ti-V, Ti-Fe and V-Fe samples were prepared and measured by the X-ray fluorescence analysis technique using an HPGe semiconductor detector, and the relation curves between the unitary coefficient (R(K)) of the element count rate and the element content (W(K)) were obtained. Analysis of the degree of the absorption and enhancement effect between each pair of elements showed that the effect between Ti and V is relatively pronounced, while it is much weaker for Ti-Fe and V-Fe. A mathematical correction method based on exponential fitting was then used to fit the R(K)-W(K) curve and obtain a functional relation between the X-ray fluorescence count rate and the content. Three groups of Ti-V binary samples were used to test the fitting method, and the relative errors of Ti and V were less than 0.2% compared with the actual results.
Macera, Márcia A C; Louzada, Francisco; Cancho, Vicente G; Fontes, Cor J F
2015-03-01
In this paper, we introduce a new model for recurrent event data characterized by a baseline rate function fully parametric, which is based on the exponential-Poisson distribution. The model arises from a latent competing risk scenario, in the sense that there is no information about which cause was responsible for the event occurrence. Then, the time of each recurrence is given by the minimum lifetime value among all latent causes. The new model has a particular case, which is the classical homogeneous Poisson process. The properties of the proposed model are discussed, including its hazard rate function, survival function, and ordinary moments. The inferential procedure is based on the maximum likelihood approach. We consider an important issue of model selection between the proposed model and its particular case by the likelihood ratio test and score test. Goodness of fit of the recurrent event models is assessed using Cox-Snell residuals. A simulation study evaluates the performance of the estimation procedure in the presence of a small and moderate sample sizes. Applications on two real data sets are provided to illustrate the proposed methodology. One of them, first analyzed by our team of researchers, considers the data concerning the recurrence of malaria, which is an infectious disease caused by a protozoan parasite that infects red blood cells. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Environmental Noise Could Promote Stochastic Local Stability of Behavioral Diversity Evolution
NASA Astrophysics Data System (ADS)
Zheng, Xiu-Deng; Li, Cong; Lessard, Sabin; Tao, Yi
2018-05-01
In this Letter, we investigate stochastic stability in a two-phenotype evolutionary game model for an infinite, well-mixed population undergoing discrete, nonoverlapping generations. We assume that the fitness of a phenotype is an exponential function of its expected payoff following random pairwise interactions whose outcomes randomly fluctuate with time. We show that the stochastic local stability of a constant interior equilibrium can be promoted by the random environmental noise even if the system may display a complicated nonlinear dynamics. This result provides a new perspective for a better understanding of how environmental fluctuations may contribute to the evolution of behavioral diversity.
Cell growth and catecholase production for Polyporus versicolor in submerged culture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carroad, P.A.; Wilke, C.R.
1977-04-01
Cell growth and catecholase production for Polyporus versicolor (ATCC 12679) were studied in mechanically agitated submerged culture, as functions of temperature. The exponential-phase growth rate exhibited a maximum at 28°C. Over the range of 20°C to approximately 30°C, both cell mass and enzyme yield factors were constant. At higher temperatures (30 to 40°C) the cell mass yield factor decreased and the enzyme yield factor increased. The specific respiration rate of P. versicolor was determined. Thermal deactivation of catecholase was investigated between 30 and 50°C, and deactivation rates were fit to an Arrhenius rate expression.
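Fitting deactivation rates to an Arrhenius expression reduces to a linear fit of ln k against 1/T. The rate constants below are illustrative placeholders, not the paper's measurements:

```python
import numpy as np

# Hypothetical first-order deactivation rate constants (1/min) measured at
# three temperatures spanning the 30-50 degC range (illustrative values only)
T = np.array([303.15, 313.15, 323.15])   # temperatures, K
k = np.array([0.002, 0.010, 0.045])

# Arrhenius rate expression: k = A*exp(-Ea/(R*T)),
# so ln k is linear in 1/T with slope -Ea/R
R = 8.314  # gas constant, J/(mol K)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R          # activation energy of deactivation, J/mol
A_pre = np.exp(intercept)  # pre-exponential factor, 1/min
```

For enzyme thermal deactivation, activation energies on the order of 100-300 kJ/mol are typical, and the illustrative numbers above land in that range.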
Potential energy function for CH3+CH3 ⇆ C2H6: Attributes of the minimum energy path
NASA Astrophysics Data System (ADS)
Robertson, S. H.; Wardlaw, D. M.; Hirst, D. M.
1993-11-01
The region of the potential energy surface for the title reaction in the vicinity of its minimum energy path has been predicted from the analysis of ab initio electronic energy calculations. The ab initio procedure employs a 6-31G** basis set and a configuration interaction calculation which uses the orbitals obtained in a generalized valence bond calculation. Calculated equilibrium properties of ethane and of the isolated methyl radical are compared to existing theoretical and experimental results. The reaction coordinate is represented by the carbon-carbon interatomic distance. The following attributes are reported as a function of this distance and fit to functional forms which smoothly interpolate between reactant and product values of each attribute: the minimum energy path potential, the minimum energy path geometry, normal mode frequencies for vibrational motion orthogonal to the reaction coordinate, a torsional potential, and a fundamental anharmonic frequency for local mode, out-of-plane CH3 bending (umbrella motion). The best representation is provided by a three-parameter modified Morse function for the minimum energy path potential and a two-parameter hyperbolic tangent switching function for all other attributes. A poorer but simpler representation, which may be satisfactory for selected applications, is provided by a standard Morse function and a one-parameter exponential switching function. Previous applications of the exponential switching function to estimate the reaction coordinate dependence of the frequencies and geometry of this system have assumed the same value of the range parameter α for each property and have taken α to be less than or equal to the "standard" value of 1.0 Å⁻¹. Based on the present analysis this is incorrect: the α values depend on the property and range from ~1.2 to ~1.8 Å⁻¹.
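The two simpler functional forms mentioned above can be sketched as follows; the parameter values are illustrative placeholders, not the paper's fitted ab initio values:

```python
import numpy as np

def morse(r, De=3.9, a=1.9, re=1.53):
    # Standard Morse function for the minimum energy path potential, with
    # V(re) = -De at the equilibrium C-C distance and V -> 0 as r -> infinity.
    # De (eV), a (1/Angstrom) and re (Angstrom) are illustrative placeholders.
    return De * (1.0 - np.exp(-a * (r - re)))**2 - De

def exp_switch(r, alpha=1.5, re=1.53):
    # One-parameter exponential switching function: an attribute F(r) is
    # interpolated as F_product + (F_reactant - F_product) * exp_switch(r),
    # equal to 1 at r = re and decaying to 0 as the fragments separate.
    return np.exp(-alpha * (r - re))
```

The paper's point is that a single α cannot serve all attributes: each frequency or geometric parameter relaxes toward its methyl-fragment value at its own rate, so α must be fitted per property.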
NASA Astrophysics Data System (ADS)
Chen, Zhongjing; Zhang, Xing; Pu, Yudong; Yan, Ji; Huang, Tianxuan; Jiang, Wei; Yu, Bo; Chen, Bolun; Tang, Qi; Song, Zifeng; Chen, Jiabin; Zhan, Xiayu; Liu, Zhongjie; Xie, Xufei; Jiang, Shaoen; Liu, Shenye
2018-02-01
The accuracy of the determination of the burn-averaged ion temperature of inertial confinement fusion implosions depends on the unfold process, including the deconvolution and convolution methods, and on the function, i.e., the detector response, used to fit the signals measured by neutron time-of-flight (nToF) detectors. The function given by Murphy et al. [Rev. Sci. Instrum. 68(1), 610-613 (1997)] has been widely used at Nova, Omega, and NIF. It has two components, fast and slow, and the contribution of scattered neutrons has not been dedicatedly considered. In this work, a new function based on Murphy's function has been employed to unfold nToF signals. The contribution of scattered neutrons is easily included via the convolution of a Gaussian response function and an exponential decay. The ion temperature is measured by nToF with the new function. Good agreement with the ion temperature determined by the deconvolution method has been achieved.
Exponential localization of Wannier functions in insulators.
Brouder, Christian; Panati, Gianluca; Calandra, Matteo; Mourougane, Christophe; Marzari, Nicola
2007-01-26
The exponential localization of Wannier functions in two or three dimensions is proven for all insulators that display time-reversal symmetry, settling a long-standing conjecture. Our proof relies on the equivalence between the existence of analytic quasi-Bloch functions and the nullity of the Chern numbers (or of the Hall current) for the system under consideration. The same equivalence implies that Chern insulators cannot display exponentially localized Wannier functions. An explicit condition for the reality of the Wannier functions is identified.
NASA Astrophysics Data System (ADS)
Vaninsky, Alexander
2015-04-01
Defining the logarithmic function as a definite integral with a variable upper limit, an approach used by some popular calculus textbooks, is problematic. We discuss the disadvantages of such a definition and provide a way to fix the problem. We also consider a definition-based, rigorous derivation of the derivative of the exponential function that is easier, more intuitive, and complies with the standard definitions of the number e, the logarithmic, and the exponential functions.
Treatment of late time instabilities in finite-difference EMP scattering codes
NASA Astrophysics Data System (ADS)
Simpson, L. T.; Holland, R.; Arman, S.
1982-12-01
Constraints applicable to a finite difference mesh for solution of Maxwell's equations are defined. The equations are applied in the time domain for computing electromagnetic coupling to complex structures, e.g., rectangular, cylindrical, or spherical. In a spatially varying grid, the amplitude of high frequency waves grows exponentially through multiple reflections from the outer boundary in late-time solutions, and this numerical noise eventually exceeds the real signal. The correction technique employs an absorbing surface and a radiating boundary, along with tailored selection of the grid mesh size. High frequency noise is removed through use of a low-pass digital filter, a linear least squares fit is made to the low frequency filtered response, and the original, filtered, and fitted data are merged to preserve the high frequency early-time response.
Two solar proton fluence models based on ground level enhancement observations
NASA Astrophysics Data System (ADS)
Raukunen, Osku; Vainio, Rami; Tylka, Allan J.; Dietrich, William F.; Jiggens, Piers; Heynderickx, Daniel; Dierckxsens, Mark; Crosby, Norma; Ganse, Urs; Siipola, Robert
2018-01-01
Solar energetic particles (SEPs) constitute an important component of the radiation environment in interplanetary space. Accurate modeling of SEP events is crucial for the mitigation of radiation hazards in spacecraft design. In this study we present two new statistical models of high energy solar proton fluences based on ground level enhancement (GLE) observations during solar cycles 19-24. As the basis of our modeling, we utilize fits of a four-parameter double power law function (known as the Band function) to integral GLE fluence spectra in rigidity. In the first model, the integral and differential fluences for protons with energies between 10 MeV and 1 GeV are calculated using the fits, and the distributions of the fluences at certain energies are modeled with an exponentially cut-off power law function. In the second model, we use a more advanced methodology: by investigating the distributions and relationships of the spectral fit parameters we find that they can be modeled as two independent and two dependent variables. Therefore, instead of modeling the fluences separately at different energies, we can model the shape of the fluence spectrum. We present examples of modeling results and show that the two methodologies agree well except for a short mission duration (1 year) at low confidence level. We also show that there is a reasonable agreement between our models and three well-known solar proton models (JPL, ESP and SEPEM), despite the differences in both the modeling methodologies and the data used to construct the models.
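The Band-type double power law in rigidity described above can be sketched as follows; the normalization convention and the parameter values below are illustrative assumptions, not the fitted GLE values from the study.

```python
import numpy as np

def band_fluence(R, J0, g1, g2, R0):
    """Band-type double power law in rigidity R (GV): a low-rigidity power
    law with exponential rollover, joined continuously to a steeper
    high-rigidity power law at the break rigidity Rb = (g2 - g1) * R0."""
    Rb = (g2 - g1) * R0
    low = J0 * R**(-g1) * np.exp(-R / R0)
    high = J0 * R**(-g2) * Rb**(g2 - g1) * np.exp(g1 - g2)
    return np.where(R <= Rb, low, high)

# Continuity check at the break rigidity (hypothetical parameter values):
J0, g1, g2, R0 = 1e9, 1.0, 5.0, 0.3
Rb = (g2 - g1) * R0
eps = 1e-9
below = band_fluence(np.array([Rb - eps]), J0, g1, g2, R0)[0]
above = band_fluence(np.array([Rb + eps]), J0, g1, g2, R0)[0]
```

The prefactor of the high-rigidity branch is chosen precisely so that the two power laws meet continuously at the break, which is the defining property of the Band form.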
Elastic constants and dynamics in nematic liquid crystals
NASA Astrophysics Data System (ADS)
Humpert, Anja; Allen, Michael P.
2015-09-01
In this paper, we present molecular dynamics calculations of the Frank elastic constants, and associated time correlation functions, in nematic liquid crystals. We study two variants of the Gay-Berne potential, and use system sizes of half a million molecules, significantly larger than in previous studies of elastic behaviour. Equilibrium orientational fluctuations in reciprocal (k-) space were calculated, to determine the elastic constants by fitting at low |k|; our results indicate that small system size may be a source of inaccuracy in previous work. Furthermore, the dynamics of the Gay-Berne nematic were studied by calculating time correlation functions of components of the order tensor, together with associated components of the velocity field, for a set of wave vectors k. Confirming our earlier work, we found exponential decay for splay and twist correlations, and oscillatory exponential decay for the bend correlation. In this work, we confirm similar behaviour for the corresponding velocity components. In all cases, the decay rates, and oscillation frequencies, were found to be accurately proportional to k^2 for small k, as predicted by the equations of nematodynamics. However, the observation of oscillatory bend fluctuations, and corresponding oscillatory shear flow decay, is in contradiction to the usual assumptions appearing in the literature, and in standard texts. We discuss the advantages and drawbacks of using large systems in these calculations.
Hinrichs, Ruth; Frank, Paulo Ricardo Ost; Vasconcellos, M A Z
2017-03-01
Modifications of cotton and polyester textiles due to shots fired at short range were analyzed with a variable pressure scanning electron microscope (VP-SEM). Different mechanisms of fiber rupture as a function of fiber type and shooting distance were detected, namely fusing, melting, scorching, and mechanical breakage. To estimate the firing distance, the approximately exponential decay of GSR coverage as a function of radial distance from the entrance hole was determined from image analysis, instead of relying on chemical analysis with EDX, which is problematic in the VP-SEM. A set of backscattered electron images, with sufficient magnification to discriminate micrometer-wide GSR particles, was acquired at different radial distances from the entrance hole. The atomic number contrast between the GSR particles and the organic fibers allowed us to find a robust procedure to segment the micrographs into binary images, in which the white pixel count was attributed to GSR coverage. The decrease of the white pixel count followed an exponential decay, and the reciprocal of the decay constant, obtained from least-squares fitting of the coverage data, showed a linear dependence on the shooting distance.
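The decay-constant estimation step can be sketched numerically; the amplitude, decay constant, and noise level below are made-up stand-ins for real image-analysis data, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def coverage(r, A, lam):
    # GSR coverage (white-pixel fraction) vs. radial distance r from the hole
    return A * np.exp(-lam * r)

rng = np.random.default_rng(0)
r = np.linspace(0.5, 12.0, 40)
A_true, lam_true = 0.30, 0.45          # hypothetical "ground truth"
y = coverage(r, A_true, lam_true) + rng.normal(0.0, 0.002, r.size)

(A_fit, lam_fit), _ = curve_fit(coverage, r, y, p0=(0.2, 0.3))
# 1 / lam_fit is the quantity reported to vary linearly with shooting distance
```

In practice one would fit such a curve per image set and regress 1/lam_fit against known shooting distances to build the calibration.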
Trajectory prediction of saccadic eye movements using a compressed exponential model
Han, Peng; Saunders, Daniel R.; Woods, Russell L.; Luo, Gang
2013-01-01
Gaze-contingent display paradigms play an important role in vision research. The time delay due to data transmission from eye tracker to monitor may lead to a misalignment between the gaze direction and image manipulation during eye movements, and therefore compromise the contingency. We present a method to reduce this misalignment by using a compressed exponential function to model the trajectories of saccadic eye movements. Our algorithm was evaluated using experimental data from 1,212 saccades ranging from 3° to 30°, which were collected with an EyeLink 1000 and a Dual-Purkinje Image (DPI) eye tracker. The model fits eye displacement with a high agreement (R^2 > 0.96). When assuming a 10-millisecond time delay, prediction of 2D saccade trajectories using our model could reduce the misalignment by 30% to 60% with the EyeLink tracker and 20% to 40% with the DPI tracker for saccades larger than 8°. Because a certain number of samples are required for model fitting, the prediction did not offer improvement for most small saccades and the early stages of large saccades. Evaluation was also performed for a simulated 100-Hz gaze-contingent display using the prerecorded saccade data. With prediction, the percentage of misalignment larger than 2° dropped from 45% to 20% for EyeLink and 42% to 26% for DPI data. These results suggest that the saccade-prediction algorithm may help create more accurate gaze-contingent displays. PMID:23902753
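A compressed exponential trajectory model can be sketched as below. The exact parameterization used in the paper is not given in the abstract; the form here (amplitude times one minus a compressed exponential, with shape exponent beta > 1) and all numerical values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def saccade(t, amp, tau, beta):
    # eye displacement (deg) vs. time t (s); beta > 1 gives the
    # sigmoidal-onset, "compressed" exponential shape
    return amp * (1.0 - np.exp(-(t / tau) ** beta))

t = np.linspace(1e-3, 0.08, 60)
true = (12.0, 0.025, 1.8)              # hypothetical amplitude / time constants
rng = np.random.default_rng(1)
y = saccade(t, *true) + rng.normal(0.0, 0.05, t.size)

popt, _ = curve_fit(saccade, t, y, p0=(10.0, 0.03, 1.5),
                    bounds=([1.0, 1e-3, 0.5], [50.0, 0.5, 4.0]))
resid = y - saccade(t, *popt)
r2 = 1.0 - resid.var() / y.var()
```

Once the three parameters are estimated from the early samples of a saccade, the fitted curve can be extrapolated one transmission delay ahead to place the gaze-contingent manipulation.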
NASA Technical Reports Server (NTRS)
Atwell, William; Tylka, Allan; Dietrich, William; Badavi, Francis; Rojdev, Kristina
2011-01-01
Several methods for analyzing the particle spectra from extremely large solar proton events, called Ground-Level Enhancements (GLEs), have been developed and utilized by the scientific community to describe the solar proton energy spectra and have been further applied to ascertain the radiation exposures to humans and radio-sensitive systems, namely electronics. In this paper 12 GLEs dating back to 1956 are discussed, and three methods for describing the solar proton energy spectra are reviewed. The three spectral fitting methodologies are EXP [an exponential in proton rigidity (R)], WEIB [Weibull fit: an exponential in proton energy], and the Band function (BAND) [a double power law in proton rigidity]. The EXP and WEIB methods use low energy (MeV) GLE solar proton data and make extrapolations out to approximately 1 GeV. On the other hand, the BAND method utilizes low- and medium-energy satellite solar proton data combined with high-energy solar proton data deduced from high-latitude neutron monitoring stations. Thus, the BAND method completely describes the entire proton energy spectrum based on actual solar proton observations out to 10 GeV. Using the differential spectra produced from each of the 12 selected GLEs for each of the three methods, radiation exposures are presented and discussed in detail. These radiation exposures are then compared with the current 30-day and annual crew exposure limits and the radiation effects to electronics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hounkonnou, Mahouton Norbert; Nkouankam, Elvis Benzo Ngompe
2010-10-15
From the realization of q-oscillator algebra in terms of generalized derivative, we compute the matrix elements from deformed exponential functions and deduce generating functions associated with Rogers-Szego polynomials as well as their relevant properties. We also compute the matrix elements associated with the (p,q)-oscillator algebra (a generalization of the q-one) and perform the Fourier-Gauss transform of a generalization of the deformed exponential functions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freitas, R. J.; Shimakawa, K.; Department of Electrical and Electronic Engineering, Gifu University, Gifu 501-1193
The article discusses the dynamics of photoinduced defect creation (PDC) in amorphous chalcogenides, which is described by the stretched exponential function (SEF), while the well-known photodarkening (PD) and photoinduced volume expansion (PVE) are governed only by the exponential function. It is shown that an exponential distribution of the thermal activation barrier produces the SEF in PDC, suggesting that thermal energy, as well as photon energy, is incorporated in PDC mechanisms. The differences in dynamics among the three major photoinduced effects (PD, PVE, and PDC) in amorphous chalcogenides are now well understood.
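The link between an exponential distribution of activation barriers and stretched exponential (slower-than-exponential) decay can be illustrated numerically; the attempt frequency, temperature, and barrier scale below are arbitrary assumptions, not values from the article.

```python
import numpy as np
from scipy.optimize import curve_fit

# Each site decays at rate lam = nu * exp(-E / kT); drawing barriers E from
# an exponential distribution p(E) ~ exp(-E / E0) makes the ensemble-averaged
# survival decay slower than any single exponential.
rng = np.random.default_rng(0)
nu, kT, E0 = 1.0, 1.0, 2.0             # hypothetical parameters
E = rng.exponential(E0, 20_000)
lam = nu * np.exp(-E / kT)

t = np.logspace(-1, 2, 60)
survival = np.exp(-np.outer(t, lam)).mean(axis=1)
single = np.exp(-lam.mean() * t)       # single exponential with the mean rate

def sef(t, tau, beta):
    # stretched exponential function, beta < 1 for sub-exponential decay
    return np.exp(-(t / tau) ** beta)

(tau_fit, beta_fit), _ = curve_fit(sef, t, survival, p0=(1.0, 0.5),
                                   bounds=([1e-3, 0.05], [1e3, 2.0]))
```

By Jensen's inequality the averaged survival always lies above the single exponential with the same mean rate, and a stretched exponential fit to it returns a shape exponent beta below 1, mirroring the SEF behavior the article attributes to the barrier distribution.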
NASA Astrophysics Data System (ADS)
Nigmatullin, Raoul R.; Maione, Guido; Lino, Paolo; Saponaro, Fabrizio; Zhang, Wei
2017-01-01
In this paper, we suggest a general theory that enables one to describe experiments associated with reproducible or quasi-reproducible data reflecting the dynamical and self-similar properties of a wide class of complex systems. By a complex system we understand a system for which a model based on microscopic principles and suppositions about the nature of the matter is absent. Such a microscopic model is usually determined as the "best fit" model. The behavior of the complex system relative to a control variable (time, frequency, wavelength, etc.) can be described in terms of the so-called intermediate model (IM). One can prove that the fitting parameters of the IM are associated with the amplitude-frequency response of the segment of the Prony series. The segment of the Prony series, including the set of the decomposition coefficients and the set of the exponential functions (with k = 1,2,…,K), is limited by the final mode K. The exponential functions of this decomposition depend on time and are found by the original algorithm described in the paper. This approach serves as a logical continuation of the results obtained earlier in the paper [Nigmatullin RR, Zhang W and Striccoli D. General theory of experiment containing reproducible data: The reduction to an ideal experiment. Commun Nonlinear Sci Numer Simul, 27, (2015), pp 175-192] for reproducible experiments and includes the previous results as a partial case. In this paper, we consider a more complex case, when the available data form short samplings or exhibit some instability during the process of measurement. We give some justified evidence and conditions proving the validity of this theory for the description of a wide class of complex systems in terms of the reduced set of the fitting parameters belonging to the segment of the Prony series. The elimination of uncontrollable factors expressed in the form of the apparatus function is discussed.
To illustrate how to apply the theory and take advantage of its benefits, we consider the experimental data associated with typical working conditions of the injection system in a common rail diesel engine. In particular, the flow rate of the injected fuel is considered at different reference rail pressures. The measured data are treated by the proposed algorithm to verify the adherence to the proposed general theory. The obtained results demonstrate the undoubted effectiveness of the proposed theory.
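The core idea of a Prony-series segment — representing uniformly sampled data as a finite sum of exponentials — can be sketched with the classical two-stage Prony estimator (linear prediction for the exponents, then linear least squares for the amplitudes). This is a generic textbook version, not the authors' original algorithm.

```python
import numpy as np

def prony(y, K):
    """Fit y[n] ~ sum_k c_k * mu_k**n for k = 1..K (uniform sampling)."""
    N = len(y)
    # Stage 1: linear prediction y[n+K] = sum_j a_j * y[n+j]
    A = np.column_stack([y[j:N - K + j] for j in range(K)])
    a = np.linalg.lstsq(A, y[K:], rcond=None)[0]
    # exponents are roots of z^K - a_{K-1} z^{K-1} - ... - a_0
    mu = np.roots(np.concatenate(([1.0], -a[::-1])))
    # Stage 2: amplitudes from the Vandermonde system V[n, k] = mu_k**n
    V = mu[None, :] ** np.arange(N)[:, None]
    c = np.linalg.lstsq(V, y, rcond=None)[0]
    return mu, c

n = np.arange(24)
y = 2.0 * 0.9**n + 1.0 * 0.5**n        # two-mode "Prony segment" (K = 2)
mu, c = prony(y, 2)
```

On noise-free data both decay ratios (0.9 and 0.5) and both amplitudes (2 and 1) are recovered essentially exactly; on noisy or short samplings, which is the regime the paper addresses, this naive estimator degrades and more robust procedures are needed.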
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.
Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. Here we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, and they depend solely on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with a maximum relative approximation error of less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for the development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.
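The early-time/late-time structure can be illustrated with the classical plane-sheet (slab) uptake approximations. These standard textbook forms are stand-ins for the paper's fitted three-term polynomials, used here only to show how closely the two regimes meet near a switchover time of about 0.2.

```python
import numpy as np

# Fractional diffusive uptake F(td) of a slab, with td = D * t / l**2:
def early(td):
    # small-time branch: square-root-of-time growth
    return 2.0 * np.sqrt(td / np.pi)

def late(td):
    # large-time branch: leading exponential mode only
    return 1.0 - (8.0 / np.pi**2) * np.exp(-np.pi**2 * td / 4.0)

ts = 0.2               # a switchover time inside the reported 0.157-0.229 range
gap = abs(early(ts) - late(ts))
```

At td = 0.2 the two branches differ by well under one percent of the total uptake, which is why a single switchover time with a small matching error is possible.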
Discounting of reward sequences: a test of competing formal models of hyperbolic discounting
Zarr, Noah; Alexander, William H.; Brown, Joshua W.
2014-01-01
Humans are known to discount future rewards hyperbolically in time. Nevertheless, a formal recursive model of hyperbolic discounting has been elusive until recently, with the introduction of the hyperbolically discounted temporal difference (HDTD) model. Prior to that, models of learning (especially reinforcement learning) have relied on exponential discounting, which generally provides poorer fits to behavioral data. Recently, it has been shown that hyperbolic discounting can also be approximated by a summed distribution of exponentially discounted values, instantiated in the μAgents model. The HDTD model and the μAgents model differ in one key respect, namely how they treat sequences of rewards. The μAgents model is a particular implementation of a Parallel discounting model, which values sequences based on the summed value of the individual rewards whereas the HDTD model contains a non-linear interaction. To discriminate among these models, we observed how subjects discounted a sequence of three rewards, and then we tested how well each candidate model fit the subject data. The results show that the Parallel model generally provides a better fit to the human data. PMID:24639662
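The claim that hyperbolic discounting can be approximated by a summed distribution of exponentially discounted values has a simple illustration: averaging exponential discount factors exp(-s*k*t) over rates s drawn from a unit exponential reproduces 1/(1 + k*t) exactly in expectation. A Monte Carlo sketch (the discount-rate scale is an arbitrary assumption, and this is the mixture idea generally, not the μAgents implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 0.5                                    # assumed discount-rate scale
s = rng.exponential(1.0, 200_000)          # per-"agent" rate multipliers

t = np.array([1.0, 4.0, 10.0])             # delays to evaluate
mixture = np.exp(-np.outer(t, s) * k).mean(axis=1)   # mean of exponential discounters
hyperbolic = 1.0 / (1.0 + k * t)           # standard hyperbolic discount factor
```

The identity behind this is 1/(1 + k*t) = ∫ exp(-s) exp(-s*k*t) ds, so the Monte Carlo average converges to the hyperbolic curve as the number of sampled rates grows.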
NASA Astrophysics Data System (ADS)
Féry, C.; Racine, B.; Vaufrey, D.; Doyeux, H.; Cinà, S.
2005-11-01
The main process responsible for the luminance degradation in organic light-emitting diodes (OLEDs) driven under constant current has not yet been identified. In this paper, we propose an approach to describe the intrinsic mechanisms involved in OLED aging. We first show that a stretched exponential decay can be used to fit almost all the luminance versus time curves obtained under different driving conditions. In this way, we are able to prove that they can all be described by a single-free-parameter model. By using an approach based on local relaxation events, we demonstrate that a single mechanism is responsible for the dominant aging process, and that the main relaxation event is the annihilation of one emissive center. We then use our model to fit all the experimental data measured under different driving conditions, and show that by carefully fitting the accelerated luminance lifetime curves, we can extrapolate the low-luminance lifetime needed for real display applications with a high degree of accuracy.
Boy, Nikolas; Heringer, Jana; Haege, Gisela; Glahn, Esther M; Hoffmann, Georg F; Garbade, Sven F; Kölker, Stefan; Burgard, Peter
2015-12-22
Glutaric aciduria type I (GA-I) is an inherited metabolic disease due to deficiency of glutaryl-CoA dehydrogenase (GCDH). Cognitive functions are generally thought to be spared, but have not yet been studied in detail. Thirty patients detected by newborn screening (n = 13), high-risk screening (n = 3) or targeted metabolic testing (n = 14) were studied for simple reaction time (SRT), continuous performance (CP), visual working memory (VWM), visual-motor coordination (Tracking) and visual search (VS). Dystonia (n = 13 patients) was categorized using the Barry-Albright-Dystonia Scale (BADS). Patients were compared with 196 healthy controls. Developmental functions of cognitive performances were analysed using a negative exponential function model. BADS scores correlated with speed tests but not with tests measuring stability or higher cognitive functions without time constraints. Developmental functions of GA-I patients significantly differed from controls for SRT and VS but not for VWM and showed obvious trends for CP and Tracking. Dystonic patients were slower in SRT and CP but reached their asymptote of performance similar to asymptomatic patients and controls in all tests. Asymptomatic patients did not differ from controls, except showing significantly better results in Tracking and a trend for slower reactions in visual search. Data across all age groups of patients and controls fitted well to a model of negative exponential development. Dystonic patients predominantly showed motor speed impairment, whereas performance improved with higher cognitive load. Patients without motor symptoms did not differ from controls. Developmental functions of cognitive performances were similar in patients and controls. Performance in tests with higher cognitive demand might be preserved in GA-I, even in patients with striatal degeneration.
Niebuhr, Tobias; Junge, Mirko; Achmus, Stefanie
2013-01-01
Injury risk assessment plays a pivotal role in the assessment of the effectiveness of Advanced Driver Assistance Systems (ADAS), as it specifies the injury reduction potential of the system. The usual way to describe injury risks is by use of injury risk functions, i.e., specifying the probability of an injury of a given severity occurring at a specific technical accident severity (collision speed). A method for the generation of a family of risk functions for different levels of injury severity is developed. The injury severity levels are determined by use of a rescaled version of the Injury Severity Score (ISS), namely the ISSx. The injury risk curves for each collision speed are then obtained by fixing the boundary conditions and using a case-by-case validated GIDAS subset of pedestrian-car accidents (N=852). The resultant functions are of exponential form, as opposed to the frequently used logistic regression form. The exponential approach in combination with the critical speed value creates a new injury risk pattern better suited to high speed/high energy crashes. Presented is a family of pedestrian injury risk functions for an arbitrary injury severity. Thus, the effectiveness of an ADAS can be assessed for mitigation of different injury severities using the same injury risk function, relying on the internal soundness of the risk function with regard to different injury severity levels. For the assessment of emergency braking ADAS, a Zone of Effective Endangerment Increase (ZEEI), the speed interval in which a one percent speed increase results in at least a one percent injury risk increase, is defined. The methodology presented is kept in such general terms that a direct adaptation to other accident configurations is easily done. PMID:24406954
NASA Astrophysics Data System (ADS)
Sonam, Sonam; Jain, Vikrant
2017-04-01
River long profile is one of the fundamental geomorphic parameters which provides a platform to study the interaction of geological and geomorphic processes at different time scales. Long profile shape is governed by geological processes at the 10^5-10^6 year time scale, and it controls modern day (10^0-10^1 year time scale) fluvial processes by controlling the spatial variability of channel slope. Identification of an appropriate model for the river long profile may provide a tool to analyse the quantitative relationship between basin geology, profile shape and its geomorphic effectiveness. A systematic analysis of long profiles has been carried out for the Himalayan tributaries of the Ganga River basin. Long profile shape and stream power distribution pattern are derived using SRTM DEM data (90 m spatial resolution). Peak discharge data from 34 stations are used for hydrological analysis. Lithological variability and major thrusts are marked along the river long profile. The best fit of the long profile is analysed for power, logarithmic and exponential functions. A second order exponential function provides the best representation of long profiles. The second order exponential equation is Z = K1*exp(-β1*L) + K2*exp(-β2*L), where Z is the elevation of the channel long profile, L is the length, and K and β are coefficients of the exponential function. K1 and K2 are the proportions of elevation change of the long profile represented by the β1 (fast) and β2 (slow) decay coefficients of the river long profile. Different values of the coefficients express the variability in long profile shapes and are related to the litho-tectonic variability of the study area. Channel slope is estimated by taking the derivative of the exponential function. Stream power distribution pattern along the long profile is estimated by superimposing the discharge on the long profile slope.
Sensitivity analysis of the stream power distribution with respect to the decay coefficients of the second order exponential equation is evaluated for a range of coefficient values. Our analysis suggests that the amplitude of the stream power peak is dependent on K1, the proportion of elevation change coming under the fast decay exponent, and the location of the stream power peak is dependent on the long profile decay coefficient (β1). Different long profile shapes owing to litho-tectonic variability across the Himalayas are responsible for the spatial variability of the stream power distribution pattern. Most of the stream power peaks lie in the Higher Himalaya. In general, eastern rivers have higher stream power in the hinterland area and low stream power in the alluvial plains. This is responsible for (1) higher erosion rates and sediment supply in the hinterland of eastern rivers, (2) the incised and stable nature of channels in the western alluvial plains and (3) aggrading channels with a dynamic nature in the eastern alluvial plains. Our study shows that the spatial variability of litho-units defines the coefficients of the long profile function, which in turn controls the position and magnitude of the stream power maxima and hence the geomorphic variability in a fluvial system.
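The slope used for stream power follows directly from differentiating the second-order exponential profile. A minimal sketch, with entirely hypothetical coefficients and a constant discharge (the study uses station discharge data):

```python
import numpy as np

K1, b1 = 3000.0, 1.0e-2     # fast-decay term (elevation in m, decay in 1/km)
K2, b2 = 2000.0, 1.0e-3     # slow-decay term (assumed values)
L = np.linspace(0.0, 1000.0, 2001)        # downstream distance (km)

Z = K1 * np.exp(-b1 * L) + K2 * np.exp(-b2 * L)            # long profile
S = K1 * b1 * np.exp(-b1 * L) + K2 * b2 * np.exp(-b2 * L)  # slope = -dZ/dL

# Stream power per unit channel length: Omega = rho * g * Q * S
rho, g, Q = 1000.0, 9.81, 500.0           # water density, gravity, discharge
omega = rho * g * Q * S

S_num = -np.gradient(Z, L)                # numerical check of the analytic slope
```

With constant Q this profile gives monotonically decreasing stream power; in the study the downstream growth of discharge works against the decaying slope, which is what produces an interior stream power peak.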
Exponential 6 parameterization for the JCZ3-EOS
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGee, B.C.; Hobbs, M.L.; Baer, M.R.
1998-07-01
A database has been created for use with the Jacobs-Cowperthwaite-Zwisler-3 equation-of-state (JCZ3-EOS) to determine thermochemical equilibrium for detonation and expansion states of energetic materials. The JCZ3-EOS uses the exponential 6 intermolecular potential function to describe interactions between molecules. All product species are characterized by r*, the radius of the minimum pair potential energy, and ε/k, the well depth energy normalized by Boltzmann's constant. These parameters constitute the JCZS (S for Sandia) EOS database describing 750 gases (including all the gases in the JANNAF tables), and have been obtained by using Lennard-Jones potential parameters, a corresponding states theory, pure liquid shock Hugoniot data, and fit values using an empirical EOS. This database can be used with the CHEETAH 1.40 or CHEETAH 2.0 interface to the TIGER computer program that predicts the equilibrium state of gas- and condensed-phase product species. The large JCZS-EOS database permits intermolecular-potential-based equilibrium calculations of energetic materials with complex elemental composition.
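The exponential 6 pair potential parameterized by r* and a well depth ε can be sketched in the common Buckingham-style form; the stiffness parameter alpha and the numeric values below are illustrative assumptions, not JCZS database entries.

```python
import numpy as np

def exp6(r, eps, rstar, alpha):
    """Exponential-6 potential: exponential repulsion plus r^-6 attraction,
    normalized so the minimum sits at r = rstar with depth -eps."""
    return (eps / (alpha - 6.0)) * (
        6.0 * np.exp(alpha * (1.0 - r / rstar)) - alpha * (rstar / r) ** 6
    )

eps, rstar, alpha = 1.0, 3.8, 13.0    # assumed parameters (eps in energy units)
r = np.linspace(3.0, 8.0, 5001)
U = exp6(r, eps, rstar, alpha)
r_min = r[np.argmin(U)]               # location of the potential minimum
```

The normalization makes the two database parameters directly interpretable: the minimum of the curve falls at r* and its depth equals the well depth, which is what the ε/k tabulation (well depth over Boltzmann's constant) encodes.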
Squared exponential covariance function for prediction of hydrocarbon in seabed logging application
NASA Astrophysics Data System (ADS)
Mukhtar, Siti Mariam; Daud, Hanita; Dass, Sarat Chandra
2016-11-01
Seabed logging (SBL) technology has progressively emerged as one of the most in-demand technologies in the exploration and production (E&P) industry. Hydrocarbon prediction in deep water areas is a crucial task for drillers in any oil and gas company, as drilling is very expensive. Simulation data generated by Computer Software Technology (CST) is used to predict the presence of hydrocarbon, where the models replicate the real SBL environment. These models indicate that hydrocarbon-filled reservoirs are more resistive than the surrounding water-filled sediments. As hydrocarbon depth increases, however, it becomes more challenging to differentiate data with and without hydrocarbon. MATLAB is used for data extraction and curve fitting using Gaussian processes (GP). GP methods can be divided into regression and classification problems; this work focuses only on Gaussian process regression (GPR). The most popular covariance function for GPR is the squared exponential (SE), as it provides stable, probabilistic predictions for large amounts of data. Hence, SE is used to predict the presence or absence of hydrocarbon in the reservoir from the generated data.
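A minimal sketch of GP regression with the squared exponential covariance, written in Python rather than the MATLAB used in the study; the hyperparameters and the sine toy data are assumptions, not SBL responses.

```python
import numpy as np

def se_kernel(xa, xb, sf=1.0, ell=1.0):
    # squared exponential covariance: k(x, x') = sf^2 * exp(-(x - x')^2 / (2 ell^2))
    d = xa[:, None] - xb[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 6.0, 25)
y = np.sin(x) + rng.normal(0.0, 0.01, x.size)   # noisy training targets

sn = 0.01                                       # observation-noise std. dev.
K = se_kernel(x, x) + sn**2 * np.eye(x.size)
alpha = np.linalg.solve(K, y)

xs = np.array([2.0, 4.5])                       # test inputs
mean = se_kernel(xs, x) @ alpha                 # GPR posterior mean
```

The posterior mean interpolates the noisy observations smoothly; in the SBL application the same machinery is fitted to extracted resistivity-response curves rather than a sine.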
On the Matrix Exponential Function
ERIC Educational Resources Information Center
Hou, Shui-Hung; Hou, Edwin; Pang, Wan-Kai
2006-01-01
A novel and simple formula for computing the matrix exponential function is presented. Specifically, it can be used to derive explicit formulas for the matrix exponential of a general matrix A satisfying p(A) = 0 for a polynomial p(s). It is ready for use in a classroom and suitable for both hand and symbolic computation.
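The spirit of such formulas — once p(A) = 0, the matrix exponential collapses to a low-degree polynomial in A — can be illustrated for a matrix with distinct eigenvalues. This Lagrange-interpolation route is a standard consequence of Cayley-Hamilton, not necessarily the paper's exact formula.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
lam = np.linalg.eigvals(A)

# Since the characteristic polynomial annihilates A, e^A = a*I + b*A with
# a + b*lam_i = exp(lam_i) for each eigenvalue (distinct-eigenvalue case).
a, b = np.linalg.solve(np.column_stack([np.ones(2), lam]), np.exp(lam))
expA_poly = a * np.eye(2) + b * A

# Reference: truncated power series sum_k A^k / k!
term, expA_series = np.eye(2), np.eye(2)
for k in range(1, 30):
    term = term @ A / k
    expA_series = expA_series + term
```

For an n x n matrix the same idea yields e^A as a polynomial of degree at most n-1 in A, with coefficients obtained from the (possibly confluent) interpolation conditions at the eigenvalues.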
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang; Solomonoff, Alex; Vandeven, Herve
1992-01-01
It is well known that the Fourier series of an analytic and periodic function, truncated after 2N+1 terms, converges exponentially with N, even in the maximum norm. If the function is not periodic, however, convergence in the maximum norm is lost even though the function is still analytic; this is known as the Gibbs phenomenon. Here, we show that the first 2N+1 Fourier coefficients contain enough information about the function, so that an exponentially convergent approximation (in the maximum norm) can be constructed.
Aging Effects on Microstructure and Creep in Sn-3.8Ag-0.7Cu Solder
2007-09-01
Thesis by Orlando Cornejo, September 2007; Thesis Advisor: Indranath Dutta. It is demonstrated that the primary creep data for ball joints can be fitted well to an exponential law. Fit parameters for the tests accomplished at 250C...
Rainfall continuous time stochastic simulation for a wet climate in the Cantabric Coast
NASA Astrophysics Data System (ADS)
Rebole, Juan P.; Lopez, Jose J.; Garcia-Guzman, Adela
2010-05-01
Rain is the result of a series of complex atmospheric processes influenced by numerous factors. This complexity makes simulation from a physical basis practically unfeasible, advising the use of stochastic schemes. These schemes, which are based on observed characteristics (Todorovic and Woolhiser, 1975), allow the introduction of renewal alternating processes that account for the occurrence of rainfall over different time lapses (Markov chains are a particular case, in which lapses can be described by exponential distributions). Thus, a sequential rainfall process can be defined as a temporal series in which rainfall events (periods in which rainfall is recorded) alternate with non-rain events (periods in which no rainfall is recorded). The variables of a temporal rain sequence have been characterized (duration of the rainfall event, duration of the non-rainfall event, average intensity of the rain in the rain event, and a temporal distribution of the amount of rain in the rain event) in a wet climate such as that of the coastal area of Guipúzcoa. The study was performed on two series recorded at the meteorological stations of Igueldo-San Sebastián and Fuenterrabia/Airport (data every ten minutes and their hourly aggregation). As a result of this work, the variables were satisfactorily fitted by the following distribution functions: the duration of the rain event by an exponential function; the duration of the dry event by a truncated mixed exponential distribution; the average intensity by a Weibull distribution; and the distribution of the rain fallen by a Beta distribution. The characterization was made for an hourly aggregation of the recorded ten-minute intervals. The parameters of the fitting functions were better obtained by the maximum likelihood method than by the method of moments.
The parameters obtained from the characterization were used to develop a stochastic rainfall process simulation model by means of a three-state Markov chain (Hutchinson, 1990), performed on an hourly basis by García-Guzmán (1993) and Castro et al. (1997, 2005). Simulation results were valid in the hourly case for all four described variables, with a better response in Fuenterrabia than in Igueldo for every variable. The Fuenterrabia data series is shorter and has longer sequences without missing data than the Igueldo series, which shows a higher number of missing-data events, although their mean duration is longer in Fuenterrabia.
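The maximum-likelihood fitting step for the Weibull-distributed mean intensity can be sketched as follows; the shape and scale parameters are invented, not the Guipúzcoa fits.

```python
import numpy as np
from scipy import stats

shape_true, scale_true = 1.5, 2.0      # assumed Weibull parameters (mm/h scale)
data = stats.weibull_min.rvs(shape_true, scale=scale_true,
                             size=5000, random_state=0)

# Maximum-likelihood fit with the location fixed at zero, as is natural
# for a non-negative rain intensity.
shape_ml, loc, scale_ml = stats.weibull_min.fit(data, floc=0)
```

Unlike the exponential case (where moment and ML estimates coincide), the Weibull shape parameter genuinely benefits from maximum likelihood, which is consistent with the abstract's observation that the ML method outperformed the method of moments.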
NASA Astrophysics Data System (ADS)
Becker, M. D.; Wang, Y.; Englehart, J.; Pennell, K. D.; Abriola, L. M.
2010-12-01
As manufactured nanomaterials become more prevalent in commercial and industrial applications, the development of mathematical models capable of predicting nanomaterial transport and retention in subsurface systems is crucial to assessing their fate and distribution in the environment. A systematic modeling approach based on a modification of clean-bed filtration theory was undertaken to elucidate mechanisms governing the transport and deposition behavior of quantum dots in saturated quartz sand as a function of grain size and flow velocity. The traditional deposition governing equation, which assumes irreversible attachment by a first-order rate (katt), was modified to include a maximum or limiting retention capacity (Smax) and first-order detachment of particles from the solid phase (kdet). Quantum dot mobility experiments were performed in columns packed with three size fractions of Ottawa sand (d50 = 125, 165, and 335 μm) at two different pore-water velocities (0.8 m/d and 7.6 m/d). The CdSe quantum dots, with a CdZnS shell and a polyacrylic acid coating, were negatively charged (zeta potential measured ca. -35 mV) with a hydrodynamic diameter of approximately 30 nm. Fitted values of katt, Smax, and kdet were obtained for each transport and deposition experiment through the implementation of a nonlinear least-squares routine developed to fit the model to experimental breakthrough and retention data via multivariate optimization. Fitted attachment rates and retention capacities increased exponentially with decreasing grain size at both flow rates, while no discernible trend was apparent for the fitted detachment rates. Maximum retention capacity values were plotted against a normalized mass flux expression, which accounts for flow conditions and grain size. A power function fit to the data yielded a dependence that was consistent with a previous study undertaken with fullerene nanoparticles.
NASA Astrophysics Data System (ADS)
Thomas, Marlon Sheldon
Bacterial infections continue to be one of the major health risks in the United States. The common occurrence of such infections is a major contributor to the high cost of health care and to significant patient mortality. The work presented in this thesis describes spectroscopic studies that will contribute to the development of a fluorescent assay that may allow the rapid identification of bacterial species. Herein, the optical interactions between six bacterial species and a series of thiacyanine dyes are investigated. The interactions between the dyes and the bacterial species are hypothesized to be species-specific. For this thesis, two Gram-negative strains, Escherichia coli (E. coli) TOP10 and Enterobacter aerogenes; two Gram-positive bacterial strains, Bacillus sphaericus and Bacillus subtilis; and two Bacillus endospores, B. globigii and B. thuringiensis, were used to test the proposed hypothesis. A series of three thiacyanine dyes, 3,3'-diethylthiacyanine iodide (THIA), 3,3'-diethylthiacarbocyanine iodide (THC), and thiazole orange (THO), were used as fluorescent probes. The basis of our spectroscopic study was to explore the interactions of the bacterial cells with the individual thiacyanine dyes or with a mixture of the three dyes. Steady-state absorption spectroscopy revealed that the different bacterial species altered the absorption properties of the dyes. Mixed-dye solutions gave unique absorption patterns for each bacterial species tested, with competitive binding observed between the bacteria and the spectrophotometric probes (thiacyanine dyes). Emission spectroscopy recorded changes in the emission spectra of THIA following the introduction of bacterial cells. Experimental results revealed that the emission enhancement of the dyes resulted from increases in the emission quantum yield of the thiacyanine dyes upon binding to bacterial cellular components.
The recorded emission-enhancement data were fitted to an exponential (mono-exponential or bi-exponential) function, and time constants were extracted by regression on the experimental data. The addition of the TWEEN surfactants decreased the rate at which the dyes interacted with the bacterial cells, which typically resulted in larger time constants derived from the exponential fit. An ANOVA of the time constants confirmed that their values clustered in a narrow range and were independent of dye concentration and only weakly dependent on cell density.
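A minimal sketch of this kind of time-constant extraction, fitting a bi-exponential rise to synthetic enhancement data (the amplitudes and time constants are invented for illustration, not the thesis data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic emission-enhancement trace rising with two time constants
# (5 s and 25 s here; purely illustrative values).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 120.0, 240)
y_true = 1.0 - 0.6 * np.exp(-t / 5.0) - 0.4 * np.exp(-t / 25.0)
y = y_true + rng.normal(0.0, 0.01, t.size)

def biexp_rise(t, a1, tau1, a2, tau2):
    """Bi-exponential rise toward a plateau normalized to 1."""
    return 1.0 - a1 * np.exp(-t / tau1) - a2 * np.exp(-t / tau2)

popt, _ = curve_fit(biexp_rise, t, y, p0=(0.5, 3.0, 0.5, 20.0))
tau_fast, tau_slow = sorted([popt[1], popt[3]])
print(f"time constants: {tau_fast:.1f} s and {tau_slow:.1f} s")
```

A slower interaction (e.g. with surfactant added) would show up directly as larger fitted `tau` values.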
Study on peak shape fitting method in radon progeny measurement.
Yang, Jinmin; Zhang, Lei; Abdumomin, Kadir; Tang, Yushi; Guo, Qiuju
2015-11-01
Alpha spectrum measurement is one of the most important methods for measuring radon progeny concentration in the environment. However, the accuracy of this method is affected by peak tailing due to the energy losses of alpha particles. This article presents a peak-shape fitting method that can overcome the peak-tailing problem in most situations. On a typical measured alpha spectrum, consecutive peaks overlap even when their energies are not close to each other, and it is difficult to calculate the exact count of each peak. The peak-shape fitting method uses a combination of Gaussian and exponential functions, which can capture the features of those peaks, to fit the measured curve. It provides the net counts of each peak explicitly, which are then used in the Kerr calculation procedure for radon progeny concentration measurement. The results show that the fitted curve agrees well with the measured curve and that the influence of peak tailing is reduced. The method was further validated by the agreement between radon equilibrium equivalent concentrations based on this method and the values measured by commercial radon monitors such as the EQF3220 and WLx. In addition, this method improves the accuracy of individual radon progeny concentration measurements: for the (218)Po peak in particular, eliminating the peak-tailing influence reduced the calculated (218)Po concentration by 21%. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
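A sketch of the peak-shape idea under stated assumptions: each alpha peak is modeled as an exponentially modified Gaussian with a low-energy tail (built here by mirroring `scipy.stats.exponnorm`), and two overlapping peaks are fitted jointly so that each fitted area plays the role of a net count. The peak energies, widths, and counts are illustrative, not the paper's spectra:

```python
import numpy as np
from scipy.stats import exponnorm
from scipy.optimize import curve_fit

def alpha_peak(e, area, mu, sigma, k):
    """Gaussian peak at mu with an exponential low-energy tail (mirrored EMG).

    The pdf integrates to 1, so `area` directly represents the net counts.
    """
    return area * exponnorm.pdf(-e, K=k, loc=-mu, scale=sigma)

rng = np.random.default_rng(2)
e = np.linspace(4.0, 8.0, 400)                      # energy axis, MeV
truth = (alpha_peak(e, 1000.0, 6.00, 0.05, 3.0)     # e.g. 218Po alpha line
         + alpha_peak(e, 400.0, 7.69, 0.05, 3.0))   # e.g. 214Po alpha line
y = rng.poisson(np.maximum(truth, 0.0)).astype(float)

def two_peaks(e, a1, m1, a2, m2, sigma, k):
    # Shared width/tail parameters, one area and position per peak.
    return alpha_peak(e, a1, m1, sigma, k) + alpha_peak(e, a2, m2, sigma, k)

p0 = (900.0, 6.0, 350.0, 7.7, 0.06, 2.5)
popt, _ = curve_fit(two_peaks, e, y, p0=p0, maxfev=5000)
print("net counts:", popt[0], popt[2])
```

Because the tails are part of the model, counts leaking from one peak under its neighbour are assigned back to the correct peak by the joint fit.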
Comparative Analyses of Creep Models of a Solid Propellant
NASA Astrophysics Data System (ADS)
Zhang, J. B.; Lu, B. J.; Gong, S. F.; Zhao, S. P.
2018-05-01
Creep experiments on solid propellant samples under five different stresses were carried out at 293.15 K and 323.15 K. To describe the creep properties of this solid propellant, several viscoelastic models were considered: the three-parameter solid, three-parameter fluid, four-parameter solid, and four-parameter fluid models, as well as an exponential model. The parameters of each model were obtained for every stress level by nonlinear least-squares fitting, and the fitted models were used to analyze the creep behaviour. The study shows that the four-parameter solid model best describes the creep properties of the propellant samples. The three-parameter solid and exponential models cannot reproduce the initial value of the creep process well, whereas the modified four-parameter models agree well with the acceleration characteristics of the creep process.
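As a sketch of the nonlinear least-squares step, the code below fits a Burgers-type four-parameter creep law (instantaneous strain, viscous flow, and delayed elasticity) to synthetic data; the parameter values are assumptions, not the propellant's:

```python
import numpy as np
from scipy.optimize import curve_fit

def creep_strain(t, eps0, rate, a, tau):
    """Four-parameter creep law: instantaneous strain eps0, steady viscous
    flow rate, and a delayed-elastic term with retardation time tau."""
    return eps0 + rate * t + a * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(8)
t = np.linspace(0.0, 3600.0, 120)                  # seconds
y = creep_strain(t, 2e-3, 1e-7, 4e-3, 600.0) * rng.normal(1.0, 0.01, t.size)

popt, _ = curve_fit(creep_strain, t, y, p0=(1e-3, 5e-8, 3e-3, 400.0))
print(f"retardation time: {popt[3]:.0f} s")
```

Repeating such a fit at each stress level and temperature yields the parameter tables on which the model comparison rests.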
NASA Astrophysics Data System (ADS)
Fontana, A.; Salimbeni, S.; Grazian, A.; Giallongo, E.; Pentericci, L.; Nonino, M.; Fontanot, F.; Menci, N.; Monaco, P.; Cristiani, S.; Vanzella, E.; de Santis, C.; Gallozzi, S.
2006-12-01
Aims: The goal of this work is to measure the evolution of the Galaxy Stellar Mass Function and of the resulting Stellar Mass Density up to redshift ≃4, in order to study the assembly of massive galaxies in the high-redshift Universe. Methods: We have used the GOODS-MUSIC catalog, containing 3000 Ks-selected galaxies with multi-wavelength coverage extending from the U band to the Spitzer 8 μm band, of which 27% have spectroscopic redshifts and the remaining fraction have accurate photometric redshifts. On this sample we applied a standard fitting procedure to measure stellar masses, and we computed the Galaxy Stellar Mass Function and the resulting Stellar Mass Density up to redshift ≃4, properly taking into account biases and incompleteness effects. Results: Within the well-known trend of global decline of the Stellar Mass Density with redshift, we show that the decline of the more massive galaxies may be described by an exponential timescale of ≃6 Gyr up to z ≃ 1.5, and proceeds much faster thereafter, with an exponential timescale of ≃0.6 Gyr. We also show that there is some evidence for a differential evolution of the Galaxy Stellar Mass Function, with low-mass galaxies evolving faster than more massive ones up to z ≃ 1-1.5, and that the Galaxy Stellar Mass Function remains remarkably flat (i.e. with a slope close to the local one) up to z ≃ 1-1.3. Conclusions: The observed behaviour of the Galaxy Stellar Mass Function is consistent with a scenario where about 50% of present-day massive galaxies formed at a vigorous rate in the epoch between redshift 4 and 1.5, followed by a milder evolution until the present-day epoch.
Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.
2016-01-01
Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
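The temporal-binning idea can be sketched with synthetic data: histogram a few hundred photon arrival times finely, merge adjacent time bins, and fit the coarsened decay. A mono-exponential decay and a 256-to-32 rebinning are used here purely for simplicity (the paper fits bi-exponential decays in SPCImage/SLIM Curve and bins to 42):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
# ~700 photon arrival times from a mono-exponential decay, tau = 2.5 ns
# (illustrative low-photon-count regime).
times = rng.exponential(2.5, 700)
times = times[times < 12.5]                        # finite measurement window

hist256, edges = np.histogram(times, bins=256, range=(0.0, 12.5))
centers256 = 0.5 * (edges[:-1] + edges[1:])

# Temporal binning: sum groups of adjacent time bins (256 -> 32 here,
# chosen so the 256 bins divide evenly).
g = 8
hist32 = hist256.reshape(-1, g).sum(axis=1)
centers32 = centers256.reshape(-1, g).mean(axis=1)

def monoexp(t, a, tau):
    return a * np.exp(-t / tau)

(a_fit, tau_fit), _ = curve_fit(monoexp, centers32, hist32,
                                p0=(float(hist32[0]), 1.0))
print(f"fitted tau: {tau_fit:.2f} ns")
```

Coarser bins trade temporal resolution for more counts per bin, which is what stabilizes the fit at low photon numbers.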
Nocturnal Dynamics of Sleep-Wake Transitions in Patients With Narcolepsy.
Zhang, Xiaozhe; Kantelhardt, Jan W; Dong, Xiao Song; Krefting, Dagmar; Li, Jing; Yan, Han; Pillmann, Frank; Fietze, Ingo; Penzel, Thomas; Zhao, Long; Han, Fang
2017-02-01
We investigate how characteristics of sleep-wake dynamics in humans are modified by narcolepsy, a clinical condition that is supposed to destabilize sleep-wake regulation. Subjects with and without cataplexy are considered separately. Differences in sleep scoring habits as a possible confounder have been examined. Four groups of subjects are considered: narcolepsy patients from China with (n = 88) and without (n = 15) cataplexy, healthy controls from China (n = 110) and from Europe (n = 187, 2 nights each). After sleep-stage scoring and calculation of sleep characteristic parameters, the distributions of wake-episode durations and sleep-episode durations are determined for each group and fitted by power laws (exponent α) and by exponentials (decay time τ). We find that wake duration distributions are consistent with power laws for healthy subjects (China: α = 0.88, Europe: α = 1.02). Wake durations in all groups of narcolepsy patients, however, follow the exponential law (τ = 6.2-8.1 min). All sleep duration distributions are best fitted by exponentials on long time scales (τ = 34-82 min). We conclude that narcolepsy mainly alters the control of wake-episode durations but not sleep-episode durations, irrespective of cataplexy. Observed distributions of shortest wake and sleep durations suggest that differences in scoring habits regarding the scoring of short-term sleep stages may notably influence the fitting parameters but do not affect the main conclusion. © Sleep Research Society 2016. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
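A sketch of the model comparison under stated assumptions: synthetic wake durations are drawn from an exponential, then both an exponential and a power-law (Pareto) model are fitted above the same cutoff and compared by log-likelihood. The time constant and cutoff are illustrative, not the clinical data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical wake-episode durations in minutes; exponential with tau = 7 min,
# in the spirit of the narcolepsy groups (tau = 6.2-8.1 min in the paper).
d = rng.exponential(7.0, 2000)
d = d[d > 0.5]                       # drop episodes below scoring resolution

# Exponential fit (MLE) and power-law (Pareto) fit above the same cutoff.
loc_e, tau = stats.expon.fit(d, floc=0.5)
alpha, loc_p, scale_p = stats.pareto.fit(d, floc=0, fscale=0.5)

ll_exp = stats.expon.logpdf(d, 0.5, tau).sum()
ll_pow = stats.pareto.logpdf(d, alpha, 0, 0.5).sum()
print("log-likelihood, exponential vs power law:", ll_exp, ll_pow)
```

For genuinely exponential data the exponential log-likelihood dominates; for healthy-control wake durations the power law would be expected to win instead.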
A hybrid MD-kMC algorithm for folding proteins in explicit solvent.
Peter, Emanuel Karl; Shea, Joan-Emma
2014-04-14
We present a novel hybrid MD-kMC algorithm that is capable of efficiently folding proteins in explicit solvent. We apply this algorithm to the folding of a small protein, Trp-cage. Different kMC move sets that capture different possible rate-limiting steps are implemented. The first uses secondary structure formation as the relevant rate event (a combination of dihedral rotations and hydrogen-bond formation and breakage). The second uses tertiary structure formation through formation of contacts via translational moves. Both methods fold the protein, but via different mechanisms and with different folding kinetics. The first method leads to folding via a structured helical state, with kinetics fit by a single exponential. The second method leads to folding via a collapsed loop, with kinetics poorly fit by single or double exponentials. In both cases, folding times are faster than experimentally reported values. The secondary and tertiary move sets are integrated in a third MD-kMC implementation, which now leads to folding of the protein via both pathways, with single- and double-exponential fits to the rates, and to folding rates in good agreement with experimental values. The competition between secondary and tertiary structure leads to a longer search for the helix-rich intermediate in the case of the first pathway, and to the emergence of a kinetically trapped, long-lived molten-globule collapsed state in the case of the second pathway. The algorithm presented not only captures experimentally observed folding intermediates and kinetics, but also yields insights into the relative roles of local and global interactions in determining folding mechanisms and rates.
NASA Astrophysics Data System (ADS)
Herbani, Y.; Nakamura, T.; Sato, S.
2017-04-01
This paper reports the synthesis of silver colloids by femtosecond laser ablation of an ammonia-containing AgNO3 solution. The effect of the ammonia concentration in solution on the production of Ag nanoparticles is discussed. Ammonia was found to play a decisive role in the formation of Ag nanoparticles: no nanoparticles were formed in the solution without ammonia. Using the solution with the optimum ratio of ammonia to Ag+ ions, we further investigated the growth of the Ag nanoparticles by monitoring the evolution of their absorption at 402 nm as a function of irradiation time. The growth process was well fitted by a simple exponential function, confirming that the addition of ammonia alone to the metal-ion system can boost particle production by the femtosecond laser.
Using Differentials to Differentiate Trigonometric and Exponential Functions
ERIC Educational Resources Information Center
Dray, Tevian
2013-01-01
Starting from geometric definitions, we show how differentials can be used to differentiate trigonometric and exponential functions without limits, numerical estimates, solutions of differential equations, or integration.
Wissmann, F; Reginatto, M; Möller, T
2010-09-01
The problem of finding a simple, generally applicable description of worldwide measured ambient dose equivalent rates at aviation altitudes between 8 and 12 km is difficult to solve due to the large variety of functional forms and parametrisations that are possible. We present an approach that uses Bayesian statistics and Monte Carlo methods to fit mathematical models to a large set of data and to compare the different models. About 2500 data points measured in the periods 1997-1999 and 2003-2006 were used. Since the data cover wide ranges of barometric altitude, vertical cut-off rigidity and phases in the solar cycle 23, we developed functions which depend on these three variables. Whereas the dependence on the vertical cut-off rigidity is described by an exponential, the dependences on barometric altitude and solar activity may be approximated by linear functions in the ranges under consideration. Therefore, a simple Taylor expansion was used to define different models and to investigate the relevance of the different expansion coefficients. With the method presented here, it is possible to obtain probability distributions for each expansion coefficient and thus to extract reliable uncertainties even for the dose rate evaluated. The resulting function agrees well with new measurements made at fixed geographic positions and during long haul flights covering a wide range of latitudes.
Role of blockages in particle transport through homogeneous granular assemblies
NASA Astrophysics Data System (ADS)
Tejada, I. G.; Sibille, L.; Chareyre, B.
2016-09-01
This letter deals with the transport of particles through granular assemblies and, specifically, with the intermittent formation of blockages originating from collective and purely mechanical clogging of constrictions. We perform numerical experiments with a micro-hydromechanical model that is able to reproduce the complex interplay between the carrier fluid, the transported particles, and the granular assembly. The probability distribution functions (PDFs) of the duration of blockages and of the displacements give the time scale on which the effect of blockages is erased and the advection-dispersion paradigm becomes valid. Our experiments show that these PDFs follow exponential laws, reinforcing the idea that the formation and destruction of blockages are homogeneous Poisson processes.
A quasi-likelihood approach to non-negative matrix factorization
Devarajan, Karthik; Cheung, Vincent C.K.
2017-01-01
A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
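As a concrete special case of this family, the classic multiplicative updates for NMF under the KL divergence (the Poisson-likelihood member of the exponential family, per Lee and Seung) can be sketched as follows; the matrix sizes and rank are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(9)
# Non-negative data matrix V (e.g. counts) and a rank-4 factorization V ~ W H.
V = rng.poisson(5.0, (30, 20)).astype(float) + 1e-9   # avoid log(0) below
k = 4
W = rng.random((30, k)) + 0.1
H = rng.random((k, 20)) + 0.1

def kl_div(V, WH):
    """Generalized KL divergence D(V || WH), the Poisson-likelihood loss."""
    return np.sum(V * np.log(V / WH) - V + WH)

losses = []
for _ in range(200):
    WH = W @ H
    H *= (W.T @ (V / WH)) / W.sum(axis=0)[:, None]    # multiplicative H step
    WH = W @ H
    W *= ((V / WH) @ H.T) / H.sum(axis=1)[None, :]    # multiplicative W step
    losses.append(kl_div(V, W @ H))
print("KL divergence:", losses[0], "->", losses[-1])
```

Each multiplicative step is a majorize-minimize (EM-style) update, so the loss is non-increasing, which mirrors the convergence argument in the paper.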
SMM observation of a cosmic gamma-ray burst from 20 keV to 100 MeV
NASA Technical Reports Server (NTRS)
Share, G. H.; Matz, S. M.; Messina, D. C.; Nolan, P. L.; Chupp, E. L.
1986-01-01
The Solar Maximum Mission gamma-ray spectrometer has detected an intense gamma-ray burst that occurred on August 5, 1984. The burst originated from a source in the constellation Hydra and lasted about 45 s. Its integral fluence at 20 keV was 0.003 erg/sq cm. Spectral evolution similar to other bursts detected by SMM was observed. The overall shape of the spectrum from 20 keV to 100 MeV, on timescales as short as 2 s, is relatively constant. This shape can be fitted by the sum of an exponential-type function and a power law. There is no evidence for narrow or broadened emission lines.
Photoluminescence study of MBE grown InGaN with intentional indium segregation
NASA Astrophysics Data System (ADS)
Cheung, Maurice C.; Namkoong, Gon; Chen, Fei; Furis, Madalina; Pudavar, Haridas E.; Cartwright, Alexander N.; Doolittle, W. Alan
2005-05-01
Proper control of MBE growth conditions has yielded an In0.13Ga0.87N thin film sample with emission consistent with In segregation. The photoluminescence (PL) from this epilayer showed multiple emission components. Moreover, temperature- and power-dependent studies of the PL demonstrated that two of the components were excitonic in nature and consistent with indium phase separation. At 15 K, time-resolved PL showed a non-exponential PL decay that was well fitted with the stretched-exponential solution expected for disordered systems. Consistent with the assumed carrier-hopping mechanism of this model, the effective lifetime, τ, and the stretched-exponential parameter, β, decrease with increasing emission energy. Finally, room-temperature micro-PL using a confocal microscope showed spatial clustering of low-energy emission.
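A stretched-exponential (Kohlrausch) decay of this kind, I(t) = I0 exp[-(t/τ)^β], can be fitted directly with nonlinear least squares; the lifetime and stretching exponent below are illustrative, not the InGaN measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, i0, tau, beta):
    """Kohlrausch decay I(t) = I0 * exp(-(t/tau)**beta), 0 < beta <= 1."""
    return i0 * np.exp(-(t / tau) ** beta)

rng = np.random.default_rng(5)
t = np.linspace(0.01, 20.0, 300)     # ns; start slightly above 0
y = stretched_exp(t, 1.0, 3.0, 0.6) + rng.normal(0.0, 0.005, t.size)

popt, _ = curve_fit(stretched_exp, t, y, p0=(1.0, 2.0, 0.8))
i0, tau, beta = popt
print(f"tau = {tau:.2f} ns, beta = {beta:.2f}")
```

Repeating the fit at several emission energies would reproduce the paper's observation as a decreasing trend in both fitted τ and β.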
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Gyekenyesi, John P.
2002-01-01
The life prediction analysis based on an exponential crack velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant stress-rate ("dynamic fatigue") and preload testing at ambient and elevated temperatures. The fit of the data to the strength versus ln(stress rate) relation was found to be very reasonable for most of the materials. It was also found that the preloading technique was equally applicable for the case of slow crack growth (SCG) parameter n > 30. The major limitation of the exponential crack velocity formulation, however, is that the inert strength of a material must be known a priori to evaluate the important SCG parameter n, a significant drawback compared with the conventional power-law crack velocity formulation.
ERIC Educational Resources Information Center
Syed, M. Qasim; Lovatt, Ian
2014-01-01
This paper is an addition to the series of papers on the exponential function begun by Albert Bartlett. In particular, we ask how the graph of the exponential function y = e^(-t/τ) would appear if y were plotted versus ln t rather than the normal practice of plotting ln y versus t. In answering this question, we find a new way to…
Measurements of strain at plate boundaries using space based geodetic techniques
NASA Technical Reports Server (NTRS)
Robaudo, Stefano; Harrison, Christopher G. A.
1993-01-01
We have used the space-based geodetic techniques of Satellite Laser Ranging (SLR) and VLBI to study strain along subduction and transform plate boundaries and have interpreted the results using a simple elastic dislocation model. Six stations located behind island arcs were analyzed as representative of subduction zones, while 13 sites located on either side of the San Andreas fault were used for the transcurrent zones. The deformation length scale was then calculated for both tectonic margins by fitting the relative strain to an exponentially decreasing function of distance from the plate boundary. Results show that the space-based data for the transcurrent boundary along the San Andreas fault help to better define the deformation length scale in the area and fit the elastic half-space Earth model nicely. For subduction-type boundaries, the analysis indicates that there is no single length scale that uniquely describes the deformation, mainly because of differences in subduction characteristics among the areas.
Compact exponential product formulas and operator functional derivative
NASA Astrophysics Data System (ADS)
Suzuki, Masuo
1997-02-01
A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians.
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.
2002-01-01
The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on advanced structural ceramics tested under constant stress and cyclic stress loading at ambient and elevated temperatures. The data fit to the relation between the time to failure and applied stress (or maximum applied stress in cyclic loading) was very reasonable for most of the materials studied. It was also found that life prediction for cyclic stress loading from data of constant stress loading in the exponential formulation was in good agreement with the experimental data, resulting in a similar degree of accuracy as compared with the power-law formulation. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important slow-crack-growth (SCG) parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.
Photovoltaics: Reviewing the European Feed-in-Tariffs and Changing PV Efficiencies and Costs
Zhang, H. L.; Van Gerven, T.; Baeyens, J.; Degrève, J.
2014-01-01
Feed-in-Tariff (FiT) mechanisms have been important in boosting renewable energy by providing a long-term guaranteed subsidy of the kWh price, thus mitigating investment risks and enhancing the contribution of sustainable electricity. With ongoing PV development, the contribution of solar power is increasing exponentially. Given this significant potential, it is important for investors, operators, and scientists alike to answer different questions related to subsidies, PV efficiencies, and costs. The present paper therefore (i) briefly reviews the mechanisms, advantages, and evolution of FiT; (ii) describes the developments of PV; (iii) applies a comprehensive literature-based model of solar irradiation to predict the PV solar energy potential in selected target European countries, comparing output predictions with the monthly measured electricity generation of a 57 m² photovoltaic system (Belgium); and finally (iv) predicts the levelized cost of energy (LCOE) in terms of investment and efficiency, providing LCOE values between 0.149 and 0.313 €/kWh as a function of the overall process efficiency and cost. The findings clearly demonstrate the potential of PV energy in Europe, where FiT can be considerably reduced or even eliminated in the near future. PMID:24959614
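A minimal LCOE sketch in the spirit of point (iv), discounting lifetime costs and energy yield; every input figure here is an assumption for illustration, not one of the paper's values:

```python
# Levelized cost of energy: discounted lifetime cost / discounted lifetime yield.
capex = 1500.0          # eur per kWp installed (assumed)
opex = 20.0             # eur per kWp per year (assumed)
yield_kwh = 950.0       # kWh per kWp per year, Belgian-type irradiation (assumed)
degradation = 0.005     # fractional PV output loss per year (assumed)
rate = 0.04             # discount rate (assumed)
years = 25              # system lifetime

cost = capex + sum(opex / (1 + rate) ** t for t in range(1, years + 1))
energy = sum(yield_kwh * (1 - degradation) ** t / (1 + rate) ** t
             for t in range(1, years + 1))
lcoe = cost / energy
print(f"LCOE: {lcoe:.3f} eur/kWh")
```

Varying the capex and yield inputs over plausible European ranges moves the result across the 0.149-0.313 €/kWh band reported in the paper.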
The PHAT and SPLASH Surveys: Rigorous Structural Decomposition of the Andromeda Galaxy
NASA Astrophysics Data System (ADS)
Dorman, Claire; Guhathakurta, P.; Widrow, L.; Foreman-Mackey, D.; Seth, A.; Dalcanton, J.; Gilbert, K.; Lang, D.; Williams, B. F.; SPLASH Team; PHAT Team
2013-01-01
Traditional surface brightness profile (SBP) based structural decompositions of late-type galaxies into Sersic bulge, exponential disk, and power-law halo are often degenerate in the best-fit profiles. The Andromeda galaxy (M31) is the only large spiral close enough that the relative contributions of the subcomponents can be further constrained via their distinct signatures in resolved stellar population surveys. We make use of two such surveys. The SPLASH program has used the Keck/DEIMOS multiobject spectrograph to measure radial velocities of over 10,000 individual red giant branch stars in the inner 20kpc of M31. The PHAT survey, an ongoing Hubble Space Telescope Multicycle Treasury program, has so far obtained six-filter photometry of over 90 million stars in the same region. We use an MCMC algorithm to simultaneously fit a simple bulge/disk/halo structural model to the SBP, the disk fraction as measured from kinematics, and the PHAT luminosity function. We find that the additional constraints favor a larger bulge than expected from a pure SBP fit. Comparison to galaxy formation models will constrain the formation histories of large spiral galaxies such as the Milky Way and Andromeda.
Revisiting Gaussian Process Regression Modeling for Localization in Wireless Sensor Networks
Richter, Philipp; Toledano-Ayala, Manuel
2015-01-01
Signal strength-based positioning in wireless sensor networks is a key technology for seamless, ubiquitous localization, especially in areas where Global Navigation Satellite System (GNSS) signals propagate poorly. To enable wireless local area network (WLAN) location fingerprinting in larger areas while maintaining accuracy, methods to reduce the effort of radio map creation must be consolidated and automatized. Gaussian process regression has been applied to overcome this issue, with promising results, but the fit of the model was never thoroughly assessed. Instead, most studies trained a readily available model, relying on the zero-mean and squared exponential covariance function, without further scrutiny. This paper studies Gaussian process regression model selection for WLAN fingerprinting in indoor and outdoor environments. We train several models for indoor, outdoor, and combined areas; we evaluate them quantitatively and compare them by means of adequate model measures, hence assessing the fit of these models directly. To illuminate the quality of the model fit, the residuals of the proposed model are investigated as well. Comparative experiments on the positioning performance verify and conclude the model selection. In this way, we show that the standard model is not the most appropriate, discuss alternatives, and present our best candidate. PMID:26370996
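The "standard" model the paper scrutinizes (zero mean, squared exponential covariance) can be sketched from scratch in a few lines of numpy; the path-loss data, kernel hyperparameters, and noise level below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical 1-D fingerprinting data: received signal strength (dBm) vs.
# distance (m), a log-distance path-loss trend plus shadow fading.
x = rng.uniform(1.0, 50.0, 80)
y = -40.0 - 20.0 * np.log10(x) + rng.normal(0.0, 2.0, 80)
y_mean = y.mean()                       # crude zero-mean-ing of the targets

def sq_exp(a, b, ell=8.0, var=60.0):
    """Squared-exponential covariance k(a, b) = var * exp(-(a-b)^2 / (2 ell^2))."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

noise = 4.0                             # observation noise variance, dBm^2
K = sq_exp(x, x) + noise * np.eye(x.size)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y - y_mean))

x_star = np.array([25.0])
k_star = sq_exp(x, x_star)
pred = y_mean + k_star.T @ alpha        # GP posterior mean at the test point
print(f"predicted RSS at 25 m: {pred[0]:.1f} dBm")
```

The paper's point is precisely that the fixed choices here (zero mean, one kernel family, hand-set hyperparameters) deserve scrutiny; alternative mean functions and kernels drop into the same posterior-mean formula.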
R-Function Relationships for Application in the Fractional Calculus
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Hartley, Tom T.
2000-01-01
The F-function, and its generalization the R-function, are of fundamental importance in the fractional calculus. It has been shown that the solution of the fundamental linear fractional differential equation may be expressed in terms of these functions. These functions serve as generalizations of the exponential function in the solution of fractional differential equations. Because of this central role in the fractional calculus, this paper explores various intrarelationships of the R-function, which will be useful in further analysis. Relationships of the R-function to the common exponential function, e^t, and its fractional derivatives are shown. From the relationships developed, some important approximations are observed. Further, the inverse relationships of the exponential function, e^t, in terms of the R-function are developed. Also, some approximations for the R-function are developed.
R-function relationships for application in the fractional calculus.
Lorenzo, Carl F; Hartley, Tom T
2008-01-01
The F-function, and its generalization the R-function, are of fundamental importance in the fractional calculus. It has been shown that the solution of the fundamental linear fractional differential equation may be expressed in terms of these functions. These functions serve as generalizations of the exponential function in the solution of fractional differential equations. Because of this central role in the fractional calculus, this paper explores various intrarelationships of the R-function, which will be useful in further analysis. Relationships of the R-function to the common exponential function, e^t, and its fractional derivatives are shown. From the relationships developed, some important approximations are observed. Further, the inverse relationships of the exponential function, e^t, in terms of the R-function are developed. Also, some approximations for the R-function are developed.
[Comparison among three translucency parameters].
Fang, Xiong; Hui, Xia
2017-06-01
This study compares three translucency parameters commonly used in prosthodontics: transmittance (T), contrast ratio (CR), and translucency parameter (TP). Six plate-shaped specimens were fabricated from Vita enamel dental porcelain. The initial thickness was 1.2 mm, and the specimens were gradually ground to 1.0, 0.8, 0.6, 0.4, and 0.2 mm. T, color parameters, and reflection were measured with a spectrocolorimeter at each thickness, and T, CR, and TP were calculated and compared. TP increased, whereas CR decreased, with decreasing thickness. T also increased with decreasing thickness, following exponential relationships. Two-way ANOVA showed statistically significant differences in T across thicknesses, except between the 1.2 mm and 1.0 mm enamel porcelain groups. No difference was found among the coefficients of variation (CV) of T, CR, and TP. Curve fitting indicated exponential relationships between T and CR and between T and TP, with goodness-of-fit values of 0.951 and 0.939, respectively (P<0.05). Under the experimental conditions, T, TP, and CR achieved the same CV, and T showed exponential relationships with both TP and CR. The values of CR and TP could not represent translucency precisely, especially when comparing rates of change.
Long-term radio and X-ray evolution of the tidal disruption event ASASSN-14li
NASA Astrophysics Data System (ADS)
Bright, J. S.; Fender, R. P.; Motta, S. E.; Mooley, K.; Perrott, Y. C.; van Velzen, S.; Carey, S.; Hickish, J.; Razavi-Ghods, N.; Titterington, D.; Scott, P.; Grainge, K.; Scaife, A.; Cantwell, T.; Rumsey, C.
2018-04-01
We report on late-time radio and X-ray observations of the tidal disruption event candidate ASASSN-14li, covering the first 1000 d of the decay phase. For the first ~200 d the radio and X-ray emission fade in concert. This phase is better fitted by an exponential decay at X-ray wavelengths, while the radio emission is well described by either an exponential or the canonical t^(-5/3) decay assumed for tidal disruption events. The correlation between radio and X-ray emission during this period can be fitted as L_R ∝ L_X^(1.9 ± 0.2). After 400 d the radio emission at 15.5 GHz has reached a plateau level of 244 ± 8 μJy, which it maintains for at least the next 600 d, while the X-ray emission continues to fade exponentially. This steady level of radio emission is likely due to relic radio lobes from the weak AGN-like activity implied by historical radio observations. We note that while most existing models are based upon the evolution of ejecta which are decoupled from the central black hole, the radio-X-ray correlation during the declining phase is also consistent with core-jet emission coupled to a radiatively efficient accretion flow.
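A correlation of the form L_R ∝ L_X^k is a straight line in log-log space, so the index can be recovered with an ordinary least-squares slope. The luminosity values below are illustrative, not the ASASSN-14li measurements:

```python
import math

def powerlaw_index(x, y):
    """Least-squares slope of log10(y) vs log10(x): the index k in y ~ x^k."""
    lx = [math.log10(v) for v in x]
    ly = [math.log10(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

# Synthetic luminosities obeying L_R = C * L_X^1.9 (assumed numbers,
# chosen only to show that the fit recovers the index):
L_X = [1e42, 3e42, 1e43, 3e43, 1e44]
L_R = [2.0 * v**1.9 for v in L_X]
k = powerlaw_index(L_X, L_R)
```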
Fenton, Tanis R; Anderson, Diane; Groh-Wargo, Sharon; Hoyos, Angela; Ehrenkranz, Richard A; Senterre, Thibault
2018-05-01
To examine how well growth velocity recommendations for preterm infants fit current growth references: Fenton 2013, Olsen 2010, INTERGROWTH 2015, and the World Health Organization Growth Standard 2006. Weight gains were calculated with the Average (2-point), Exponential (2-point), and Early (1-point) methods for 1-, 4-, 8-, 12-, and 16-week time periods. The growth references' weekly velocities (g/kg/d, grams/day, and cm/week) were illustrated graphically with the frequently quoted 15 g/kg/d, 10-30 grams/day, and 1 cm/week rates superimposed. The 15 g/kg/d and 1 cm/week growth velocity rates were calculated from 24-50 weeks and superimposed on the Fenton and Olsen preterm growth charts. The Average and Exponential g/kg/d estimates showed close agreement at all ages (range 5.0-18.9 g/kg/d), while the Early method yielded values as high as 41 g/kg/d. All three preterm growth references were similar to the 15 g/kg/d rate at 34 weeks, but rates were higher at earlier ages and lower at older ages. For grams/day, the growth references changed from 10 to 30 grams/day over 24-33 weeks. Head growth rates generally fit the 1 cm/week velocity for 23-30 weeks, and length growth rates fit for 37-40 weeks. The calculated g/kg/d curves deviated from the growth charts, first downward, then steeply crossing the median curves near term. Human growth is not constant through gestation and early infancy. The frequently quoted 15 g/kg/d, 10-30 grams/day, and 1 cm/week rates only fit current growth references for limited time periods. Rates of 15-20 g/kg/d (calculated using the average or exponential methods) are a reasonable goal for infants at 23-36 weeks, but not beyond. Copyright © 2017 Elsevier Inc. All rights reserved.
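The 2-point Average and Exponential weight-gain methods compared above are commonly written as follows in the neonatal literature; the formulas and the example weights here are stated as assumptions, not taken from the paper:

```python
import math

def gv_average(w1_g, w2_g, days):
    """2-point Average method: grams gained per kg of mid-interval mean
    weight per day."""
    return 1000.0 * (w2_g - w1_g) / (days * (w1_g + w2_g) / 2.0)

def gv_exponential(w1_g, w2_g, days):
    """2-point Exponential method: assumes weight grows as w1 * exp(r*t),
    so the rate is 1000 * ln(w2/w1) / days."""
    return 1000.0 * math.log(w2_g / w1_g) / days

# Illustrative preterm example (weights in grams; numbers are assumptions):
avg = gv_average(1000.0, 1350.0, 21)
expo = gv_exponential(1000.0, 1350.0, 21)
```

On this example the two methods agree to within about 0.1 g/kg/d, consistent with the close agreement between the Average and Exponential estimates reported above.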
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
Using simple environmental variables to estimate below-ground productivity in grasslands
Gill, R.A.; Kelly, R.H.; Parton, W.J.; Day, K.A.; Jackson, R.B.; Morgan, J.A.; Scurlock, J.M.O.; Tieszen, L.L.; Castle, J.V.; Ojima, D.S.; Zhang, X.S.
2002-01-01
In many temperate and annual grasslands, above-ground net primary productivity (NPP) can be estimated by measuring peak above-ground biomass. Estimates of below-ground net primary productivity and, consequently, total net primary productivity are more difficult. We addressed one of the three main objectives of the Global Primary Productivity Data Initiative for grassland systems: to develop simple models or algorithms to estimate missing components of total system NPP. Any estimate of below-ground NPP (BNPP) requires an accounting of total root biomass, the percentage of living biomass, and the annual turnover of live roots. We derived a relationship using above-ground peak biomass and mean annual temperature as predictors of below-ground biomass (r2 = 0.54; P = 0.01). The proportion of live material was set at 0.6, based on published values. We used three different functions to describe root turnover: a constant, a direct function of above-ground biomass, or a positive exponential relationship with mean annual temperature. We tested the various models against a large database of global grassland NPP; the constant-turnover and direct-function models were approximately equally descriptive (r2 = 0.31 and 0.37), while the exponential function had a stronger correlation with the measured values (r2 = 0.40) and a better fit than the other two models at the productive end of the BNPP gradient. When applied to extensive data we assembled from two grassland sites with reliable estimates of total NPP, the direct function was most effective, especially at lower-productivity sites. We provide some caveats for its use in systems that lie at the extremes of the grassland gradient and stress that there are large uncertainties associated with measured and modelled estimates of BNPP.
What Happens to bone health during and after spaceflight?
NASA Technical Reports Server (NTRS)
Sibonga, Jean D.; Evans, Harlan J.; Spector, Elisabeth R.; Maddocks, Mary J.; Smith, Scott A.; Shackelford, Linda C.; LeBlanc, Adrian D.
2006-01-01
Weightless conditions of space flight accelerate bone loss. There are no reports to date that address whether the bone that is lost during spaceflight could ever be recovered. Space-induced bone loss in astronauts is evaluated at the Johnson Space Center (JSC) by measurement of bone mineral density (BMD) using dual-energy X-ray absorptiometry (DXA) scans. Astronauts are routinely scanned preflight and at various time points postflight (greater than or equal to Return+2 days). Two sets of BMD data were used to model spaceflight-induced loss and skeletal recovery in crewmembers following long-duration spaceflight missions (4-6 months). Group I was from astronauts (n=7) who were systematically scanned at multiple time points during the postflight period as part of a research protocol to investigate skeletal recovery. Group II came from a total of 49 sets of preflight and postflight data obtained by different protocols. These data were from 39 different crewmembers, some of whom served on multiple flights. Changes in BMD (between pre- and postflight) were plotted as a function of time (days after landing); the plotted data were fitted to an exponential equation which enabled estimation of i) the BMD change at day 0 after landing and ii) the number of days by which 50% of the lost bone is recovered (half-life). These fits were performed for BMD of the lumbar spine, trochanter, pelvis, femoral neck, and calcaneus. There was consistency between the models for BMD recovery. Based upon the exponential model of BMD restoration, recovery following long-duration missions appears to be substantially complete in crewmembers within 36 months following return to Earth.
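The exponential recovery fit described above reduces to a log-linear least-squares problem. The deficit values below are synthetic, and the single-exponential form is the paper's modeling assumption, not a general fact:

```python
import math

def fit_exponential_recovery(t_days, deficit):
    """Log-linear least-squares fit of deficit(t) = d0 * exp(-k*t).

    Returns (d0, half_life_days), where half-life = ln(2)/k is the number
    of days by which 50% of the lost bone is recovered.
    """
    ly = [math.log(d) for d in deficit]
    n = len(t_days)
    mt = sum(t_days) / n
    ml = sum(ly) / n
    k = (-sum((t - mt) * (l - ml) for t, l in zip(t_days, ly))
         / sum((t - mt) ** 2 for t in t_days))
    d0 = math.exp(ml + k * mt)
    return d0, math.log(2) / k

# Synthetic post-landing BMD deficits (% below preflight), half-life 150 d:
t = [0, 50, 100, 200, 400]
deficit = [6.0 * math.exp(-math.log(2) * ti / 150.0) for ti in t]
d0, t_half = fit_exponential_recovery(t, deficit)
```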
Hertäg, Loreen; Durstewitz, Daniel; Brunel, Nicolas
2014-01-01
Computational models offer a unique tool for understanding the network-dynamical mechanisms which mediate between physiological and biophysical properties, and behavioral function. A traditional challenge in computational neuroscience is, however, that simple neuronal models which can be studied analytically fail to reproduce the diversity of electrophysiological behaviors seen in real neurons, while detailed neuronal models which do reproduce such diversity are intractable analytically and computationally expensive. A number of intermediate models have been proposed whose aim is to capture the diversity of firing behaviors and spike times of real neurons while entailing the simplest possible mathematical description. One such model is the exponential integrate-and-fire neuron with spike rate adaptation (aEIF) which consists of two differential equations for the membrane potential (V) and an adaptation current (w). Despite its simplicity, it can reproduce a wide variety of physiologically observed spiking patterns, can be fit to physiological recordings quantitatively, and, once done so, is able to predict spike times on traces not used for model fitting. Here we compute the steady-state firing rate of aEIF in the presence of Gaussian synaptic noise, using two approaches. The first approach is based on the 2-dimensional Fokker-Planck equation that describes the (V,w)-probability distribution, which is solved using an expansion in the ratio between the time constants of the two variables. The second is based on the firing rate of the EIF model, which is averaged over the distribution of the w variable. These analytically derived closed-form expressions were tested on simulations from a large variety of model cells quantitatively fitted to in vitro electrophysiological recordings from pyramidal cells and interneurons. Theoretical predictions closely agreed with the firing rate of the simulated cells fed with in-vivo-like synaptic noise.
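A minimal forward-Euler simulation of the aEIF model illustrates the two differential equations and the spike-and-reset rule. The parameter set follows the commonly quoted published values for this model; the input current, duration, and time step are illustrative assumptions:

```python
import math

# Adaptive exponential integrate-and-fire (aEIF) neuron, forward-Euler sketch.
# C dV/dt = -g_L(V-E_L) + g_L*Delta_T*exp((V-V_T)/Delta_T) - w + I
# tau_w dw/dt = a(V-E_L) - w; on spike: V -> V_reset, w -> w + b.
C, g_L, E_L = 281.0, 30.0, -70.6      # pF, nS, mV
V_T, Delta_T = -50.4, 2.0             # mV
tau_w, a, b = 144.0, 4.0, 80.5        # ms, nS, pA
V_reset, V_spike = -70.6, 0.0         # mV

def simulate_aeif(I_pA, t_max_ms, dt=0.1):
    V, w = E_L, 0.0
    spike_times = []
    t = 0.0
    while t < t_max_ms:
        dV = (-g_L * (V - E_L) + g_L * Delta_T * math.exp((V - V_T) / Delta_T)
              - w + I_pA) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_spike:              # spike: reset V, increment adaptation
            spike_times.append(t)
            V = V_reset
            w += b
        t += dt
    return spike_times

spikes = simulate_aeif(1000.0, 500.0)  # 1 nA current step for 500 ms
```

With a sufficiently strong step current the model fires repetitively, and the adaptation current w lengthens successive interspike intervals, one of the spiking patterns the abstract refers to.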
The validation of a generalized Hooke's law for coronary arteries.
Wang, Chong; Zhang, Wei; Kassab, Ghassan S
2008-01-01
The exponential form of constitutive model is widely used in biomechanical studies of blood vessels. There are two main issues, however, with this model: 1) the curve fits of experimental data are not always satisfactory, and 2) the material parameters may be oversensitive. A new type of strain measure in a generalized Hooke's law for blood vessels was recently proposed by our group to address these issues. The new model has one nonlinear parameter and six linear parameters. In this study, the stress-strain equation is validated by fitting the model to experimental data of porcine coronary arteries. Material constants of left anterior descending artery and right coronary artery for the Hooke's law were computed with a separable nonlinear least-squares method with an excellent goodness of fit. A parameter sensitivity analysis shows that the stability of material constants is improved compared with the exponential model and a biphasic model. A boundary value problem was solved to demonstrate that the model prediction can match the measured arterial deformation under experimental loading conditions. The validated constitutive relation will serve as a basis for the solution of various boundary value problems of cardiovascular biomechanics.
Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.; ...
2017-01-05
Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. Here we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, in which case they depend solely on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with a highest relative approximation error of less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for the development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.
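The flavor of such early-time/late-time splitting can be sketched with the textbook sphere-diffusion solution: an infinite exponential series that is well approximated at small dimensionless time by a square-root-of-time expression. This is the classical two-term approximation, not the paper's three-term polynomials or fitted coefficients:

```python
import math

def sphere_uptake_series(tau, terms=200):
    """Exact-series fractional uptake for diffusion into a sphere,
    tau = D*t/a^2 (the classical infinite exponential series)."""
    s = sum(math.exp(-n * n * math.pi**2 * tau) / (n * n)
            for n in range(1, terms + 1))
    return 1.0 - (6.0 / math.pi**2) * s

def sphere_uptake_early(tau):
    """Classical two-term early-time (square-root-of-time) approximation."""
    return 6.0 * math.sqrt(tau / math.pi) - 3.0 * tau

tau = 0.05                       # early dimensionless time
exact = sphere_uptake_series(tau)
approx = sphere_uptake_early(tau)
```

At τ = 0.05 the two agree to well within the sub-percent accuracy the paper targets; at late times one would instead truncate the series to its leading exponential term.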
Observed and Normative Discount Functions in Addiction and other Diseases
Cruz Rambaud, Salvador; Muñoz Torrecillas, María J.; Takahashi, Taiki
2017-01-01
The aim of this paper is to find a suitable discount function able to describe the progression of a certain addiction or disease under treatment as a discounting process. In effect, a certain indicator related to a disease decays over time in a manner which is mathematically similar to the way in which discounting has been modeled. We analyze the discount functions observed in experiments which study addictive and other problematic behaviors as well as some alternative hyperbola-like discount functions in order to fit the patience exhibited by the subject after receiving the treatment. Additionally, it has been experimentally found that people with addiction display high rates of discount (impatience) and preference reversals (dynamic inconsistency). This excessive discounting must be correctly modeled by a suitable discount function, otherwise, it can become a trans-disease process underlying addiction and other disorders. The (generalized) exponentiated hyperbolic discount function is proposed to describe the progression of a disease with respect to the treatment, since it maintains the property of inconsistency by exhibiting a decreasing discount rate after an initial period in which the opposite occurs. PMID:28706486
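The distinction between consistent (exponential) and inconsistent (hyperbola-like) discounting can be sketched numerically. The exponentiated-hyperbolic form and all parameter values below are one assumed illustration of a discount function whose instantaneous rate first increases and then decreases, as described above:

```python
import math

def d_exponential(t, k):
    """Exponential discounting: constant instantaneous rate k."""
    return math.exp(-k * t)

def d_exp_hyperbolic(t, k, s, a=2.0):
    """Exponentiated-hyperbolic form (1 + k*t^a)^(-s); with a > 1 its
    instantaneous rate first rises, then falls (parameterization assumed
    for illustration)."""
    return (1.0 + k * t**a) ** (-s)

def discount_rate(D, t, h=1e-5, **kw):
    """Instantaneous discount rate rho(t) = -D'(t)/D(t), central difference."""
    return -(D(t + h, **kw) - D(t - h, **kw)) / (2 * h * D(t, **kw))

k, s = 0.1, 0.8
rates = [discount_rate(d_exp_hyperbolic, t, k=k, s=s) for t in (0.5, 3.0, 30.0)]
```

The non-monotonic rate profile is the "inconsistency" property the paper exploits: the rate rises over an initial period and decreases thereafter, while exponential discounting keeps a constant rate.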
The generalized truncated exponential distribution as a model for earthquake magnitudes
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-04-01
The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite and the upper tail of the exponential distribution does not fit well observations. Hence the truncated exponential distribution (TED) is frequently applied for the modelling of the magnitude distributions in the seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except the upper bound magnitude, are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of the statistical inference are also discussed and an example of empirical data is presented in the current contribution.
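The TED itself, and the mixing weakness described above, can be sketched directly from its CDF. Here β ≈ 2.3 corresponds to a Gutenberg-Richter b-value of 1, and the magnitude bounds are illustrative assumptions:

```python
import math

def ted_cdf(m, m0, mmax, beta):
    """CDF of the truncated exponential distribution (TED) used for
    Gutenberg-Richter magnitudes with lower bound m0 and upper bound mmax."""
    if m <= m0:
        return 0.0
    if m >= mmax:
        return 1.0
    return ((1.0 - math.exp(-beta * (m - m0)))
            / (1.0 - math.exp(-beta * (mmax - m0))))

# Mixing two TEDs that differ only in the upper bound (the weak point the
# paper addresses): the 50/50 mixture below is not itself a TED.
def mixture_cdf(m):
    return 0.5 * ted_cdf(m, 4.0, 7.0, 2.3) + 0.5 * ted_cdf(m, 4.0, 8.0, 2.3)
```

The mixture reaches 1 only at the larger of the two upper bounds and has a kink at the smaller one, so it cannot be written as a single TED, which is the closure weakness the GTED is designed to remove.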
Non-extensive quantum statistics with particle-hole symmetry
NASA Astrophysics Data System (ADS)
Biró, T. S.; Shen, K. M.; Zhang, B. W.
2015-06-01
Based on the Tsallis entropy (1988) and the corresponding deformed exponential function, generalized distribution functions for bosons and fermions have been in use for some time (Teweldeberhan et al., 2003; Silva et al., 2010). However, aiming at a non-extensive quantum statistics, further requirements arise from the symmetric handling of particles and holes (excitations above and below the Fermi level). Naive replacements of the exponential function or "cut and paste" solutions fail to satisfy this symmetry and to be smooth at the Fermi level at the same time. We solve this problem by a general ansatz dividing the deformed exponential into odd and even parts, and demonstrate how earlier suggestions, such as the κ- and q-exponential, behave in this respect.
Exponential decline of deep-sea ecosystem functioning linked to benthic biodiversity loss.
Danovaro, Roberto; Gambi, Cristina; Dell'Anno, Antonio; Corinaldesi, Cinzia; Fraschetti, Simonetta; Vanreusel, Ann; Vincx, Magda; Gooday, Andrew J
2008-01-08
Recent investigations suggest that biodiversity loss might impair the functioning and sustainability of ecosystems. Although deep-sea ecosystems are the most extensive on Earth, represent the largest reservoir of biomass, and host a large proportion of undiscovered biodiversity, the data needed to evaluate the consequences of biodiversity loss on the ocean floor are completely lacking. Here, we present a global-scale study based on 116 deep-sea sites that relates benthic biodiversity to several independent indicators of ecosystem functioning and efficiency. We show that deep-sea ecosystem functioning is exponentially related to deep-sea biodiversity and that ecosystem efficiency is also exponentially linked to functional biodiversity. These results suggest that a higher biodiversity supports higher rates of ecosystem processes and an increased efficiency with which these processes are performed. The exponential relationships presented here, being consistent across a wide range of deep-sea ecosystems, suggest that mutually positive functional interactions (ecological facilitation) can be common in the largest biome of our biosphere. Our results suggest that a biodiversity loss in deep-sea ecosystems might be associated with exponential reductions of their functions. Because the deep sea plays a key role in ecological and biogeochemical processes at a global scale, this study provides scientific evidence that the conservation of deep-sea biodiversity is a priority for a sustainable functioning of the world's oceans.
Compact exponential product formulas and operator functional derivative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, M.
1997-02-01
A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed, and it is applied to several exponential product formulas. A generalization of the Dynkin–Specht–Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians. © 1997 American Institute of Physics.
The Exponential Function--Part VIII
ERIC Educational Resources Information Center
Bartlett, Albert A.
1978-01-01
Presents part eight of a continuing series on the exponential function in which, given the current population of the Earth and assuming a constant growth rate of 1.9 percent, backward extrapolations of world population are made. (SL)
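Bartlett's backward look follows from the constant-growth model P(t) = P_0 e^{rt}. A sketch with illustrative numbers (1.9 %/yr and a roughly four-billion 1978-era population are assumptions in the spirit of the article, not figures from it):

```python
import math

def years_between(p_then, p_now, r):
    """Years needed to grow from p_then to p_now at constant rate r,
    under P(t) = P_0 * exp(r*t)."""
    return math.log(p_now / p_then) / r

# Backward extrapolation: how long ago would the population have been
# just two people if it always grew at 1.9 %/yr?
t = years_between(2.0, 4.0e9, 0.019)
```

At a constant 1.9 % growth rate, two people grow to four billion in only about eleven centuries, the kind of result that makes backward extrapolation of constant growth so instructive.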
A mechanism producing power law etc. distributions
NASA Astrophysics Data System (ADS)
Li, Heling; Shen, Hongjun; Yang, Bin
2017-07-01
Power-law distributions play an increasingly important role in the study of complex systems. Building on the idea of incomplete statistics, three different exponential factors are introduced into the normalization condition, the statistical average, and the Shannon entropy. From the Shannon entropy and the maximum entropy principle, probability distribution functions of exponential form, power-law form, and the product form of a power law and an exponential are then derived. This shows that the maximum entropy principle can fully replace the equal-probability hypothesis. Because power-law distributions and distributions of the product form, which cannot be derived via the equal-probability hypothesis, can be derived with the aid of the maximum entropy principle, it can also be concluded that the maximum entropy principle is the more basic principle: it embodies concepts more extensively and reveals the fundamental laws governing the motion of objects more deeply. At the same time, this principle also reveals an intrinsic link between nature and the various objects and rules of human society.
ERIC Educational Resources Information Center
Aieta, Joseph F.
1987-01-01
This article illustrates how questions from elementary finance can serve as motivation for studying high-order powers, roots, and exponential functions using Logo procedures. A second discussion addresses a relatively unknown algorithm for the trigonometric, exponential, and hyperbolic functions. (PK)
An analytic method to account for drag in the Vinti Satellite theory
NASA Technical Reports Server (NTRS)
Watson, J. S.; Mistretta, G. D.; Bonavito, N. L.
1974-01-01
To retain separability in the Vinti theory of earth satellite motion when a nonconservative force such as air drag is considered, a set of variational equations for the orbital elements is introduced and expressed as functions of the transverse, radial, and normal components of the nonconservative forces acting on the system. In this approach, the Hamiltonian is preserved in form and remains the total energy, but the initial or boundary conditions, and hence the Jacobi constants of the motion, advance with time through the variational equations. In particular, the atmospheric density profile is written as a fitted exponential function of the eccentric anomaly, which adheres to tabular data at all altitudes and simultaneously reduces the variational equations to indefinite integrals with closed-form evaluations. The values of the limits for any arbitrary time interval are obtained from the Vinti program.
Transition from lognormal to χ2-superstatistics for financial time series
NASA Astrophysics Data System (ADS)
Xu, Dan; Beck, Christian
2016-07-01
Share price returns on different time scales can be well modelled by a superstatistical dynamics. Here we provide an investigation of which type of superstatistics is most suitable to properly describe share price dynamics on various time scales. It is shown that while χ2-superstatistics works well on a time scale of days, on the much smaller time scale of minutes the price changes are better described by lognormal superstatistics. The system dynamics thus exhibits a transition from lognormal to χ2 superstatistics as a function of time scale. We discuss a more general model interpolating between both statistics which fits the observed data very well. We also present results on correlation functions of the extracted superstatistical volatility parameter, which exhibit exponential decay for returns on large time scales, whereas for returns on small time scales there are long-range correlations and power-law decay.
Effect of electric field on adsorption of formaldehyde by β-cellobiose in micro-scale
NASA Astrophysics Data System (ADS)
Xu, Bo; Chen, Zhenqian
2018-05-01
To provide microcosmic theoretical support for reducing formaldehyde in building materials by means of electric fields, the adsorption between a formaldehyde molecule and β-cellobiose was studied by density functional theory (DFT). Details of the geometric structures, molecular bonds, and adsorption energy are discussed in turn. The results indicated that the energy of the formaldehyde molecule decreased while the energy of β-cellobiose increased with greater electric intensity. In addition, the adsorption energy between the formaldehyde molecule and β-cellobiose was strongly influenced by the external electric field: it decreased gradually with greater electric intensity, and the resulting curve could be fitted by an exponential function, as verified by experiment. These results confirm that an external electric field would be a good strategy for decreasing formaldehyde within building materials at the microcosmic level.
LFsGRB: Binary neutron star merger rate via the luminosity function of short gamma-ray bursts
NASA Astrophysics Data System (ADS)
Paul, Debdutta
2018-04-01
LFsGRB models the luminosity function (LF) of short gamma-ray bursts (sGRBs) using the available catalog data of all sGRBs detected until October 2017, estimating the luminosities via pseudo-redshifts obtained from the Yonetoku correlation and then assuming a standard delay distribution between the cosmic star formation rate and the production rate of their progenitors. The data are fit well both by exponential cutoff power-law and broken power-law models. Using the derived parameters of these models along with conservative values of the jet opening angles seen in afterglow observations, the true rate of short GRBs is derived. Assuming that a short GRB is produced by each binary neutron star merger (BNSM), the rate of gravitational wave (GW) detections from these mergers is derived for the past, present, and future configurations of the GW detector networks.
Tensile properties of craniofacial tendons in the mature and aged zebrafish
Shah, Rishita R.; Nerurkar, Nandan L.; Wang, Calvin; Galloway, Jenna L.
2015-01-01
The zebrafish Danio rerio is a powerful model for the study of development, regenerative biology, and human disease. However, the analysis of load-bearing tissues such as tendons and ligaments has been limited in this system. This is largely due to technical limitations that preclude accurate measurement of their mechanical properties. Here, we present a custom tensile testing system that applies nano-Newton scale forces to zebrafish tendons as small as 1 mm in length. Tendon properties were remarkably similar to mammalian tendons, including stress-strain nonlinearity and a linear modulus (515±152 MPa) that aligned closely with mammalian data. Additionally, a simple exponential constitutive law used to describe tendon mechanics was successfully fit to zebrafish tendons; the associated material constants agreed with literature values for mammalian tendons. Finally, mature and aged zebrafish comparisons revealed a significant decline in mechanical function with age. Based on the exponential constitutive model, age related changes were primarily caused by a reduction in nonlinearity (e.g. changes in collagen crimp or fiber recruitment). These findings demonstrate the utility of zebrafish as a model to study tendon biomechanics in health and disease. Moreover, these findings suggest that tendon mechanical behavior is highly conserved across vertebrates. PMID:25665155
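A simple exponential constitutive law for tendon is often written σ(ε) = A(e^{Bε} − 1). The sketch below uses assumed constants (not the zebrafish-fit values) to show how the tangent modulus rises with strain, the nonlinearity whose reduction with age is reported above:

```python
import math

# Exponential constitutive law sigma(eps) = A * (exp(B*eps) - 1);
# A and B below are illustrative assumptions, not the paper's constants.
A, B = 2.0, 40.0   # MPa, dimensionless

def stress(eps):
    return A * (math.exp(B * eps) - 1.0)

def tangent_modulus(eps):
    """dsigma/deps = A * B * exp(B*eps): stiffness rises with strain
    (the nonlinear 'toe' region), approaching a linear modulus once
    fibers are recruited."""
    return A * B * math.exp(B * eps)

toe = tangent_modulus(0.0)    # initial stiffness = A*B
lin = tangent_modulus(0.06)   # much stiffer at 6 % strain
```

A loss of this nonlinearity with age (e.g. from changes in collagen crimp or fiber recruitment) would show up as a smaller ratio between the high-strain and toe-region moduli.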
Reciprocal capacitance transients?
NASA Astrophysics Data System (ADS)
Gfroerer, Tim; Simov, Peter; Wanlass, Mark
2007-03-01
When the reverse bias across a semiconductor diode is changed, charge carriers move to accommodate the appropriate depletion thickness, producing a simultaneous change in the device capacitance. Transient capacitance measurements can reveal inhibited carrier motion due to trapping, where the depth of the trap can be evaluated using the temperature-dependent escape rate. However, when we employ this technique on a GaAs0.72P0.28 n+/p diode (a candidate for incorporation in multi-junction solar cells), we observe a highly non-exponential response under a broad range of experimental conditions. Double exponential functions give good fits but lead to non-physical results: the deduced rates depend on the observation time window, and the fast and slow rates, which presumably correspond to deep and shallow levels, have identical activation energies. Meanwhile, we have discovered a universal linear relationship between the inverse of the capacitance and time. An Arrhenius plot of the slope of the reciprocal transient yields an activation energy of approximately 0.4 eV, independent of the observation window and other experimental conditions. This reciprocal behavior leads us to hypothesize that hopping, rather than escape into high-mobility bands, may govern the transport of trapped holes in this system.
Photoacoustic signal attenuation analysis for the assessment of thin layers thickness in paintings
NASA Astrophysics Data System (ADS)
Tserevelakis, George J.; Dal Fovo, Alice; Melessanaki, Krystalia; Fontana, Raffaella; Zacharakis, Giannis
2018-03-01
This study introduces a novel method for the thickness estimation of thin paint layers in works of art, based on photoacoustic signal attenuation analysis (PAcSAA). Ad hoc designed samples with acrylic paint layers (Primary Red Magenta, Cadmium Yellow, Ultramarine Blue) of various thicknesses on glass substrates were realized for the specific application. After characterization by Optical Coherence Tomography imaging, samples were irradiated at the back side using low energy nanosecond laser pulses of 532 nm wavelength. Photoacoustic waves undergo a frequency-dependent exponential attenuation through the paint layer, before being detected by a broadband ultrasonic transducer. Frequency analysis of the recorded time-domain signals allows for the estimation of the average transmitted frequency function, which shows an exponential decay with the layer thickness. Ultrasonic attenuation models were obtained for each pigment and used to fit the data acquired on an inhomogeneous painted mock-up simulating a real canvas painting. Thickness evaluation through PAcSAA resulted in excellent agreement with cross-section analysis with a conventional brightfield microscope. The results of the current study demonstrate the potential of the proposed PAcSAA method for the non-destructive stratigraphic analysis of painted artworks.
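The thickness estimate follows from inverting an assumed exponential attenuation model for the mean transmitted frequency, f(d) = f_0 e^{−c·d}. The calibration constants below are placeholders, not the per-pigment values measured in the study:

```python
import math

def thickness_um(f_meas_MHz, f0_MHz=30.0, c_per_um=0.02):
    """Invert f(d) = f0 * exp(-c*d) for the layer thickness d.

    f0 (frequency at zero thickness) and c (attenuation constant) would
    come from the per-pigment calibration curves; the defaults here are
    assumptions for illustration.
    """
    return math.log(f0_MHz / f_meas_MHz) / c_per_um

d = thickness_um(20.0)   # measured mean transmitted frequency of 20 MHz
```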
Schomer, Paul; Mestre, Vincent; Fidell, Sanford; Berry, Bernard; Gjestland, Truls; Vallet, Michel; Reid, Timothy
2012-04-01
Fidell et al. [(2011), J. Acoust. Soc. Am. 130(2), 791-806] have shown (1) that the rate of growth of annoyance with noise exposure reported in attitudinal surveys of the annoyance of aircraft noise closely resembles the exponential rate of change of loudness with sound level, and (2) that the proportion of a community highly annoyed and the variability in annoyance prevalence rates in communities are well accounted for by a simple model with a single free parameter: a community tolerance level (abbreviated CTL, and represented symbolically in mathematical expressions as L(ct)), expressed in units of DNL. The current study applies the same modeling approach to predicting the prevalence of annoyance of road traffic and rail noise. The prevalence of noise-induced annoyance of all forms of transportation noise is well accounted for by a simple, loudness-like exponential function with community-specific offsets. The model fits all of the road traffic findings well, but the prevalence of annoyance due to rail noise is more accurately predicted separately for interviewing sites with and without high levels of vibration and/or rattle.
Parameter estimation and order selection for an empirical model of VO2 on-kinetics.
Alata, O; Bernard, O
2007-04-27
In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we assess, on simulated data, the performance of simulated annealing for estimating the model parameters and of information criteria for selecting the order. The simulated data are generated with both single-exponential and double-exponential models and corrupted by additive white Gaussian noise. Performance is reported at various signal-to-noise ratios (SNR). For parameter estimation, the results show that the confidence of the estimated parameters improves as the SNR of the fitted response increases. For model selection, the results show that information criteria are well-suited statistical criteria for selecting the number of exponentials.
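The order-selection step can be illustrated with ordinary least squares and the Akaike information criterion standing in for the paper's simulated-annealing estimator and its specific criteria. All amplitudes, time constants, and the noise level below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Simulated on-kinetics: a sum of two rising exponentials plus Gaussian
# noise (amplitudes and time constants are illustrative, not the paper's).
t = np.linspace(0.0, 15.0, 300)
y = 0.6 * (1 - np.exp(-t / 0.5)) + 0.4 * (1 - np.exp(-t / 3.0))
y += rng.normal(0.0, 0.02, t.size)

def mono(t, a, tau):
    return a * (1 - np.exp(-t / tau))

def bi(t, a1, tau1, a2, tau2):
    return a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2))

def aic(resid, k):
    """AIC for Gaussian residuals: n*ln(RSS/n) + 2k (k = #parameters)."""
    n = resid.size
    return n * np.log(np.sum(resid**2) / n) + 2 * k

p1, _ = curve_fit(mono, t, y, p0=[1.0, 1.0])
p2, _ = curve_fit(bi, t, y, p0=[0.5, 0.4, 0.5, 2.0])
aic1 = aic(y - mono(t, *p1), 2)
aic2 = aic(y - bi(t, *p2), 4)
print("model order selected:", 1 if aic1 < aic2 else 2)
```

Because the data truly contain two time scales, the bi-exponential model wins the AIC comparison; rerunning with single-exponential data would favor order 1.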
VO2 Off Transient Kinetics in Extreme Intensity Swimming.
Sousa, Ana; Figueiredo, Pedro; Keskinen, Kari L; Rodríguez, Ferran A; Machado, Leandro; Vilas-Boas, João P; Fernandes, Ricardo J
2011-01-01
Inconsistencies about dynamic asymmetry between the on- and off-transient responses in oxygen uptake are found in the literature. Therefore, the purpose of this study was to characterize the oxygen uptake off-transient kinetics during a maximal 200-m front crawl effort, thus examining the degree to which the on/off symmetry of the oxygen uptake kinetics response was preserved. Eight high-level male swimmers performed a 200-m front crawl at maximal speed during which oxygen uptake was directly measured through breath-by-breath oximetry (averaged every 5 s). This apparatus was connected to the swimmer by a low-hydrodynamic-resistance respiratory snorkel and valve system. The on- and off-transient phases were symmetrical in shape (mirror image), as both were adequately fitted by single-exponential regression models, and no slow component of the oxygen uptake response developed. Mean (± SD) peak oxygen uptake was 69.0 (± 6.3) mL·kg(-1)·min(-1), significantly correlated with the time constant of the off-transient period (r = 0.76, p < 0.05) but not with any of the other oxygen off-transient kinetic parameters studied. Direct relationships between the time constant of the off-transient period and the mean swimming speed of the 200-m (r = 0.77, p < 0.05), and between that time constant and the amplitude of the fast component of the effort period (r = 0.72, p < 0.05), were observed. The mean amplitude and time constant of the off-transient period were significantly greater than the respective on-transient values. In conclusion, although an asymmetry between the on- and off-kinetic parameters was verified, both the 200-m effort and the respective recovery period were better characterized by a single-exponential regression model.
Key points: (1) The VO2 slow component was not observed in the recovery period of extreme-intensity swimming efforts. (2) The on- and off-transient periods were better fitted by a single exponential function, so the effort and recovery periods of extreme-intensity swimming efforts are symmetrical. (3) The rate of VO2 decline during the recovery period may be due not only to the magnitude of the oxygen debt but also to the VO2peak attained during the effort period.
An improved cyan fluorescent protein variant useful for FRET.
Rizzo, Mark A; Springer, Gerald H; Granada, Butch; Piston, David W
2004-04-01
Many genetically encoded biosensors use Förster resonance energy transfer (FRET) between fluorescent proteins to report biochemical phenomena in living cells. Most commonly, the enhanced cyan fluorescent protein (ECFP) is used as the donor fluorophore, coupled with one of several yellow fluorescent protein (YFP) variants as the acceptor. ECFP is used despite several spectroscopic disadvantages, namely a low quantum yield, a low extinction coefficient and a fluorescence lifetime that is best fit by a double exponential. To improve the characteristics of ECFP for FRET measurements, we used a site-directed mutagenesis approach to overcome these disadvantages. The resulting variant, which we named Cerulean (ECFP/S72A/Y145A/H148D), has a greatly improved quantum yield, a higher extinction coefficient and a fluorescence lifetime that is best fit by a single exponential. Cerulean is 2.5-fold brighter than ECFP and replacement of ECFP with Cerulean substantially improves the signal-to-noise ratio of a FRET-based sensor for glucokinase activation.
The topology of large Open Connectome networks for the human brain.
Gastner, Michael T; Ódor, Géza
2016-06-07
The structural human connectome (i.e. the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to nodes and edges. A three-parameter generalized Weibull (also known as a stretched exponential) distribution is a good fit to most of the observed degree distributions. For almost all networks, simple power laws cannot fit the data, but in some cases there is statistical support for power laws with an exponential cutoff. We also calculate the topological (graph) dimension D and the small-world coefficient σ of these networks. While σ suggests a small-world topology, we found that D < 4 showing that long-distance connections provide only a small correction to the topology of the embedding three-dimensional space.
Feasibility of quasi-random band model in evaluating atmospheric radiance
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Mirakhur, N.
1980-01-01
The use of the quasi-random band model in evaluating upwelling atmospheric radiation is investigated. The spectral transmittance and total band absorptance are evaluated for selected molecular bands by using the line-by-line model, the quasi-random band model, the exponential sum fit method, and empirical correlations, and these are compared with the available experimental results. The atmospheric transmittance and upwelling radiance were calculated by using the line-by-line and quasi-random band models and were compared with the results of an existing program called LOWTRAN. The results obtained by the exponential sum fit and empirical relations were not in good agreement with experimental results, and their use cannot be justified for atmospheric studies. The line-by-line model was found to be the best model for atmospheric applications, but it is not practical because of high computational costs. The results of the quasi-random band model compare well with the line-by-line and experimental results. The use of the quasi-random band model is recommended for evaluation of the atmospheric radiation.
A multigrid solver for the semiconductor equations
NASA Technical Reports Server (NTRS)
Bachmann, Bernhard
1993-01-01
We present a multigrid solver for the exponential fitting method. The solver is applied to the current continuity equations of semiconductor device simulation in two dimensions. The exponential fitting method is based on a mixed finite element discretization using the lowest-order Raviart-Thomas triangular element. This discretization method yields a good approximation of front layers and guarantees current conservation. The corresponding stiffness matrix is an M-matrix. 'Standard' multigrid solvers, however, cannot be applied to the resulting system, as this is dominated by an unsymmetric part, which is due to the presence of strong convection in part of the domain. To overcome this difficulty, we explore the connection between Raviart-Thomas mixed methods and the nonconforming Crouzeix-Raviart finite element discretization. In this way we can construct nonstandard prolongation and restriction operators using easily computable weighted L²-projections based on suitable quadrature rules and the upwind effects of the discretization. The resulting multigrid algorithm shows very good results, even for real-world problems and for locally refined grids.
Analytical model for release calculations in solid thin-foils ISOL targets
NASA Astrophysics Data System (ADS)
Egoriti, L.; Boeckx, S.; Ghys, L.; Houngbo, D.; Popescu, L.
2016-10-01
A detailed analytical model has been developed to simulate isotope-release curves from thin-foils ISOL targets. It involves the separate modeling of diffusion and effusion inside the target. The former has been modeled using both first and second Fick's law. The latter, effusion from the surface of the target material to the end of the ionizer, was simulated with the Monte Carlo code MolFlow+. The calculated delay-time distribution for this process was then fitted using a double-exponential function. The release curve obtained from the convolution of diffusion and effusion shows good agreement with experimental data from two different target geometries used at ISOLDE. Moreover, the experimental yields are well reproduced when combining the release fraction with calculated in-target production.
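The convolution of the diffusion and effusion delay distributions can be sketched numerically. The time constants and the double-exponential weights below are assumptions for illustration, not the fitted MolFlow+ values:

```python
import numpy as np

# Sketch of the release-curve construction (all time constants assumed):
# the diffusion delay pdf and the double-exponential effusion delay pdf
# are combined by numerical convolution on a common time grid.
dt = 0.01
t = np.arange(0.0, 50.0, dt)

tau_d = 5.0                               # assumed diffusion time constant
p_diff = np.exp(-t / tau_d) / tau_d       # normalized diffusion delay pdf

# Double-exponential effusion delay (stand-in for the MolFlow+ fit)
w, tau1, tau2 = 0.7, 0.5, 3.0             # assumed fit parameters
p_eff = w * np.exp(-t / tau1) / tau1 + (1 - w) * np.exp(-t / tau2) / tau2

# Discrete convolution approximates the integral when scaled by dt.
p_release = np.convolve(p_diff, p_eff)[: t.size] * dt
total = np.sum(p_release) * dt
print(f"release pdf integrates to ~{total:.3f}")
```

The integral of the convolved density over the grid stays close to 1, as it should for a properly normalized total-delay distribution; the small deviation comes from grid discretization and tail truncation.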
Multiple relaxations of the cluster surface diffusion in a homoepitaxial SrTiO3 layer
NASA Astrophysics Data System (ADS)
Woo, Chang-Su; Chu, Kanghyun; Song, Jong-Hyun; Yang, Chan-Ho
2018-03-01
We examine the surface diffusion process of adatomic clusters on a (001)-oriented SrTiO3 single crystal using reflection high energy electron diffraction (RHEED). We find that the recovery curve of the RHEED intensity acquired after a homoepitaxial half-layer growth can be accurately fit into a double exponential function, indicating the existence of two dominant relaxation mechanisms. The characteristic relaxation times at selected growth temperatures are investigated to determine the diffusion activation barriers of 0.67 eV and 0.91 eV, respectively. The Monte Carlo simulation of the cluster hopping model suggests that the decrease in the number of dimeric and trimeric clusters during surface diffusion is the origin of the observed relaxation phenomena.
Dynamics and cluster formation in charged and uncharged Ficoll70 solutions
NASA Astrophysics Data System (ADS)
Palit, Swomitra; Yethiraj, Anand
2017-08-01
We apply the pulsed-field-gradient NMR (PFG NMR) technique to measure translational diffusion of both uncharged and charged polysaccharide (Ficoll70) in water. Analysis of the data indicates that the NMR signal attenuation above a certain packing fraction can be adequately fitted with a bi-exponential function. The self-diffusion measurements also show that Ficoll70, an often-used compact, spherical polysucrose molecule, is itself nonideal, exhibiting signs of both softness and attractive interactions in the form of a stable suspension consisting of monomers and clusters. Further, we can quantify the fractions of monomers and clusters. This work strengthens the picture of the existence of a bound water layer within and around a porous Ficoll70 particle.
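A bi-exponential analysis of PFG-NMR signal attenuation, yielding the monomer fraction, can be sketched as below. The diffusivities and population fraction are invented values for a hypothetical two-population sample, not results from this work:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of a bi-exponential echo-attenuation analysis (all values are
# assumed): for a mixture of monomers and clusters, the attenuation E(b)
# is modelled as a population-weighted sum of two exponentials in b.
b = np.linspace(0.0, 5.0, 60)             # gradient factor, arbitrary units
D_mono, D_clust, f_mono = 1.0, 0.15, 0.7  # assumed diffusivities / fraction
E = f_mono * np.exp(-b * D_mono) + (1 - f_mono) * np.exp(-b * D_clust)

def biexp(b, f, D1, D2):
    return f * np.exp(-b * D1) + (1 - f) * np.exp(-b * D2)

popt, _ = curve_fit(biexp, b, E, p0=[0.5, 0.8, 0.1])
print(f"monomer fraction: {popt[0]:.2f}")
```

With noisy data the two decay rates must be well separated for the fraction to be identifiable, which is why the bi-exponential form only becomes reliable above the packing fraction where clusters are abundant.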
Adaptive cornea modeling from keratometric data.
Martínez-Finkelshtein, Andrei; López, Darío Ramos; Castro, Gracia M; Alió, Jorge L
2011-07-01
To introduce an iterative, multiscale procedure that allows for better reconstruction of the shape of the anterior surface of the cornea from altimetric data collected by a corneal topographer. The report describes, first, an adaptive, multiscale mathematical algorithm for the parsimonious fit of the corneal surface data that adapts the number of functions used in the reconstruction to the conditions of each cornea. The method also implements a dynamic selection of the parameters and the management of noise. Then, several numerical experiments are performed, comparing it with the results obtained by the standard Zernike-based procedure. The numerical experiments showed that the algorithm exhibits steady exponential error decay, independent of the level of aberration of the cornea. The complexity of each anisotropic Gaussian-basis function in the functional representation is the same, but the parameters vary to fit the current scale. This scale is determined only by the residual errors and not by the number of the iteration. Finally, the position and clustering of the centers, as well as the size of the shape parameters, provides additional spatial information about the regions of higher irregularity. The methodology can be used for the real-time reconstruction of both altimetric data and corneal power maps from the data collected by keratoscopes, such as the Placido ring-based topographers, that will be decisive in early detection of corneal diseases such as keratoconus.
Evidence for a scale-limited low-frequency earthquake source process
NASA Astrophysics Data System (ADS)
Chestler, S. R.; Creager, K. C.
2017-04-01
We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 1010 to 1.9 × 1012 N m (Mw = 0.7-2.1). While regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit increase in Mw), we find that while for large LFEs the b value is 6, for small LFEs it is <1. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 1011 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.
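For an exponential moment-frequency distribution, the maximum-likelihood estimate of the characteristic (mean) moment is simply the sample mean, which can be checked on a synthetic catalog. The catalog size and characteristic moment below mirror the scales quoted in the abstract but the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic LFE catalog drawn from an exponential moment-frequency
# distribution with an assumed characteristic moment of 2e11 N m.
M0 = rng.exponential(2.0e11, size=30000)

# MLE of the exponential mean is the sample mean.
M_char = M0.mean()
print(f"characteristic moment ~ {M_char:.2e} N m")
```

A power-law (Gutenberg-Richter) population would instead show a heavy tail that the sample mean does not summarize well, which is the distinction the study exploits.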
NASA Technical Reports Server (NTRS)
Jewett, M. E.; Dijk, D. J.; Kronauer, R. E.; Dinges, D. F.
1999-01-01
Although it has been well documented that sleep is required for human performance and alertness to recover from low levels after prolonged periods of wakefulness, it remains unclear whether they increase in a linear or asymptotic manner during sleep. It has been postulated that there is a relation between the rate of improvement in neurobehavioral functioning and rate of decline of slow-wave sleep and/or slow-wave activity (SWS/SWA) during sleep, but this has not been verified. Thus, a cross-study comparison was conducted in which dose-response curves (DRCs) were constructed for Stanford Sleepiness Scale (SSS) and Psychomotor Vigilance Task (PVT) tests taken at 1000 hours by subjects who had been allowed to sleep 0 hours, 2 hours, 5 hours or 8 hours the previous night. We found that the DRCs to each PVT metric improved in a saturating exponential manner, with recovery rates that were similar [time constant (T) approximately 2.14 hours] for all the metrics. This recovery rate was slightly faster than, though not statistically significantly different from, the reported rate of SWS/SWA decline (T approximately 2.7 hours). The DRC to the SSS improved much more slowly than psychomotor vigilance, so that it could be fit equally well by a linear function (slope = -0.26) or a saturating exponential function (T = 9.09 hours). We conclude that although SWS/SWA, subjective alertness, and a wide variety of psychomotor vigilance metrics may all change asymptotically during sleep, it remains to be determined whether the underlying physiologic processes governing their expression are different.
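Fitting a saturating exponential dose-response curve to scores at the four sleep doses can be sketched as follows. The performance scores are invented, noise-free values; only the time constant of about 2.14 h echoes the reported recovery rate:

```python
import numpy as np
from scipy.optimize import curve_fit

# Saturating exponential DRC: y(d) = y_max - (y_max - y0)*exp(-d/T),
# where d is hours of prior sleep (all y values below are synthetic).
def sat_exp(d, y0, y_max, T):
    return y_max - (y_max - y0) * np.exp(-d / T)

dose = np.array([0.0, 2.0, 5.0, 8.0])     # hours of sleep allowed
score = sat_exp(dose, 0.2, 1.0, 2.14)     # synthetic, noise-free scores

popt, _ = curve_fit(sat_exp, dose, score, p0=[0.0, 1.0, 2.0])
print(f"recovery time constant T = {popt[2]:.2f} h")
```

With only four doses and three parameters, real (noisy) data would leave T poorly constrained, which is consistent with the study's inability to distinguish a linear from a saturating form for the slowly improving SSS.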
Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation
ERIC Educational Resources Information Center
Gordon, Sheldon P.; Yang, Yajun
2017-01-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
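A minimal version of the comparison the article discusses, interpolating the exponential function with a low-degree polynomial and measuring the error, can be sketched as follows. The interval, node count, and error grid are arbitrary choices for illustration:

```python
import numpy as np

# Interpolate exp(x) on [0, 1] with a quadratic through three equally
# spaced nodes, then measure the maximum error on a fine grid.
nodes = np.linspace(0.0, 1.0, 3)
coeffs = np.polyfit(nodes, np.exp(nodes), 2)   # interpolating quadratic

x = np.linspace(0.0, 1.0, 1001)
max_err = np.max(np.abs(np.exp(x) - np.polyval(coeffs, x)))
print(f"max interpolation error on [0, 1]: {max_err:.2e}")
```

The classical bound |f'''(ξ)|/3! · max|x(x-1/2)(x-1)| ≤ e·0.0481/6 ≈ 0.022 caps the observed error, and adding nodes (or shrinking the interval) drives it down rapidly.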
Vibronic relaxation dynamics of o-dichlorobenzene in its lowest excited singlet state
NASA Astrophysics Data System (ADS)
Liu, Benkang; Zhao, Haiyan; Lin, Xiang; Li, Xinxin; Gao, Mengmeng; Wang, Li; Wang, Wei
2018-01-01
Vibronic dynamics of o-dichlorobenzene in its lowest excited singlet state, S1, is investigated in real time using the femtosecond pump-probe method, combined with time-of-flight mass spectrometry and the photoelectron velocity mapping technique. Relaxation processes for excitation in the range of 276-252 nm can be fitted by a single-exponential decay model, while for wavelengths shorter than 252 nm a two-exponential decay model must be adopted to simulate the transient profiles. Lifetime constants of the vibrationally excited S1 states change from 651 ± 10 ps for 276 nm excitation to 61 ± 1 ps for 242 nm excitation. Both the internal conversion from S1 to the highly vibrationally excited ground state S0 and the intersystem crossing from S1 to the triplet state are believed to play important roles in the de-excitation processes. An exponential fit of the de-excitation rates as a function of excitation energy implies that the de-excitation process starts from the highly vibrationally excited S0 state, which is validated by probing the relaxation following photoexcitation at 281 nm, below the S1 origin. Time-dependent photoelectron kinetic energy distributions have been obtained experimentally. As the excitation wavelength changes from 276 nm to 242 nm, different cationic vibronic states can be populated, determined by the Franck-Condon factors between the geometrically distorted excited singlet states and the final cationic states.
Very-High-Energy γ-Ray Observations of the Blazar 1ES 2344+514 with VERITAS
NASA Astrophysics Data System (ADS)
Allen, C.; Archambault, S.; Archer, A.; Benbow, W.; Bird, R.; Bourbeau, E.; Brose, R.; Buchovecky, M.; Buckley, J. H.; Bugaev, V.; Cardenzana, J. V.; Cerruti, M.; Chen, X.; Christiansen, J. L.; Connolly, M. P.; Cui, W.; Daniel, M. K.; Eisch, J. D.; Falcone, A.; Feng, Q.; Fernandez-Alonso, M.; Finley, J. P.; Fleischhack, H.; Flinders, A.; Fortson, L.; Furniss, A.; Gillanders, G. H.; Griffin, S.; Grube, J.; Hütten, M.; Håkansson, N.; Hanna, D.; Hervet, O.; Holder, J.; Hughes, G.; Humensky, T. B.; Johnson, C. A.; Kaaret, P.; Kar, P.; Kelley-Hoskins, N.; Kertzman, M.; Kieda, D.; Krause, M.; Krennrich, F.; Kumar, S.; Lang, M. J.; Maier, G.; McArthur, S.; McCann, A.; Meagher, K.; Moriarty, P.; Mukherjee, R.; Nguyen, T.; Nieto, D.; O'Brien, S.; de Bhróithe, A. O'Faoláin; Ong, R. A.; Otte, A. N.; Park, N.; Petrashyk, A.; Pichel, A.; Pohl, M.; Popkow, A.; Pueschel, E.; Quinn, J.; Ragan, K.; Reynolds, P. T.; Richards, G. T.; Roache, E.; Rovero, A. C.; Rulten, C.; Sadeh, I.; Santander, M.; Sembroski, G. H.; Shahinyan, K.; Telezhinsky, I.; Tucci, J. V.; Tyler, J.; Wakely, S. P.; Weinstein, A.; Wilhelm, A.; Williams, D. A.
2017-10-01
We present very-high-energy γ-ray observations of the BL Lac object 1ES 2344+514 taken by the Very Energetic Radiation Imaging Telescope Array System between 2007 and 2015. 1ES 2344+514 is detected with a statistical significance above the background of 20.8σ in 47.2 h (livetime) of observations, making this the most comprehensive very-high-energy study of 1ES 2344+514 to date. Using these observations, the temporal properties of 1ES 2344+514 are studied on short and long times-scales. We fit a constant-flux model to nightly and seasonally binned light curves and apply a fractional variability test to determine the stability of the source on different time-scales. We reject the constant-flux model for the 2007-2008 and 2014-2015 nightly binned light curves and for the long-term seasonally binned light curve at the >3σ level. The spectra of the time-averaged emission before and after correction for attenuation by the extragalactic background light are obtained. The observed time-averaged spectrum above 200 GeV is satisfactorily fitted (χ2/NDF = 7.89/6) by a power-law function with an index Γ = 2.46 ± 0.06stat ± 0.20sys and extends to at least 8 TeV. The extragalactic-background-light-deabsorbed spectrum is adequately fit (χ2/NDF = 6.73/6) by a power-law function with an index Γ = 2.15 ± 0.06stat ± 0.20sys while an F-test indicates that the power law with an exponential cut-off function provides a marginally better fit (χ2/NDF = 2.56/5) at the 2.1σ level. The source location is found to be consistent with the published radio location and its spatial extent is consistent with a point source.
Friggens, N C; Badsberg, J H
2007-05-01
The objectives of this study were to determine whether the body condition score curve during lactation could be described using a model amenable to biological interpretation (a non-linear function assuming exponential rates of change in body condition with time), and to quantify the effect of breed and parity on curves of body condition during lactation. Three breeds were represented: Danish Holstein (n = 112), Danish Red (n = 97) and Jersey (n = 8). Cows entered the experiment at the start of first lactation and were studied during consecutive lactations (average number of lactations 2, minimum 1, maximum 3). They remained on the same dietary treatment throughout. Body condition was scored to the nearest half unit on the Danish scale (see Kristensen (1986); derived from the Lowman et al. (1976) system) from 1 to 5 on days 2, 14, 28, 42, 56, 84, 112, 168 and 224 after calving. Additionally, condition score was recorded on the day of drying off, 35, 21 and 7 days before expected calving, and finally on the day of calving. All condition scores were made by trained personnel on the research farm, where the same person made 92% of the scores. The temporal patterns in condition score were modelled as consisting of two underlying processes, one related to days from calving, referred to as lactation only, the other to days from (subsequent) conception, referred to as pregnancy. Both processes were assumed to be exponential functions of time. Each process was modelled separately using exponential functions, i.e. one model for lactation only and one for pregnancy, and then a combined model for both lactation only and pregnancy was fitted. The data set contained 467 lactation periods and 378 pregnancy periods. The temporal patterns in condition score of cows kept under stable and sufficient nutritional conditions were successfully described using a two-component non-linear function.
First-lactation cows had shallower curves; they had greater condition scores at the nadir of the curve. Danish Holstein and Jersey cows were thinner at the end of the mobilisation period, having lost more body condition than the Danish Red breed. Although the dairy breeds ended up thinner, there were no significant differences in the rate at which they lost body condition.
Properties of single NMDA receptor channels in human dentate gyrus granule cells
Lieberman, David N; Mody, Istvan
1999-01-01
Cell-attached single-channel recordings of NMDA channels were carried out in human dentate gyrus granule cells acutely dissociated from slices prepared from hippocampi surgically removed for the treatment of temporal lobe epilepsy (TLE). The channels were activated by L-aspartate (250–500 nM) in the presence of saturating glycine (8 μM). The main conductance was 51 ± 3 pS. In ten of thirty granule cells, clear subconductance states were observed with a mean conductance of 42 ± 3 pS, representing 8 ± 2% of the total openings. The mean open times varied from cell to cell, possibly owing to differences in the epileptogenicity of the tissue of origin. The mean open time was 2.70 ± 0.95 ms (range, 1.24–4.78 ms). In 87% of the cells, three exponential components were required to fit the apparent open time distributions. In the remaining neurons, as in control rat granule cells, two exponentials were sufficient. Shut time distributions were fitted by five exponential components. The average numbers of openings in bursts (1.74 ± 0.09) and clusters (3.06 ± 0.26) were similar to values obtained in rodents. The mean burst (6.66 ± 0.9 ms), cluster (20.1 ± 3.3 ms) and supercluster lengths (116.7 ± 17.5 ms) were longer than those in control rat granule cells, but approached the values previously reported for TLE (kindled) rats. As in rat NMDA channels, adjacent open and shut intervals appeared to be inversely related to each other, but it was only the relative areas of the three open time constants that changed with adjacent shut time intervals. The long openings of human TLE NMDA channels resembled those produced by calcineurin inhibitors in control rat granule cells. Yet the calcineurin inhibitor FK-506 (500 nM) did not prolong the openings of human channels, consistent with a decreased calcineurin activity in human TLE. Many properties of the human NMDA channels resemble those recorded in rat hippocampal neurons.
Both have similar slope conductances, five exponential shut time distributions, complex groupings of openings, and a comparable number of openings per grouping. Other properties of human TLE NMDA channels correspond to those observed in kindling; the openings are considerably long, requiring an additional exponential component to fit their distributions, and inhibition of calcineurin is without effect in prolonging the openings. PMID:10373689
Bodunov, E N; Antonov, Yu A; Simões Gamboa, A L
2017-03-21
The non-exponential room temperature luminescence decay of colloidal quantum dots is often well described by a stretched exponential function. However, the physical meaning of the parameters of the function is not clear in the majority of cases reported in the literature. In this work, the room temperature stretched exponential luminescence decay of colloidal quantum dots is investigated theoretically in an attempt to identify the underlying physical mechanisms associated with the parameters of the function. Three classes of non-radiative transition processes between the excited and ground states of colloidal quantum dots are discussed: long-range resonance energy transfer, multiphonon relaxation, and contact quenching without diffusion. It is shown that multiphonon relaxation cannot explain a stretched exponential functional form of the luminescence decay while such dynamics of relaxation can be understood in terms of long-range resonance energy transfer to acceptors (molecules, quantum dots, or anharmonic molecular vibrations) in the environment of the quantum dots acting as energy-donors or by contact quenching by acceptors (surface traps or molecules) distributed statistically on the surface of the quantum dots. These non-radiative transition processes are assigned to different ranges of the stretching parameter β.
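Fitting a stretched exponential I(t) = I0·exp(-(t/τ)^β) to a luminescence decay can be sketched as follows. The lifetime, stretching parameter, and time window are illustrative choices, not values from the work:

```python
import numpy as np
from scipy.optimize import curve_fit

# Stretched exponential decay model (tau and beta values are assumed).
def stretched_exp(t, i0, tau, beta):
    return i0 * np.exp(-((t / tau) ** beta))

t = np.linspace(0.01, 100.0, 400)          # ns; avoid t = 0 for beta < 1
decay = stretched_exp(t, 1.0, 12.0, 0.6)   # synthetic, noise-free decay

popt, _ = curve_fit(stretched_exp, t, decay, p0=[1.0, 10.0, 0.8])
i0, tau, beta = popt
print(f"tau = {tau:.1f} ns, stretching parameter beta = {beta:.2f}")
```

The recovered β is the quantity the paper links to the underlying quenching mechanism; different non-radiative processes are predicted to produce different ranges of β.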
Firing patterns in the adaptive exponential integrate-and-fire model.
Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram
2008-11-01
For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to real experiments of cortical neurons under step current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
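The two-equation model can be simulated in a few lines with forward Euler. The parameter values below follow the commonly quoted tonic-spiking set from the AdEx literature (Brette and Gerstner, 2005) and the injected current is an arbitrary suprathreshold choice; none of the numbers are taken from this abstract:

```python
import numpy as np

# Forward-Euler simulation of the adaptive exponential integrate-and-fire
# (AdEx) neuron under a constant current step. Units: mV, ms, pA, nS, pF.
C, gL, EL = 281.0, 30.0, -70.6        # capacitance, leak, resting potential
VT, DT = -50.4, 2.0                   # threshold and slope factor
tau_w, a, b = 144.0, 4.0, 80.5        # adaptation time constant / couplings
V_reset, V_spike = -70.6, 0.0         # reset value and numerical threshold
I, dt, t_end = 1000.0, 0.05, 500.0    # step current, time step, duration

V, w, spikes = EL, 0.0, 0
for _ in range(int(t_end / dt)):
    # Membrane equation with the exponential spike-initiation term.
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w   # adaptation current dynamics
    V += dt * dV
    w += dt * dw
    if V >= V_spike:                  # spike: reset V, increment adaptation
        V = V_reset
        w += b
        spikes += 1
print(f"{spikes} spikes in {t_end:.0f} ms")
```

Changing a, b, tau_w and V_reset moves the model between the firing regimes (tonic, adapting, bursting) mapped out in the paper's phase diagram.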
The eigenmode perspective of NMR spin relaxation in proteins
NASA Astrophysics Data System (ADS)
Shapiro, Yury E.; Meirovitch, Eva
2013-12-01
We developed in recent years the two-body (protein and probe) coupled-rotator slowly relaxing local structure (SRLS) approach for elucidating protein dynamics from NMR spin relaxation. So far we used as descriptors the set of physical parameters that enter the SRLS model. They include the global (protein-related) diffusion tensor, D1, the local (probe-related) diffusion tensor, D2, and the local coupling/ordering potential, u. As is common in analyses based on mesoscopic dynamic models, these parameters have been determined with data-fitting techniques. In this study, we describe structural dynamics in terms of the eigenmodes comprising the SRLS time correlation functions (TCFs) generated by using the best-fit parameters as input to the Smoluchowski equation. An eigenmode is a weighted exponential with decay constant given by an eigenvalue of the Smoluchowski operator, and weighting factor determined by the corresponding eigenvector. Obviously, both quantities depend on the SRLS parameters as determined by the SRLS model. Unlike the set of best-fit parameters, the eigenmodes represent patterns of motion of the probe-protein system. The following new information is obtained for the typical probe, the 15N-1H bond. Two eigenmodes, associated with the protein and the probe, dominate when the time scale separation is large (i.e., D2 ≫ D1), the tensorial properties are simple, and the local potential is either very strong or very weak. When the potential exceeds these limits while the remaining conditions are preserved, new eigenmodes arise. The multi-exponentiality of the TCFs is associated in this case with the restricted nature of the local motion. When the time scale separation is no longer large, the rotational degrees of freedom of the protein and the probe become statistically dependent (dynamically coupled). The multi-exponentiality of the TCFs is associated in this case with the restricted nature of both the local and the global motion.
The effects of local diffusion axiality, potential strength, and extent of mode-coupling on the eigenmode setup are investigated. We detect largely global motional or largely local motional eigenmodes. In addition, we detect mixed eigenmodes associated with correlated/prograde or anti-correlated/retrograde rotations of the global (D1) and local (D2) motional modes. The eigenmode paradigm is applied to N-H bond dynamics in the β-sheet residue K19, and the α-helix residue A34, of the third immunoglobulin-binding domain of streptococcal protein G. The largest contribution to the SRLS TCFs is made by mixed anti-correlated D1 and D2 eigenmodes. The next largest contribution is made by D1-dominated eigenmodes. Eigenmodes dominated by the local motion contribute appreciably to A34 and marginally to K19. Correlated D1 and D2 eigenmodes contribute exclusively to K19 and do not contribute above 1% to A34. The differences between K19 and A34 are delineated and rationalized in terms of the best-fit SRLS parameters and mode-mixing. It may be concluded that eigenmode analysis is complementary and supplementary to data-fitting-based analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, H; BC Cancer Agency, Surrey, B.C.; BC Cancer Agency, Vancouver, B.C.
Purpose: The Quantitative Analyses of Normal Tissue Effects in the Clinic (QUANTEC 2010) survey of radiation dose-volume effects on salivary gland function has called for improved understanding of intragland dose sensitivity and the effectiveness of partial sparing in salivary glands. Regional dose susceptibility of the sagittally and coronally sub-segmented parotid gland has been studied. Specifically, we examine whether individual consideration of sub-segments leads to improved prediction of xerostomia compared with whole-parotid mean dose. Methods: Data from 102 patients treated for head-and-neck cancers at the BC Cancer Agency were used in this study. Whole-mouth stimulated saliva was collected before (baseline), three months, and one year after cessation of radiotherapy. Organ volumes were contoured using treatment planning CT images and sub-segmented into regional portions. Both non-parametric (local regression) and parametric (mean dose exponential fitting) methods were employed. A bootstrap technique was used for reliability estimation and cross-comparison. Results: Salivary loss is described well using non-parametric and mean dose models. Parametric fits suggest a significant distinction in dose response between medial-lateral and anterior-posterior aspects of the parotid (p<0.01). Least-squares and least-median-squares estimates differ significantly (p<0.00001), indicating fits may be skewed by noise or outliers. Salivary recovery exhibits a weakly arched dose response: the highest recovery is seen at intermediate doses. Conclusions: Salivary function loss is strongly dose dependent. In contrast no useful dose dependence was observed for function recovery. Regional dose dependence was observed, but may have resulted from a bias in dose distributions.
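The parametric mean-dose exponential fit with bootstrap reliability estimation described above can be sketched as follows. This is a minimal illustration with synthetic data: the one-parameter model f(D) = exp(-a·D) for fractional saliva output, the dose values, and the noise level are all assumptions, not the study's actual model or measurements.

```python
import math
import random

def fit_exp_dose_response(doses, ratios):
    """Least-squares slope of ln(ratio) = -a * dose (through the origin,
    since a zero dose leaves function unchanged: f(0) = 1)."""
    num = sum(d * math.log(r) for d, r in zip(doses, ratios))
    den = sum(d * d for d in doses)
    return -num / den

def bootstrap_ci(doses, ratios, n_boot=2000, seed=0):
    """Percentile-bootstrap 95% CI for the dose-sensitivity parameter a."""
    rng = random.Random(seed)
    pairs = list(zip(doses, ratios))
    estimates = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]  # resample with replacement
        ds, rs = zip(*sample)
        estimates.append(fit_exp_dose_response(ds, rs))
    estimates.sort()
    return estimates[int(0.025 * n_boot)], estimates[int(0.975 * n_boot)]

# Synthetic example: true a = 0.05 per Gy with mild multiplicative noise
rng = random.Random(1)
doses = [5.0, 10.0, 20.0, 30.0, 40.0, 50.0]  # mean doses in Gy (hypothetical)
ratios = [math.exp(-0.05 * d) * (1 + 0.02 * rng.uniform(-1, 1)) for d in doses]
a_hat = fit_exp_dose_response(doses, ratios)
lo, hi = bootstrap_ci(doses, ratios)
```

Refitting on resampled patients gives an empirical spread for the sensitivity parameter, mirroring the paper's reliability estimation.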
Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel
2012-01-01
For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
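A closed-form f-I curve makes the fit cheap because no differential equation is integrated per evaluation. The sketch below uses the standard leaky integrate-and-fire closed-form rate (not the authors' AdEx approximation) and a one-dimensional grid search over input resistance; all parameter values are assumed for illustration only.

```python
import math

def lif_rate(I, R, tau=0.02, t_ref=0.002, v_rest=-65e-3, v_reset=-65e-3, v_th=-50e-3):
    """Closed-form firing rate (Hz) of a leaky integrate-and-fire neuron
    for a constant input current I (A); returns 0 below rheobase."""
    v_inf = v_rest + R * I  # steady-state membrane voltage
    if v_inf <= v_th:
        return 0.0
    t_spike = tau * math.log((v_inf - v_reset) / (v_inf - v_th))
    return 1.0 / (t_ref + t_spike)

def fit_resistance(currents, rates, grid):
    """1-D grid search for the input resistance R that best matches an f-I curve."""
    def sse(R):
        return sum((lif_rate(I, R) - f) ** 2 for I, f in zip(currents, rates))
    return min(grid, key=sse)

# Synthetic f-I data from a 'true' R, then recover it from the grid
R_true = 1e8  # 100 MOhm (assumed value)
currents = [i * 5e-11 for i in range(1, 21)]   # 50 pA .. 1 nA
rates = [lif_rate(I, R_true) for I in currents]
grid = [r * 1e6 for r in range(50, 201)]       # 50 .. 200 MOhm in 1 MOhm steps
R_hat = fit_resistance(currents, rates, grid)
```

Because each rate is a single formula evaluation, scanning a parameter grid over a full f-I curve costs only thousands of arithmetic operations, which is the source of the speedup over numerical integration.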
Characterization of radiation belt electron energy spectra from CRRES observations
NASA Astrophysics Data System (ADS)
Johnston, W. R.; Lindstrom, C. D.; Ginet, G. P.
2010-12-01
Energetic electrons in the outer radiation belt and the slot region exhibit a wide variety of energy spectral forms, more so than radiation belt protons. We characterize the spatial and temporal dependence of these forms using observations from the CRRES satellite Medium Electron Sensor A (MEA) and High-Energy Electron Fluxmeter (HEEF) instruments, together covering an energy range 0.15-8 MeV. Spectra were classified with two independent methods, data clustering and curve-fitting analyses, in each case defining categories represented by power law, exponential, and bump-on-tail (BOT) or other complex shapes. Both methods yielded similar results, with BOT, exponential, and power law spectra respectively dominating in the slot region, outer belt, and regions just beyond the outer belt. The transition from exponential to power law spectra occurs at higher L for lower magnetic latitude. The location of the transition from exponential to BOT spectra is highly correlated with the location of the plasmapause. In the slot region during the days following storm events, electron spectra were observed to evolve from exponential to BOT yielding differential flux minima at 350-650 keV and maxima at 1.5-2 MeV; such evolution has been attributed to energy-dependent losses from scattering by whistler hiss.
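The curve-fitting side of such a classification can be sketched by linearizing both candidate forms and comparing goodness of fit: a power law j(E) = A·E^(-k) is linear in log-log space, while an exponential j(E) = A·exp(-E/E0) is linear in log-linear space. The energies, fluxes, and the two-way decision rule below are illustrative only (the BOT category, which needs a more complex shape, is omitted).

```python
import math

def linfit(x, y):
    """Ordinary least squares y = a + b*x; returns (a, b, r2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

def classify_spectrum(energies, fluxes):
    """Label a spectrum 'power law' or 'exponential' by which linearization
    (log-log vs. log-linear) gives the higher R^2."""
    lx = [math.log(e) for e in energies]
    ly = [math.log(f) for f in fluxes]
    _, _, r2_power = linfit(lx, ly)        # ln j = ln A - k * ln E
    _, _, r2_expo = linfit(energies, ly)   # ln j = ln A - E / E0
    return "power law" if r2_power > r2_expo else "exponential"

energies = [0.2 * (i + 1) for i in range(10)]           # MeV (hypothetical)
expo_flux = [1e6 * math.exp(-e / 0.5) for e in energies]
power_flux = [1e6 * e ** -3 for e in energies]
```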
Reljin, Natasa; Reyes, Bersain A.; Chon, Ki H.
2015-01-01
In this paper, we propose the use of blanket fractal dimension (BFD) to estimate the tidal volume from smartphone-acquired tracheal sounds. We collected tracheal sounds with a Samsung Galaxy S4 smartphone, from five (N = 5) healthy volunteers. Each volunteer performed the experiment six times; first to obtain linear and exponential fitting models, and then to fit new data onto the existing models. Thus, the total number of recordings was 30. The estimated volumes were compared to the true values, obtained with a Respitrace system, which was considered as a reference. Since Shannon entropy (SE) is frequently used as a feature in tracheal sound analyses, we estimated the tidal volume from the same sounds by using SE as well. The evaluation of the performed estimation, using BFD and SE methods, was quantified by the normalized root-mean-squared error (NRMSE). The results show that the BFD outperformed the SE (at least twice smaller NRMSE was obtained). The smallest NRMSE error of 15.877% ± 9.246% (mean ± standard deviation) was obtained with the BFD and exponential model. In addition, it was shown that the fitting curves calculated during the first day of experiments could be successfully used for at least the five following days. PMID:25923929
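The NRMSE metric used to score the BFD and SE estimates is simple to state precisely. A sketch with hypothetical tidal volumes follows; normalizing the RMSE by the reference range is an assumption here, since the abstract does not spell out the normalization.

```python
import math

def nrmse(estimated, reference):
    """Root-mean-squared error normalized by the reference range, in percent."""
    n = len(reference)
    rmse = math.sqrt(sum((e - r) ** 2 for e, r in zip(estimated, reference)) / n)
    return 100.0 * rmse / (max(reference) - min(reference))

# Toy tidal volumes (litres): Respitrace-style reference vs. a hypothetical estimate
ref = [0.40, 0.55, 0.70, 0.85, 1.00]
est = [0.42, 0.50, 0.73, 0.80, 1.05]
err = nrmse(est, ref)
```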
NASA Astrophysics Data System (ADS)
Yang, Jiefan; Lei, Hengchi
2016-02-01
Cloud microphysical properties of a mixed-phase cloud generated by a typical extratropical cyclone in the Tongliao area, Inner Mongolia on 3 May 2014, are analyzed primarily using in situ flight observation data. This study focuses mainly on ice crystal concentration, supercooled cloud water content, and vertical distributions of fit parameters of snow particle size distributions (PSDs). The results showed several differences in microphysical properties between the two penetrations. During penetration of precipitating cloud, the maximum ice particle concentration, liquid water content, and ice water content increased by a factor of 2-3 compared with their counterparts obtained during penetration of a nonprecipitating cloud. The heavily rimed and irregular ice crystals recorded by the 2D imaging probe, together with the vertical distributions of fitting parameters within precipitating cloud, show that the ice particles grew while falling via riming and aggregation, whereas the lightly rimed and pristine ice particles, together with the fitting parameters within non-precipitating cloud, indicate that sublimation dominated. During the two cloud penetrations, the PSDs were generally better represented by gamma distributions than by the exponential form in terms of the coefficient of determination (R2). The correlations between parameters of the exponential/gamma forms within the two penetrations showed no obvious differences compared with previous studies.
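Fitting exponential versus gamma forms to a size distribution can be sketched with a method-of-moments gamma fit and a log-likelihood comparison; the exponential is the special case of the gamma with shape parameter 1. The synthetic diameters below stand in for the flight data, and the moment estimator is one simple choice, not necessarily the fitting method used in the study.

```python
import math
import random

def gamma_loglik(data, shape, scale):
    """Total log-likelihood of data under a gamma(shape, scale) density."""
    return sum((shape - 1) * math.log(x) - x / scale
               - math.lgamma(shape) - shape * math.log(scale) for x in data)

def fit_gamma_moments(data):
    """Method-of-moments gamma fit: shape = mean^2/var, scale = var/mean."""
    n = len(data)
    m = sum(data) / n
    var = sum((x - m) ** 2 for x in data) / n
    return m * m / var, var / m

# Synthetic particle diameters (mm) drawn from a gamma PSD with shape 3
rng = random.Random(0)
diam = [rng.gammavariate(3.0, 0.2) for _ in range(500)]
shape, scale = fit_gamma_moments(diam)
# Exponential is the shape = 1 special case, with the scale matched to the mean
mean = sum(diam) / len(diam)
ll_gamma = gamma_loglik(diam, shape, scale)
ll_expo = gamma_loglik(diam, 1.0, mean)
```

On data with a genuine mode away from zero, the gamma form wins the likelihood comparison, consistent with the better R² the authors report for gamma PSD fits.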
Water quality trend analysis for the Karoon River in Iran.
Naddafi, K; Honari, H; Ahmadi, M
2007-11-01
The Karoon River basin, with a basin area of 67,000 km(2), is located in the southern part of Iran. The discharge and water quality variables have been monitored monthly at the Gatvand (1967-2005) and Khorramshahr (1969-2005) stations of the Karoon River. In this paper the time series of monthly water quality parameters and discharge were analyzed using statistical methods to detect trends and to evaluate the best-fitting models. The Kolmogorov-Smirnov test was used to select the theoretical distribution which best fitted the data. Simple regression was used to examine the concentration-time relationships, which showed better correlation at Khorramshahr station than at Gatvand station. The exponential model better expresses the concentration-time relationship at Khorramshahr station, whereas at Gatvand station the logarithmic model fits better. The correlation coefficients are positive for all of the variables at Khorramshahr station; at Gatvand station all are positive except magnesium (Mg2+), bicarbonates (HCO3-), and temporary hardness, which show a decreasing relationship. Overall, the logarithmic and exponential models best describe the concentration-time relationships for the two stations.
An application of the Krylov-FSP-SSA method to parameter fitting with maximum likelihood
NASA Astrophysics Data System (ADS)
Dinh, Khanh N.; Sidje, Roger B.
2017-12-01
Monte Carlo methods such as the stochastic simulation algorithm (SSA) have traditionally been employed in gene regulation problems. However, there has been increasing interest to directly obtain the probability distribution of the molecules involved by solving the chemical master equation (CME). This requires addressing the curse of dimensionality that is inherent in most gene regulation problems. The finite state projection (FSP) seeks to address the challenge and there have been variants that further reduce the size of the projection or that accelerate the resulting matrix exponential. The Krylov-FSP-SSA variant has proved numerically efficient by combining, on one hand, the SSA to adaptively drive the FSP, and on the other hand, adaptive Krylov techniques to evaluate the matrix exponential. Here we apply this Krylov-FSP-SSA to a mutual inhibitory gene network synthetically engineered in Saccharomyces cerevisiae, in which bimodality arises. We show numerically that the approach can efficiently approximate the transient probability distribution, and this has important implications for parameter fitting, where the CME has to be solved for many different parameter sets. The fitting scheme amounts to an optimization problem of finding the parameter set so that the transient probability distributions fit the observations with maximum likelihood. We compare five optimization schemes for this difficult problem, thereby providing further insights into this approach of parameter estimation that is often applied to models in systems biology where there is a need to calibrate free parameters. Work supported by NSF grant DMS-1320849.
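The core numerical object here is the matrix exponential of the (projected) CME generator. On a toy two-state gene switch the full machinery collapses to a 2x2 problem, which the sketch below solves with plain scaling-and-squaring; the rates are invented, and a real FSP projection would be far larger and would use adaptive Krylov methods instead.

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def expm(A, scaling=20, terms=20):
    """Matrix exponential by scaling-and-squaring with a truncated Taylor series."""
    n = len(A)
    S = [[A[i][j] / 2 ** scaling for j in range(n)] for i in range(n)]
    E = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity
    P = [row[:] for row in E]
    fact = 1.0
    for k in range(1, terms + 1):
        P = mat_mul(P, S)       # P = S^k
        fact *= k
        E = [[E[i][j] + P[i][j] / fact for j in range(n)] for i in range(n)]
    for _ in range(scaling):    # undo the scaling: exp(A) = exp(A/2^s)^(2^s)
        E = mat_mul(E, E)
    return E

# Two-state gene switch: CME generator (columns sum to zero), assumed rates
k_on, k_off = 2.0, 1.0
A = [[-k_on, k_off],
     [k_on, -k_off]]
T = expm([[A[i][j] * 5.0 for j in range(2)] for i in range(2)])  # propagator over t = 5
p0 = [1.0, 0.0]  # start in the 'off' state
p = [T[0][0] * p0[0] + T[0][1] * p0[1],
     T[1][0] * p0[0] + T[1][1] * p0[1]]
```

By t = 5 the distribution has essentially relaxed to the stationary value (k_off, k_on)/(k_on + k_off) = (1/3, 2/3); a likelihood-based fit would repeat this propagation for every candidate parameter set, which is why fast matrix exponentials matter.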
Fry, John S; Lee, Peter N; Forey, Barbara A; Coombs, Katharine J
2013-10-01
The excess lung cancer risk from smoking declines with time quit, but the shape of the decline has never been precisely modelled, or meta-analyzed. From a database of studies of at least 100 cases, we extracted 106 blocks of RRs (from 85 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls (or at-risk) formed the data for fitting the negative exponential model. We estimated the half-life (H, time in years when the excess risk becomes half that for a continuing smoker) for each block, investigated model fit, and studied heterogeneity in H. We also conducted sensitivity analyses allowing for reverse causation, either ignoring short-term quitters (S1) or considering them smokers (S2). Model fit was poor ignoring reverse causation, but much improved for both sensitivity analyses. Estimates of H were similar for all three analyses. For the best-fitting analysis (S1), H was 9.93 (95% CI 9.31-10.60), but varied by sex (females 7.92, males 10.71), and age (<50years 6.98, 70+years 12.99). Given that reverse causation is taken account of, the model adequately describes the decline in excess risk. However, estimates of H may be biased by factors including misclassification of smoking status. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
Lee, Peter N; Fry, John S; Thornton, Alison J
2014-02-01
We attempted to quantify the decline in stroke risk following quitting using the negative exponential model, with methodology previously employed for IHD. We identified 22 blocks of RRs (from 13 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We tried to estimate the half-life (H, time since quit when the excess risk becomes half that for a continuing smoker) for each block. The method failed to converge or produced very variable estimates of H in nine blocks with a current smoker RR <1.40. Rejecting these, and combining blocks by amount smoked in one study where problems arose in model-fitting, the final analyses used 11 blocks. Goodness-of-fit was adequate for each block, the combined estimate of H being 4.78(95%CI 2.17-10.50) years. However, considerable heterogeneity existed, unexplained by any factor studied, with the random-effects estimate 3.08(1.32-7.16). Sensitivity analyses allowing for reverse causation or differing assumed times for the final quitting period gave similar results. The estimates of H are similar for stroke and IHD, and the individual estimates similarly heterogeneous. Fitting the model is harder for stroke, due to its weaker association with smoking. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
Beyond the power law: Uncovering stylized facts in interbank networks
NASA Astrophysics Data System (ADS)
Vandermarliere, Benjamin; Karas, Alexei; Ryckebusch, Jan; Schoors, Koen
2015-06-01
We use daily data on bilateral interbank exposures and monthly bank balance sheets to study network characteristics of the Russian interbank market over August 1998-October 2004. Specifically, we examine the distributions of (un)directed (un)weighted degree, nodal attributes (bank assets, capital and capital-to-assets ratio) and edge weights (loan size and counterparty exposure). We search for the theoretical distribution that fits the data best and report the "best" fit parameters. We observe that all studied distributions are heavy tailed. The fat tail typically contains 20% of the data and can be mostly described well by a truncated power law. Also the power law, stretched exponential and log-normal provide reasonably good fits to the tails of the data. In most cases, however, separating the bulk and tail parts of the data is hard, so we proceed to study the full range of the events. We find that the stretched exponential and the log-normal distributions fit the full range of the data best. These conclusions are robust to (1) whether we aggregate the data over a week, month, quarter or year; (2) whether we look at the "growth" versus "maturity" phases of interbank market development; and (3) with minor exceptions, whether we look at the "normal" versus "crisis" operation periods. In line with prior research, we find that the network topology changes greatly as the interbank market moves from a "normal" to a "crisis" operation period.
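A stretched exponential survival function S(x) = exp(-(x/β)^γ) can be fitted by a double-log linearization, since ln(-ln S) = γ·ln x - γ·ln β. The sketch below recovers the parameters from synthetic "loan sizes"; the plotting-position estimator and sample size are arbitrary choices for illustration, not the paper's procedure.

```python
import math
import random

def fit_stretched_exponential(data):
    """Fit S(x) = exp(-(x/beta)^gamma) by regressing ln(-ln S_emp) on ln x."""
    xs = sorted(data)
    n = len(xs)
    pts = []
    for i, x in enumerate(xs):
        s = 1.0 - (i + 0.5) / n  # empirical survival (plotting position)
        if 0.0 < s < 1.0 and x > 0:
            pts.append((math.log(x), math.log(-math.log(s))))
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    b = sum((p[0] - mx) * (p[1] - my) for p in pts) / sum((p[0] - mx) ** 2 for p in pts)
    a = my - b * mx
    gamma = b
    beta = math.exp(-a / gamma)
    return gamma, beta

# Synthetic 'loan sizes' sampled by inversion: x = beta * (-ln U)^(1/gamma)
rng = random.Random(42)
loans = [2.0 * (-math.log(1.0 - rng.random())) ** (1 / 0.5) for _ in range(2000)]
gamma, beta = fit_stretched_exponential(loans)
```

The same linearization gives a quick visual check of whether the full range of the data, not just the tail, follows the stretched exponential, which is the comparison the paper makes against the log-normal.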
NASA Astrophysics Data System (ADS)
M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.
2014-06-01
The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. In general, plots of drying rate require more smoothing than moisture-content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) was found to be effective for moisture-time curves. The idea of this method consists of approximating the data by a CS regression having continuous first and second derivatives; analytical differentiation of the spline regression permits the instantaneous drying rate to be obtained directly from the experimental data. The method of minimization of the functional of average risk was used successfully to solve the problem. The drying kinetics was fitted with six published exponential thin-layer drying models, compared using the coefficient of determination (R2) and root-mean-square error (RMSE). The result showed that the Two Term model best describes the drying behavior. In addition, CS smoothing proved an effective estimator for the moisture-time curves, as well as for missing moisture-content data, of the seaweed Kappaphycus striatum variety Durian dried in the solar dryer under the conditions tested.
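The spline idea above, smoothing the moisture-time curve and then differentiating analytically to get the instantaneous drying rate, can be sketched with a natural cubic spline in plain Python. The moisture readings below are hypothetical values consistent with the reported endpoints, not the experiment's data.

```python
def spline_coeffs(x, y):
    """Second derivatives of a natural cubic spline through (x, y)."""
    n = len(x)
    y2, u = [0.0] * n, [0.0] * n
    for i in range(1, n - 1):                      # forward sweep (tridiagonal)
        sig = (x[i] - x[i - 1]) / (x[i + 1] - x[i - 1])
        p = sig * y2[i - 1] + 2.0
        y2[i] = (sig - 1.0) / p
        slope_r = (y[i + 1] - y[i]) / (x[i + 1] - x[i])
        slope_l = (y[i] - y[i - 1]) / (x[i] - x[i - 1])
        u[i] = (6.0 * (slope_r - slope_l) / (x[i + 1] - x[i - 1]) - sig * u[i - 1]) / p
    for i in range(n - 2, 0, -1):                  # back substitution
        y2[i] = y2[i] * y2[i + 1] + u[i]
    return y2

def spline_eval(x, y, y2, t):
    """Spline value and analytic first derivative at t (the instantaneous rate)."""
    j = max(0, min(len(x) - 2, max(i for i in range(len(x)) if x[i] <= t)))
    h = x[j + 1] - x[j]
    A, B = (x[j + 1] - t) / h, (t - x[j]) / h
    val = (A * y[j] + B * y[j + 1]
           + ((A ** 3 - A) * y2[j] + (B ** 3 - B) * y2[j + 1]) * h * h / 6.0)
    dval = ((y[j + 1] - y[j]) / h
            - (3 * A * A - 1) / 6.0 * h * y2[j]
            + (3 * B * B - 1) / 6.0 * h * y2[j + 1])
    return val, dval

# Hypothetical moisture-content readings (% wet basis) over drying time (h)
t_h = [0.0, 8.0, 16.0, 24.0, 32.0, 40.0]
mc = [93.4, 62.0, 38.0, 22.0, 13.0, 8.2]
y2 = spline_coeffs(t_h, mc)
m12, rate12 = spline_eval(t_h, mc, y2, 12.0)  # drying rate = -dM/dt at 12 h
```

Because the spline is piecewise cubic, its derivative is available in closed form at any time, which is exactly what makes the instantaneous drying rate recoverable without numerical differencing of noisy data.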
Velocity storage contribution to vestibular self-motion perception in healthy human subjects.
Bertolini, G; Ramat, S; Laurens, J; Bockisch, C J; Marti, S; Straumann, D; Palla, A
2011-01-01
Self-motion perception after a sudden stop from a sustained rotation in darkness lasts approximately as long as reflexive eye movements. We hypothesized that, after an angular velocity step, self-motion perception and reflexive eye movements are driven by the same vestibular pathways. In 16 healthy subjects (25-71 years of age), perceived rotational velocity (PRV) and the vestibulo-ocular reflex (rVOR) after sudden decelerations (90°/s(2)) from constant-velocity (90°/s) earth-vertical axis rotations were simultaneously measured (PRV reported by hand-lever turning; rVOR recorded by search coils). Subjects were upright (yaw) or 90° left-ear-down (pitch). After both yaw and pitch decelerations, PRV rose rapidly and showed a plateau before decaying. In contrast, slow-phase eye velocity (SPV) decayed immediately after the initial increase. SPV and PRV were fitted with the sum of two exponentials: one time constant accounting for the semicircular canal (SCC) dynamics and one accounting for a central process known as the velocity storage mechanism (VSM). Parameters were constrained by requiring equal SCC and VSM time constants for SPV and PRV, while the gains weighting the two exponential functions were free to change. Both SPV (variance-accounted-for: 0.85 ± 0.10) and PRV (variance-accounted-for: 0.86 ± 0.07) were accurately fitted, showing that the differences between the SPV and PRV curves can be explained by a greater relative weight of the VSM in PRV compared with SPV (twofold for yaw, threefold for pitch). These results support our hypothesis that self-motion perception after angular velocity steps is driven by the same central vestibular processes as reflexive eye movements and that no additional mechanisms are required to explain the perceptual dynamics.
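The constraint structure makes the fit partly linear: with the SCC and VSM time constants shared and fixed, the model v(t) = g1·exp(-t/T1) + g2·exp(-t/T2) is linear in the two gains, which then follow from 2x2 normal equations. A sketch on synthetic slow-phase velocity (time constants and gains assumed):

```python
import math

def fit_gains(t, v, T1, T2):
    """Least-squares gains of v(t) = g1*exp(-t/T1) + g2*exp(-t/T2)
    with both time constants held fixed (linear-in-parameters problem)."""
    e1 = [math.exp(-ti / T1) for ti in t]
    e2 = [math.exp(-ti / T2) for ti in t]
    a11 = sum(x * x for x in e1)
    a12 = sum(x * y for x, y in zip(e1, e2))
    a22 = sum(y * y for y in e2)
    b1 = sum(x * vi for x, vi in zip(e1, v))
    b2 = sum(y * vi for y, vi in zip(e2, v))
    det = a11 * a22 - a12 * a12
    g1 = (b1 * a22 - b2 * a12) / det
    g2 = (a11 * b2 - a12 * b1) / det
    return g1, g2

# Synthetic slow-phase eye velocity: canal time constant ~6 s, storage ~15 s
T_scc, T_vsm = 6.0, 15.0
t = [0.5 * i for i in range(60)]
spv = [40.0 * math.exp(-ti / T_scc) + 50.0 * math.exp(-ti / T_vsm) for ti in t]
g_scc, g_vsm = fit_gains(t, spv, T_scc, T_vsm)
```

Refitting the same data with a larger VSM gain and the same time constants reproduces the plateau-then-decay shape reported for PRV.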
Gotow, Naomi; Moritani, Ami; Hayakawa, Yoshinobu; Akutagawa, Akihito; Hashimoto, Hiroshi; Kobayakawa, Tatsu
2015-06-01
In order to develop products that are acceptable to consumers, it is necessary to incorporate consumers' intentions into products' characteristics. Therefore, investigation of consumers' perceptions of the taste or smell of common beverages provides information that should be useful in predicting market responses. In this study, we sought to develop a time-intensity evaluation system for consumer panels. Using our system, we performed time-intensity evaluation of flavor attributes (bitterness and retronasal aroma) that consumers perceived after swallowing a coffee beverage. Additionally, we developed quantitative evaluation methods for determining whether consumer panelists can properly perform time-intensity evaluation. In every trial, we fitted an exponential function to the measured intensity data for bitterness and retronasal aroma. The correlation coefficients between the measured time-intensity data and the fitted exponential curves were greater than 0.8 in about 90% of trials, indicating that we had successfully developed a time-intensity system usable with consumer panelists, even after just a single training trial with a non-trained consumer. We classified participants into two groups based on their consumption of canned coffee beverages. Among the conventional TI parameters, a two-way ANOVA revealed a significant difference only in the AUC between sensory modalities (bitterness compared with retronasal aroma). However, a three-way ANOVA including the time course revealed a significant difference between bitterness and retronasal aroma in the high-consumption group. Moreover, the high-consumption group discriminated between bitterness and retronasal aroma more easily than the low-consumption group. This finding implies that manufacturers should select consumer panelists who are suited to the concepts of their new products. © 2015 Institute of Food Technologists®
A study of hyperelastic models for predicting the mechanical behavior of extensor apparatus.
Elyasi, Nahid; Taheri, Kimia Karimi; Narooei, Keivan; Taheri, Ali Karimi
2017-06-01
In this research, the nonlinear elastic behavior of human extensor apparatus was investigated. To this goal, firstly the best material parameters of hyperelastic strain energy density functions consisting of the Mooney-Rivlin, Ogden, invariants, and general exponential models were derived for the simple tension experimental data. Due to the significance of stress response in other deformation modes of nonlinear models, the calculated parameters were used to study the pure shear and balance biaxial tension behavior of the extensor apparatus. The results indicated that the Mooney-Rivlin model predicts an unstable behavior in the balance biaxial deformation of the extensor apparatus, while the Ogden order 1 represents a stable behavior, although the fitting of experimental data and theoretical model was not satisfactory. However, the Ogden order 6 model was unstable in the simple tension mode and the Ogden order 5 and general exponential models presented accurate and stable results. In order to reduce the material parameters, the invariants model with four material parameters was investigated and this model presented the minimum error and stable behavior in all deformation modes. The ABAQUS Explicit solver was coupled with the VUMAT subroutine code of the invariants model to simulate the mechanical behavior of the central and terminal slips of the extensor apparatus during the passive finger flexion, which is important in the prediction of boutonniere deformity and chronic mallet finger injuries, respectively. Also, to evaluate the adequacy of constitutive models in simulations, the results of the Ogden order 5 were presented. The difference between the predictions was attributed to the better fittings of the invariants model compared with the Ogden model.
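The stability issue can be made concrete with the incompressible Mooney-Rivlin model, whose Cauchy stress has closed forms in the three deformation modes discussed. The sketch below uses a crude monotonicity check as a stand-in for a full Drucker stability analysis, and the parameter values are hypothetical, not fits to the extensor-apparatus data.

```python
def mooney_rivlin_stress(lam, C10, C01, mode):
    """Cauchy stress of an incompressible Mooney-Rivlin solid at stretch lam."""
    if mode == "uniaxial":
        return 2.0 * (lam ** 2 - 1.0 / lam) * (C10 + C01 / lam)
    if mode == "pure_shear":
        return 2.0 * (lam ** 2 - lam ** -2) * (C10 + C01)
    if mode == "equibiaxial":
        return 2.0 * (lam ** 2 - lam ** -4) * (C10 + C01 * lam ** 2)
    raise ValueError(mode)

def is_stable(C10, C01, mode, lams):
    """Crude Drucker-type check: stress should rise monotonically with stretch."""
    s = [mooney_rivlin_stress(l, C10, C01, mode) for l in lams]
    return all(b > a for a, b in zip(s, s[1:]))

lams = [1.0 + 0.05 * i for i in range(1, 21)]  # stretches 1.05 .. 2.0
# Hypothetical parameters: a negative C01 can fit tension data yet misbehave
# in equibiaxial extension, the failure mode reported for Mooney-Rivlin here
stable_set = is_stable(1.0, 0.2, "equibiaxial", lams)
unstable_set = is_stable(0.1, -0.6, "equibiaxial", lams)
```

This is why the paper checks every fitted model in all three modes: parameters calibrated only on simple tension carry no guarantee outside that mode.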
An hourglass model for the flare of HST-1 in M87
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wen-Po; Zhao, Guang-Yao; Chen, Yong Jun
To explain the multi-wavelength light curves (from radio to X-ray) of HST-1 in the M87 jet, we propose an hourglass model that is a modified two-zone system of Tavecchio and Ghisellini (hereafter TG08): a slow hourglass-shaped or Laval-nozzle-shaped layer connected by two revolving exponential surfaces surrounding a fast spine through which plasma blobs flow. Based on the conservation of magnetic flux, the magnetic field changes along the axis of the hourglass. We adopt the result of TG08—the high-energy emission from GeV to TeV can be produced through inverse Compton by the two-zone system, and the photons from radio to X-ray are mainly radiated by the fast inner zone system. Here, we only discuss the light curves of the fast inner blob from radio to X-ray. When a compressible blob travels down the axis of the first bulb in the hourglass, because of magnetic flux conservation, its cross section experiences an adiabatic compression process, which results in particle acceleration and the brightening of HST-1. When the blob moves into the second bulb of the hourglass, because of magnetic flux conservation, the dimming of the knot occurs along with an adiabatic expansion of its cross section. A similar broken exponential function could fit the TeV peaks in M87, which may imply a correlation between the TeV flares of M87 and the light curves from radio to X-ray in HST-1. The Very Large Array (VLA) 22 GHz radio light curve of HST-1 verifies our prediction based on the model fit to the main peak of the VLA 15 GHz radio one.
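The broken exponential mentioned for the TeV peaks is just a piecewise function with separate rise and decay timescales; a minimal sketch (the peak time, peak flux, and timescales are invented, not fitted values):

```python
import math

def broken_exponential(t, t_peak, F_peak, tau_rise, tau_decay):
    """Flux of a flare that rises and decays exponentially around t_peak."""
    if t <= t_peak:
        return F_peak * math.exp((t - t_peak) / tau_rise)
    return F_peak * math.exp(-(t - t_peak) / tau_decay)

# Hypothetical HST-1-like light curve in arbitrary flux units
curve = [broken_exponential(t, 2005.0, 10.0, 0.8, 1.5) for t in
         [2003.0, 2004.0, 2005.0, 2006.0, 2007.0]]
```

A faster rise than decay (tau_rise < tau_decay) matches the adiabatic compression-then-expansion picture of the blob passing through the nozzle.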
Shu-Jiang, Liu; Zhan-Ying, Chen; Yin-Zhong, Chang; Shi-Lian, Wang; Qi, Li; Yuan-Qing, Fan
2013-10-11
Multidimensional gas chromatography is widely applied to atmospheric xenon monitoring for the Comprehensive Nuclear-Test-Ban Treaty (CTBT). To improve the capability for xenon sampling from the atmosphere, sampling techniques have been investigated in detail. The sampling techniques are designed by xenon outflow curves which are influenced by many factors, and the injecting condition is one of the key factors that could influence the xenon outflow curves. In this paper, the xenon outflow curves of single-pulse injection in two-dimensional gas chromatography has been tested and fitted as a function of exponential modified Gaussian distribution. An inference formula of the xenon outflow curve for six-pulse injection is derived, and the inference formula is also tested to compare with its fitting formula of the xenon outflow curve. As a result, the curves of both the one-pulse and six-pulse injections obey the exponential modified Gaussian distribution when the temperature of the activated carbon column's temperature is 26°C and the flow rate of the carrier gas is 35.6mLmin(-1). The retention time of the xenon peak for one-pulse injection is 215min, and the peak width is 138min. For the six-pulse injection, however, the retention time is delayed to 255min, and the peak width broadens to 222min. According to the inferred formula of the xenon outflow curve for the six-pulse injection, the inferred retention time is 243min, the relative deviation of the retention time is 4.7%, and the inferred peak width is 225min, with a relative deviation of 1.3%. Copyright © 2013 Elsevier B.V. All rights reserved.
A Simulation of the ECSS Help Desk with the Erlang a Model
2011-03-01
A popular distribution is the exponential distribution, as shown in Figure 3 (Figure 3: Exponential Distribution; Bourke, 2001). Bourke, P. (2001, January). Miscellaneous Functions. Retrieved January 22, 2011, from http://local.wasp.uwa.edu.au
Lohmann, W
1978-01-01
The shape of the survivorship curve can easily be interpreted on the condition that the probability of death is proportional to an exponentially rising function of ageing. Following Ries's use of a sum for determining the age index, it was investigated to what extent the survivorship curve may be approximated by a sum of exponentials. It follows that, over the range of plausible parameter values, the difference between the pure exponential function and a sum of exponentials lies within the random variation. Because the probability of death varies between diseases, the new formulation is the better one.
Photometric normalization of LROC WAC images
NASA Astrophysics Data System (ADS)
Sato, H.; Denevi, B.; Robinson, M. S.; Hapke, B. W.; McEwen, A. S.; LROC Science Team
2010-12-01
The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) acquires near global coverage on a monthly basis. The WAC is a push frame sensor with a 90° field of view (FOV) in BW mode and 60° FOV in 7-color mode (320 nm to 689 nm). WAC images are acquired during each orbit in 10° latitude segments with cross track coverage of ~50 km. Before mosaicking, WAC images are radiometrically calibrated to remove instrumental artifacts and to convert at sensor radiance to I/F. Images are also photometrically normalized to common viewing and illumination angles (30° phase), a challenge due to the wide angle nature of the WAC where large differences in phase angle are observed in a single image line (±30°). During a single month the equatorial incidence angle drifts about 28° and over the course of ~1 year the lighting completes a 360° cycle. The light scattering properties of the lunar surface depend on incidence(i), emission(e), and phase(p) angles as well as soil properties such as single-scattering albedo and roughness that vary with terrain type and state of maturity [1]. We first tested a Lommel-Seeliger Correction (LSC) [cos(i)/(cos(i) + cos(e))] [2] with a phase function defined by an exponential decay plus 4th order polynomial term [3] which did not provide an adequate solution. Next we employed a LSC with an exponential 2nd order decay phase correction that was an improvement, but still exhibited unacceptable frame-to-frame residuals. In both cases we fitted the LSC I/F vs. phase angle to derive the phase corrections. To date, the best results are with a lunar-lambert function [4] with exponential 2nd order decay phase correction (LLEXP2) [(A1exp(B1p)+A2exp(B2p)+A3) * cos(i)/(cos(e) + cos(i)) + B3cos(i)]. We derived the parameters for the LLEXP2 from repeat imaging of a small region and then corrected that region with excellent results. 
When this correction was applied to the whole Moon the results were less than optimal - no surprise given the variability of the regolith from region to region. As the fitting area increases, the accuracy of curve fitting decreases due to the larger variety of albedo, topography, and composition. Thus we have adopted an albedo-dependent photometric normalization routine. Phase curves are derived for discrete bins of preliminary normalized reflectance, calculated from the Clementine global mosaic, in a fitting area composed predominantly of mare in Oceanus Procellarum. The global WAC mosaic was then corrected pixel-by-pixel according to its preliminary reflectance map with satisfactory results. We observed that the phase curves per normalized-reflectance bin become steeper as the reflectance value increases. Further filtering by using FeO, TiO2, or optical maturity [5] for parameter calculations may help elucidate the effects of surface composition and maturity on photometric properties of the surface. [1] Hapke, B.W. (1993) Theory of Reflectance and Emittance Spectroscopy, Cambridge Univ. Press. [2] Schoenberg (1925) Ada. Soc. Febb., vol. 50. [3] Hillier et al. (1999) Icarus 141, 205-225. [4] McEwen (1991) Icarus 92, 298-311. [5] Lucey et al. (2000) JGR, v105, no E8, p20377-20386.
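The LLEXP2 normalization described above can be sketched as follows. The coefficient values used in the test are placeholders, not the parameters derived by the LROC team:

```python
import numpy as np

def llexp2(i, e, g, A1, B1, A2, B2, A3, B3):
    """Lunar-Lambert model with 2-term exponential phase decay (LLEXP2).
    i, e, g are incidence, emission, and phase angles in radians."""
    phase = A1 * np.exp(B1 * g) + A2 * np.exp(B2 * g) + A3
    return phase * np.cos(i) / (np.cos(e) + np.cos(i)) + B3 * np.cos(i)

# Standard geometry used for normalization: 30 deg incidence and phase, 0 deg emission
STD = (np.radians(30.0), 0.0, np.radians(30.0))

def normalize_iof(iof, i, e, g, params):
    """Scale an observed I/F value to the standard viewing/illumination geometry."""
    return iof * llexp2(*STD, *params) / llexp2(i, e, g, *params)
```

An I/F value observed exactly at the standard geometry is returned unchanged, which is a quick sanity check on any such normalization routine.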
The Mass Distribution of Stellar-mass Black Holes
NASA Astrophysics Data System (ADS)
Farr, Will M.; Sravan, Niharika; Cantrell, Andrew; Kreidberg, Laura; Bailyn, Charles D.; Mandel, Ilya; Kalogera, Vicky
2011-11-01
We perform a Bayesian analysis of the mass distribution of stellar-mass black holes using the observed masses of 15 low-mass X-ray binary systems undergoing Roche lobe overflow and 5 high-mass, wind-fed X-ray binary systems. Using Markov Chain Monte Carlo calculations, we model the mass distribution both parametrically—as a power law, exponential, Gaussian, combination of two Gaussians, or log-normal distribution—and non-parametrically—as histograms with varying numbers of bins. We provide confidence bounds on the shape of the mass distribution in the context of each model and compare the models with each other by calculating their relative Bayesian evidence as supported by the measurements, taking into account the number of degrees of freedom of each model. The mass distribution of the low-mass systems is best fit by a power law, while the distribution of the combined sample is best fit by the exponential model. This difference indicates that the low-mass subsample is not consistent with being drawn from the distribution of the combined population. We examine the existence of a "gap" between the most massive neutron stars and the least massive black holes by considering the value, M_1%, of the 1% quantile from each black hole mass distribution as the lower bound of black hole masses. Our analysis generates posterior distributions for M_1%; the best model (the power law) fitted to the low-mass systems has a distribution of lower bounds with M_1% > 4.3 M_sun with 90% confidence, while the best model (the exponential) fitted to all 20 systems has M_1% > 4.5 M_sun with 90% confidence. We conclude that our sample of black hole masses provides strong evidence of a gap between the maximum neutron star mass and the lower bound on black hole masses. Our results on the low-mass sample are in qualitative agreement with those of Ozel et al., although our broad model selection analysis more reliably reveals the best-fit quantitative description of the underlying mass distribution.
The results on the combined sample of low- and high-mass systems are in qualitative agreement with Fryer & Kalogera, although the presence of a mass gap remains theoretically unexplained.
Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.
ERIC Educational Resources Information Center
Holland, Paul W.; Thayer, Dorothy T.
2000-01-01
Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…
NASA Astrophysics Data System (ADS)
Du, Qishi; Mezey, Paul G.
1998-09-01
In this research we test and compare three possible atom-based screening functions used in the heuristic molecular lipophilicity potential (HMLP). Screening function 1 is a power distance-dependent function, b_i / |R_i - r|^γ; screening function 2 is an exponential distance-dependent function, b_i exp(-|R_i - r| / d_0); and screening function 3 is a weighted distance-dependent function, sign(b_i) exp(ξ |R_i - r| / |b_i|). For every screening function, the parameters (γ, d_0, and ξ) are optimized using 41 common organic molecules of 4 types of compounds: aliphatic alcohols, aliphatic carboxylic acids, aliphatic amines, and aliphatic alkanes. The results of the calculations show that screening function 3 cannot give chemically reasonable results; however, both the power screening function and the exponential screening function give chemically satisfactory results. There are two notable differences between screening functions 1 and 2. First, the exponential screening function has larger values at short distance than the power screening function, therefore more influence from the nearest neighbors is involved using screening function 2 than screening function 1. Second, the power screening function has larger values at long distance than the exponential screening function, therefore screening function 1 is affected by atoms at long distance more than screening function 2. For screening function 2, the suitable range of the parameter d_0 is 1.5 < d_0 < 3.0, and d_0 = 2.0 is recommended. The HMLP developed in this research provides a potential tool for computer-aided three-dimensional drug design.
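The three screening functions compare directly in code. The default parameter values below follow the ranges discussed in the abstract but are used here for illustration only:

```python
import numpy as np

# Atom-based HMLP screening functions; b is the atomic lipophilicity index b_i,
# d is the distance |R_i - r|. Defaults are illustrative, not the optimized fits.
def screen_power(b, d, gamma=2.0):
    """Function 1: power distance dependence, b_i / |R_i - r|^gamma."""
    return b / d**gamma

def screen_exp(b, d, d0=2.0):
    """Function 2: exponential distance dependence, b_i * exp(-|R_i - r| / d0)."""
    return b * np.exp(-d / d0)

def screen_signed(b, d, xi=1.0):
    """Function 3: weighted distance dependence, sign(b_i) * exp(xi*|R_i - r|/|b_i|)."""
    return np.sign(b) * np.exp(xi * d / np.abs(b))
```

Consistent with the abstract's second observation, the power form decays more slowly than the exponential at long range, so distant atoms contribute more under function 1 than under function 2.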
A comparison of methods of fitting several models to nutritional response data.
Vedenov, D; Pesti, G M
2008-02-01
A variety of models have been proposed to fit nutritional input-output response data. The models are typically nonlinear; therefore, fitting the models usually requires sophisticated statistical software and training to use it. An alternative tool for fitting nutritional response models was developed by using widely available and easier-to-use Microsoft Excel software. The tool, implemented as an Excel workbook (NRM.xls), allows simultaneous fitting and side-by-side comparisons of several popular models. This study compared the results produced by the tool we developed and PROC NLIN of SAS. The models compared were the broken line (ascending linear and quadratic segments), saturation kinetics, 4-parameter logistics, sigmoidal, and exponential models. The NRM.xls workbook provided results nearly identical to those of PROC NLIN. Furthermore, the workbook successfully fit several models that failed to converge in PROC NLIN. Two data sets were used as examples to compare fits by the different models. The results suggest that no particular nonlinear model is necessarily best for all nutritional response data.
Spatial analysis of soil organic carbon in Zhifanggou catchment of the Loess Plateau.
Li, Mingming; Zhang, Xingchang; Zhen, Qing; Han, Fengpeng
2013-01-01
Soil organic carbon (SOC) reflects soil quality and plays a critical role in soil protection, food safety, and global climate change. This study involved grid sampling at different depths (6 layers) between 0 and 100 cm in a catchment. A total of 1282 soil samples were collected from 215 plots over 8.27 km(2). A combination of conventional analytical methods and geostatistical methods was used to analyze the data for spatial variability and soil carbon content patterns. The mean SOC content in the 1282 samples from the study field was 3.08 g · kg(-1). The SOC content of each layer decreased with increasing soil depth following a power function. The SOC content of each layer was moderately variable and followed a lognormal distribution. The semi-variograms of the SOC contents of the six layers were fit with the following models: exponential, spherical, exponential, Gaussian, exponential, and exponential, respectively. A moderate spatial dependence was observed in the 0-10 and 10-20 cm layers, which resulted from stochastic and structural factors. The spatial distribution of SOC content in the four layers between 20 and 100 cm was mainly restricted by structural factors. Correlations within each layer were observed between 234 and 562 m. A classical Kriging interpolation was used to directly visualize the spatial distribution of SOC in the catchment. The variability in spatial distribution was related to topography, land use type, and human activity. Finally, the vertical distribution of SOC decreased with depth. Our results suggest that ordinary Kriging interpolation can directly reveal the spatial distribution of SOC and that the sampling distance used in this study is sufficient for interpolation and plotting. More research is needed, however, to clarify the spatial variability at larger scales and to better understand the factors controlling the spatial variability of soil carbon in the Loess Plateau region.
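The exponential semivariogram model named above, and the nugget-to-sill ratio often used to classify spatial dependence, can be sketched as follows. The strong/moderate thresholds (0.25, 0.75) follow a common geostatistical convention that the abstract itself does not spell out:

```python
import numpy as np

def exponential_semivariogram(h, nugget, psill, a):
    """Exponential model: gamma(h) = c0 + c * (1 - exp(-h/a)),
    with nugget c0, partial sill c, and range parameter a."""
    return nugget + psill * (1.0 - np.exp(-h / a))

def spatial_dependence(nugget, psill):
    """Nugget-to-sill ratio: <0.25 strong, 0.25-0.75 moderate, >0.75 weak
    spatial dependence (common convention, assumed here)."""
    return nugget / (nugget + psill)
```

At zero lag the model returns the nugget, and at large lags it approaches the total sill (nugget plus partial sill).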
Insulation Resistance Degradation in Ni-BaTiO3 Multilayer Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Liu, Donhang (David)
2015-01-01
Insulation resistance (IR) degradation in Ni-BaTiO3 multilayer ceramic capacitors has been characterized by the measurement of both time to failure and direct-current (DC) leakage current as a function of stress time under highly accelerated life test conditions. The measured leakage current-time dependence data fit well to an exponential form, and a characteristic growth time τ_SD can be determined. A greater value of τ_SD represents a slower IR degradation process. Oxygen vacancy migration and localization at the grain boundary region results in the reduction of the Schottky barrier height and has been found to be the main reason for IR degradation in Ni-BaTiO3 capacitors. The reduction of barrier height as a function of time follows an exponential relation φ(t) = φ(0)·exp(-2Kt), where the degradation rate constant K = K0·exp(-Ea/kT) is inversely proportional to the mean time to failure (MTTF) and can be determined using an Arrhenius plot. For oxygen vacancy electromigration, a lower barrier height φ(0) will favor a slow IR degradation process, but a lower φ(0) will also promote electronic carrier conduction across the barrier and decrease the insulation resistance. As a result, a moderate barrier height φ(0) (and therefore a moderate IR value) with a longer MTTF (smaller degradation rate constant K) will result in a minimized IR degradation process and the most improved reliability in Ni-BaTiO3 multilayer ceramic capacitors.
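The barrier-height decay and Arrhenius rate constant in the abstract translate directly into code; the numeric values used in the test are arbitrary illustrations:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def barrier_height(t, phi0, K):
    """Barrier reduction with stress time: phi(t) = phi(0) * exp(-2*K*t)."""
    return phi0 * np.exp(-2.0 * K * t)

def rate_constant(T, K0, Ea):
    """Arrhenius degradation rate constant K = K0 * exp(-Ea / (k*T));
    MTTF is inversely proportional to K, so higher T means shorter MTTF."""
    return K0 * np.exp(-Ea / (K_B * T))
```

Plotting ln(K) against 1/T gives a straight line of slope -Ea/k, which is the Arrhenius plot the abstract refers to.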
Wilczyński, Sławomir; Pilawa, Barbara; Koprowski, Robert; Wróbel, Zygmunt; Ptaszkiewicz, Marta; Swakoń, Jan; Olko, Paweł
2012-02-14
Radiation sterilization technology is more actively used now than at any previous time because of its many advantages. Gamma radiation has high penetrating power, relatively low chemical reactivity, and causes only a small temperature rise. On the other hand, radiosterilization can lead to the appearance of radiolytic products, for example free radicals. Free radicals in radiation-sterilized sisomicin, tobramycin and paromomycin were studied by electron paramagnetic resonance (EPR) spectroscopy. A gamma irradiation dose of 25 kGy was used. Concentrations and properties of free radicals in the irradiated antibiotics were studied. EPR spectra were recorded for samples stored in air and in argon. Strong EPR lines were recorded for the gamma-irradiated antibiotics. One- and two-exponential functions were fitted to the experimental points to study the influence of storage time on the EPR line parameters. Our study of free radicals in radiosterilized antibiotics indicates the need for EPR characterization of medicinal substances prior to the sterilization process. We propose the concentration of free radicals and other spectroscopic parameters as useful factors to select the optimal type of sterilization for an individual drug. Important parameters include the time constants τ and the constants K of the exponential functions. The time constants τ give information about how quickly the free radical concentration decreases in irradiated medicinal substances. The constant K(0) gives the free radical concentration in the irradiated medicament after long storage. Copyright © 2011 Elsevier B.V. All rights reserved.
Insulation Resistance Degradation in Ni-BaTiO3 Multilayer Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Liu, Donhang David
2015-01-01
Insulation resistance (IR) degradation in Ni-BaTiO3 multilayer ceramic capacitors has been characterized by the measurement of both time to failure (TTF) and direct-current leakage current as a function of stress time under highly accelerated life test conditions. The measured leakage current-time dependence data fit well to an exponential form, and a characteristic growth time τ_SD can be determined. A greater value of τ_SD represents a slower IR degradation process. Oxygen vacancy migration and localization at the grain boundary region results in the reduction of the Schottky barrier height and has been found to be the main reason for IR degradation in Ni-BaTiO3 capacitors. The reduction of barrier height as a function of time follows an exponential relation φ(t) = φ(0)·exp(-2Kt), where the degradation rate constant K = K0·exp(-Ea/kT) is inversely proportional to the mean TTF (MTTF) and can be determined using an Arrhenius plot. For oxygen vacancy electromigration, a lower barrier height φ(0) will favor a slow IR degradation process, but a lower φ(0) will also promote electronic carrier conduction across the barrier and decrease the IR. As a result, a moderate barrier height φ(0) (and therefore a moderate IR value) with a longer MTTF (smaller degradation rate constant K) will result in a minimized IR degradation process and the most improved reliability in Ni-BaTiO3 multilayer ceramic capacitors.
Xu, Junzhong; Li, Ke; Smith, R. Adam; Waterton, John C.; Zhao, Ping; Ding, Zhaohua; Does, Mark D.; Manning, H. Charles; Gore, John C.
2016-01-01
Background Diffusion-weighted MRI (DWI) signal attenuation is often not mono-exponential (i.e. non-Gaussian diffusion) with stronger diffusion weighting. Several non-Gaussian diffusion models have been developed and may provide new information or higher sensitivity compared with the conventional apparent diffusion coefficient (ADC) method. However, the relative merits of these models for detecting tumor therapeutic response are not fully clear. Methods Conventional ADC and three widely used non-Gaussian models (bi-exponential, stretched exponential, and statistical model) were implemented and compared for assessing SW620 human colon cancer xenografts responding to barasertib, an agent known to induce apoptosis via polyploidy. The Bayesian Information Criterion (BIC) was used for model selection among the three non-Gaussian models. Results Tumor volume, histology, conventional ADC, and all three non-Gaussian DWI models showed significant differences between control and treatment groups after four days of treatment. However, only the non-Gaussian models detected significant changes after two days of treatment. For every treatment or control group, over 65.7% of tumor voxels indicated that the bi-exponential model was strongly or very strongly preferred. Conclusion Non-Gaussian DWI model-derived biomarkers are capable of detecting early chemotherapeutic response of tumors compared with conventional ADC and tumor volume. The bi-exponential model provides better fitting than the statistical and stretched exponential models for the tumor and treatment models used in the current work. PMID:27919785
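A minimal sketch of the mono- vs bi-exponential comparison with BIC, on synthetic DWI data; the tissue parameters below are illustrative, not the study's values:

```python
import numpy as np
from scipy.optimize import curve_fit

def monoexp(b, S0, adc):
    """Conventional mono-exponential (ADC) signal model."""
    return S0 * np.exp(-b * adc)

def biexp(b, S0, f, Dfast, Dslow):
    """Bi-exponential model: fast and slow diffusing compartments."""
    return S0 * (f * np.exp(-b * Dfast) + (1.0 - f) * np.exp(-b * Dslow))

def bic(y, yfit, k):
    """Bayesian Information Criterion from residual sum of squares."""
    n = y.size
    rss = np.sum((y - yfit) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

# Synthetic bi-exponential decay with noise (illustrative parameters)
b = np.linspace(0.0, 3000.0, 16)                 # b-values in s/mm^2
rng = np.random.default_rng(1)
y = biexp(b, 1.0, 0.7, 2.0e-3, 0.3e-3) + rng.normal(0.0, 0.005, b.size)

p1, _ = curve_fit(monoexp, b, y, p0=(1.0, 1e-3))
p2, _ = curve_fit(biexp, b, y, p0=(1.0, 0.5, 2e-3, 5e-4),
                  bounds=([0, 0, 0, 0], [2, 1, 0.1, 0.1]))
```

Because the synthetic signal is genuinely bi-exponential, BIC should prefer the bi-exponential fit despite its two extra parameters, mirroring the voxel-wise model selection in the abstract.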
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1993-01-01
The investigation of overcoming Gibbs phenomenon was continued, i.e., obtaining exponential accuracy at all points including at the discontinuities themselves, from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. It was shown that if we are given the first N expansion coefficients of an L(sub 2) function f(x) in terms of either the trigonometrical polynomials or the Chebyshev or Legendre polynomials, an exponentially convergent approximation to the point values of f(x) in any sub-interval in which it is analytic can be constructed.
NGMIX: Gaussian mixture models for 2D images
NASA Astrophysics Data System (ADS)
Sheldon, Erin
2015-08-01
NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
Non-inverse-square force-distance law for long thin magnets-revisited.
Darvell, Brian W; Gilding, Brian H
2012-05-01
It had previously been shown that the inverse-square law does not apply to the force-distance relationship in the case of a long, thin magnet with one end in close proximity to its image in a permeable plane when simple point-like poles are assumed. Treating the system instead as having a 'polar disc', arising from an assumed bundle of dipoles, led to a double integral that could only be evaluated numerically, and a relationship that still did not match observed behavior. Using an elaborate 'stretched' exponential polynomial to represent the position of an 'elastic' polar disc resulted in a fair representation of the physical response, but this was essentially merely the fitting of an arbitrary function. The present purpose was therefore to find an explicit formula for the force-distance relationship in the polar-disc problem and assess its fit to the previously obtained experimental data. Starting from Coulomb's law a corrected integral formula for the force-distance relationship was derived. The integral in this formula was evaluated explicitly using rescaling, changes of order of integration, reduction by symmetry, and change of variables. The resulting formula was then fitted to data that had been obtained for the force exerted by eighty-five rod-shaped magnets (Alnico V, 3 mm diameter, 170 mm long) perpendicular to a large steel plate, as a function of distance, at small separations (<5 mm). Subsequently, the fit of alternative functions was explored. An explicit formula in terms of elliptic integrals was obtained for the polar-disc problem. Despite the greater fidelity, this too was found not to fit the observed physical behavior. Given that failure, nevertheless a simple formula that conforms closely and parsimoniously to the actual magnet data was found. A key feature remains the marked departure from inverse-square behavior. The failure of the explicit formula to fit the data indicates an inadequate model of the physical system. 
Nonetheless it constitutes a useful tool for quantifying the force-distance relationship on the premise of polar discs. Given these insights, it may now be possible to address the original motivating problem of the behavior of real dental magnets. Copyright © 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Concepción-Acevedo, Jeniffer; Weiss, Howard N; Chaudhry, Waqas Nasir; Levin, Bruce R
2015-01-01
The maximum exponential growth rate, the Malthusian parameter (MP), is commonly used as a measure of fitness in experimental studies of adaptive evolution and of the effects of antibiotic resistance and other genes on the fitness of planktonic microbes. Thanks to automated, multi-well optical density plate readers and computers, with little hands-on effort investigators can readily obtain hundreds of estimates of MPs in less than a day. Here we compare estimates of the relative fitness of antibiotic susceptible and resistant strains of E. coli, Pseudomonas aeruginosa and Staphylococcus aureus based on MP data obtained with automated multi-well plate readers with the results from pairwise competition experiments. This leads us to question the reliability of estimates of MP obtained with these high throughput devices and the utility of these estimates of the maximum growth rates to detect fitness differences.
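A maximum-growth-rate (Malthusian parameter) estimate from an optical density curve is often computed as the steepest sliding-window slope of log OD; this sketch uses that common heuristic, not necessarily the authors' exact procedure, on synthetic logistic growth data:

```python
import numpy as np

def malthusian_parameter(t, od, window=5):
    """Estimate the maximum exponential growth rate as the largest
    sliding-window slope of ln(OD) versus time."""
    ln_od = np.log(od)
    best = -np.inf
    for i in range(len(t) - window + 1):
        slope = np.polyfit(t[i:i + window], ln_od[i:i + window], 1)[0]
        best = max(best, slope)
    return best

# Synthetic logistic growth curve: intrinsic rate r, carrying capacity Kcap
t = np.linspace(0.0, 10.0, 101)
r, Kcap, n0 = 0.8, 1.0, 0.01
od = Kcap / (1.0 + (Kcap / n0 - 1.0) * np.exp(-r * t))

est = malthusian_parameter(t, od)
```

For logistic growth the log-slope is r(1 - N/K), so the estimate approaches r from below early in the curve; density effects always bias it slightly low, one of several caveats behind the abstract's skepticism about plate-reader MP estimates.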
Research on modified the estimates of NOx emissions combined the OMI and ground-based DOAS technique
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Li*, Ang; Xie, Pinhua; Hu, Zhaokun; Wu, Fengcheng; Xu, Jin
2017-04-01
A new method to estimate nitrogen dioxide (NO2) lifetimes and emissions from point sources using satellite measurements combined with mobile passive differential optical absorption spectroscopy (DOAS) and multi-axis DOAS (MAX-DOAS) is described. It is based on using the exponentially modified Gaussian (EMG) fitting method to correct the line densities along the wind direction by fitting the mobile passive DOAS NO2 vertical column density (VCD). An effective lifetime and emission rate are then determined from the parameters of the fit. The obtained results were compared with those acquired by fitting OMI (Ozone Monitoring Instrument) NO2 with the same method; the NOx emission rates were about 195.8 mol/s and 160.6 mol/s, respectively. The latter may be less than the former because of the low spatial resolution of the satellite instrument.
Initial mass function of planetesimals formed by the streaming instability
NASA Astrophysics Data System (ADS)
Schäfer, Urs; Yang, Chao-Chin; Johansen, Anders
2017-01-01
The streaming instability is a mechanism to concentrate solid particles into overdense filaments that undergo gravitational collapse and form planetesimals. However, it remains unclear how the initial mass function of these planetesimals depends on the box dimensions of numerical simulations. To resolve this, we perform simulations of planetesimal formation with the largest box dimensions to date, allowing planetesimals to form simultaneously in multiple filaments that can only emerge within such large simulation boxes. In our simulations, planetesimals with sizes between 80 km and several hundred kilometers form. We find that a power law with a rather shallow exponential cutoff at the high-mass end represents the cumulative birth mass function better than an integrated power law. The steepness of the exponential cutoff is largely independent of box dimensions and resolution, while the exponent of the power law is not constrained at the resolutions we employ. Moreover, we find that the characteristic mass scale of the exponential cutoff correlates with the mass budget in each filament. Together with previous studies of high-resolution simulations with small box domains, our results therefore imply that the cumulative birth mass function of planetesimals is consistent with an exponentially tapered power law with a power-law exponent of approximately -1.6 and a steepness of the exponential cutoff in the range of 0.3-0.4.
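The exponentially tapered power law quoted in the conclusion can be written as a cumulative mass function. The normalization here is arbitrary and the characteristic (tapering) mass is a free parameter; the exponent and steepness defaults come from the ranges quoted in the abstract:

```python
import numpy as np

def tapered_powerlaw(m, m_char, alpha=-1.6, beta=0.35):
    """Cumulative birth mass function N(>m), up to an arbitrary normalization:
    (m/m_char)^alpha * exp(-(m/m_char)^beta), with power-law exponent alpha
    and exponential-cutoff steepness beta."""
    x = m / m_char
    return x**alpha * np.exp(-(x**beta))
```

The function falls as a power law below the characteristic mass and is exponentially suppressed above it, with the suppression becoming gentler as beta decreases.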
NASA Astrophysics Data System (ADS)
Zhang, Fode; Shi, Yimin; Wang, Ruibing
2017-02-01
In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Note that the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered as the generalized Gumbel copula, is discussed to illustrate the structure of the dependent random variable. Employing two iterative algorithms, simulation results are given to compare the performance of estimations and levels of association under different hybrid progressively censoring schemes (HPCSs).
Microcomputer Calculation of Theoretical Pre-Exponential Factors for Bimolecular Reactions.
ERIC Educational Resources Information Center
Venugopalan, Mundiyath
1991-01-01
Described is the application of microcomputers to predict reaction rates based on theoretical atomic and molecular properties taught in undergraduate physical chemistry. Listed is the BASIC program which computes the partition functions for any specific bimolecular reactants. These functions are then used to calculate the pre-exponential factor of…
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error; zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows the derivation of realistic models of the SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
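The zone-wise maximum-likelihood fit of a skew-normal PDF can be sketched with SciPy on synthetic error data; the distribution parameters below are invented for illustration:

```python
import numpy as np
from scipy import stats

# Synthetic SMBG errors for one constant-SD zone (parameters are invented)
rng = np.random.default_rng(2)
errors = stats.skewnorm.rvs(a=3.0, loc=-1.0, scale=4.0, size=2000, random_state=rng)

# Maximum-likelihood fit of the skew-normal model, then a goodness-of-fit check
# (note: KS-testing on the same data used for fitting biases the p-value upward)
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(errors)
ks = stats.kstest(errors, 'skewnorm', args=(a_hat, loc_hat, scale_hat))
```

A positive fitted shape parameter captures the right skew that a symmetric Gaussian cannot, which is the abstract's motivation for moving beyond Gaussian error models.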
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mantha, Sriteja; Yethiraj, Arun
2016-02-24
The properties of water under confinement are of practical and fundamental interest. In this work we study the properties of water in the self-assembled lyotropic phases of gemini surfactants, with a focus on testing the standard analysis of quasi-elastic neutron scattering (QENS) experiments. In QENS experiments the dynamic structure factor is measured and fit to models to extract the translational diffusion constant, D_T, and rotational relaxation time, τ_R. We test this procedure by using simulation results for the dynamic structure factor, extracting the dynamic parameters from the fit as is typically done in experiments, and comparing the values to those directly measured in the simulations. We find that the decoupling approximation, where the intermediate scattering function is assumed to be a product of translational and rotational contributions, is quite accurate. The jump-diffusion and isotropic rotation models, however, are not accurate when the degree of confinement is high. In particular, the exponential approximations for the intermediate scattering function fail for highly confined water, and the values of D_T and τ_R can differ from the measured values by as much as a factor of two. Other models have more fit parameters, however, and with the range of energies and wave-vectors accessible to QENS, the typical analysis appears to be the best choice. In the most confined lamellar phase, the dynamics are sufficiently slow that QENS does not access a large enough time scale, and neutron spin echo measurements would be a valuable technique in addition to QENS.
Calvo-Gallego, Jose L; Domínguez, Jaime; Gómez Cía, Tomás; Gómez Ciriza, Gorka; Martínez-Reina, Javier
2018-04-01
Knowing the mechanical properties of human adipose tissue is key to simulate surgeries such as liposuction, mammoplasty and many plastic surgeries in which the subcutaneous fat is present. One of the most important surgeries, for its incidence, is the breast reconstruction surgery that follows a mastectomy. In this case, achieving a deformed shape similar to the healthy breast is crucial. The reconstruction is most commonly made using autologous tissue, taken from the patient's abdomen. The amount of autologous tissue and its mechanical properties have a strong influence on the shape of the reconstructed breast. In this work, the viscoelastic mechanical properties of the human adipose tissue have been studied. Uniaxial compression stress relaxation tests were performed in adipose tissue specimens extracted from the human abdomen. Two different viscoelastic models were used to fit to the experimental tests: a quasi-linear viscoelastic (QLV) model and an internal variables viscoelastic (IVV) model; each one with four different hyperelastic strain energy density functions to characterise the elastic response: a 5-terms polynomial function, a first order Ogden function, an isotropic Gasser-Ogden-Holzapfel function and a combination of a neoHookean and an exponential function. The IVV model with the Ogden function was the best combination to fit the experimental tests. The viscoelastic properties are not important in the simulation of the static deformed shape of the breast, but they are needed in a relaxation test performed under finite strain rate, particularly, to derive the long-term behaviour (as time tends to infinity), needed to estimate the static deformed shape of the breast. The so obtained stiffness was compared with previous results given in the literature for adipose tissue of different regions, which exhibited a wide dispersion. Copyright © 2018 Elsevier Ltd. All rights reserved.
Yuan, Yufeng; Huang, Kehan; Chang, Mengfang; Qin, Cuifang; Zhang, Sanjun; Pan, Haifeng; Chen, Yan; Xu, Jianhua
2016-02-01
Fluorescent silver nanoclusters (Ag NCs) displaying dual-excitation and dual-emission properties have been developed for the specific detection of NAD(+) (nicotinamide adenine dinucleotide, oxidized form). With the increase of NAD(+) concentrations, the longer wavelength emission (with the peak at 550 nm) was gradually quenched due to the strong interactions between the NAD(+) and Ag NCs, whereas the shorter wavelength emission (peaking at 395 nm) was linearly enhanced. More importantly, the dual-emission intensity ratio (I395/I550), fitted by a single-exponential decay function, can efficiently detect various NAD(+) levels from 100 to 4000 μM, as well as label NAD(+)/NADH (reduced form of NAD) ratios in the range of 1-50. Copyright © 2015 Elsevier Inc. All rights reserved.
Campbell, W.H.; Schiffmacher, E.R.
1986-01-01
Spherical harmonic analysis coefficients of the external and internal parts of the quiet-day geomagnetic field variations (Sq), separated for the N American, European, Central Asian and E Asian regions, were used to determine conductivity profiles to depths of about 600km by the Schmucker equivalent-substitute conductor method. All regions showed a roughly exponential increase of conductivity with depth. Distinct discontinuities seemed to be evident near 255-300km and near 450-600km. Regional differences in the conductivity profiles were shown by the functional fittings to the data. For depths less than about 275km, the N American conductivities seemed to be significantly higher than the other regions. For depths greater than about 300km, the E Asian conductivities were largest. -Authors
1977-09-01
process with an event stream intensity (rate) function that is of degree-two exponential polynomial form. (The use of exponential polynomials is... ) ...would serve as a good initial approximation for the Newton-Raphson method. However, for the purpose of this implementation, the end point which
Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model
NASA Astrophysics Data System (ADS)
Al Sobhi, Mashail M.
2015-02-01
Bayesian estimates of the two parameters and of the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Also, Bayesian prediction bounds for future DGOS from the exponentiated Weibull model are obtained. The symmetric and asymmetric loss functions are considered for Bayesian computations. The Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
NASA Astrophysics Data System (ADS)
Deopa, Nisha; Rao, A. S.; Gupta, Mohini; Vijaya Prakash, G.
2018-01-01
Neodymium doped lithium lead alumino borate glasses were synthesized with the molar composition 10Li2O–10PbO–(10-x)Al2O3–70B2O3–xNd2O3 (where x = 0.1, 0.5, 1.0, 1.5, 2.0 and 2.5 mol%) via the conventional melt quenching technique to understand their lasing potentialities using absorption, emission and photoluminescence decay spectral measurements. The oscillator strengths measured from the absorption spectra were used to estimate the Judd-Ofelt intensity parameters using a least square fitting procedure. The emission spectra recorded for the as-prepared glasses under investigation exhibit two emission transitions, 4F3/2 → 4I11/2 (1063 nm) and 4F3/2 → 4I9/2 (1350 nm), for which radiative parameters have been evaluated. The emission intensity increases with increase in Nd3+ ion concentration up to 1 mol%, beyond which concentration quenching takes place. The decay profiles show single exponential nature for lower Nd3+ ion concentrations and non-exponential behaviour for higher concentrations. To elucidate the nature of the energy transfer process, the non-exponential decay curves were well fitted to the Inokuti-Hirayama model. The relatively higher values of emission cross-sections, branching ratios and quantum efficiency obtained for 1.0 mol% of Nd3+ ions in LiPbAlB glass suggest its aptness for generating lasing action at 1063 nm in the NIR region.
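The Inokuti-Hirayama fit mentioned above assumes donor decay of the form I(t) = I0·exp(−t/τ0 − Q·(t/τ0)^(3/S)), with S = 6 for dipole-dipole energy transfer. A minimal sketch on synthetic data follows; the parameter values are hypothetical, not the measured Nd3+ results:

```python
import numpy as np
from scipy.optimize import curve_fit

S = 6.0  # dipole-dipole interaction

# Inokuti-Hirayama decay: tau0 is the intrinsic donor lifetime and q
# measures the strength of donor-acceptor energy transfer.
def ih_decay(t, i0, tau0, q):
    return i0 * np.exp(-t / tau0 - q * (t / tau0) ** (3.0 / S))

t = np.linspace(0.0, 2.0, 400)            # ms (illustrative)
intensity = ih_decay(t, 1.0, 0.35, 0.8)   # hypothetical parameters

popt, _ = curve_fit(ih_decay, t, intensity, p0=(1.0, 0.3, 0.5))
print(popt)  # recovers approximately (1.0, 0.35, 0.8)
```

A growing fitted q with concentration is the usual signature of the energy-transfer-driven quenching the abstract describes.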
NASA Astrophysics Data System (ADS)
Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.
2013-10-01
In this work we derive symplectic EF/TF RKN methods from symplectic EF/TF PRK methods. EF/TF symplectic RKN methods are also constructed directly from classical symplectic RKN methods. Several numerical examples are given in order to decide which is the most favourable implementation.
Asquith, William H.
2014-01-01
The implementation characteristics of two method-of-L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.
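The sample L-moments that MLM algorithms consume can be computed from Hosking's unbiased probability-weighted moments. The following plain-Python sketch (not the study's R code) implements l1..l4:

```python
import numpy as np

# Unbiased sample L-moments via probability-weighted moments b_r:
#   b_r = (1/n) * sum_{i} [(i-1)(i-2)...(i-r) / ((n-1)(n-2)...(n-r))] * x_(i)
# then l1 = b0, l2 = 2b1 - b0, l3 = 6b2 - 6b1 + b0,
# l4 = 20b3 - 30b2 + 12b1 - b0.
def sample_lmoments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b = np.zeros(4)
    b[0] = x.mean()
    for r in range(1, 4):
        num = np.ones(n)
        den = 1.0
        for k in range(r):
            num *= (i - 1 - k)
            den *= (n - 1 - k)
        b[r] = np.sum(num / den * x) / n
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3, l4

# For [1, 2, 3, 4]: l1 = 2.5 (the mean) and l2 = 5/6
# (half the Gini mean difference); l3 = l4 = 0 by symmetry.
print(sample_lmoments([1, 2, 3, 4]))
```

MLM fitting then equates these sample values to the distribution's theoretical L-moments and solves for the parameters.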
ERIC Educational Resources Information Center
Budinski, Natalija; Subramaniam, Stephanie
2013-01-01
This paper shows how GeoGebra--a dynamic mathematics software--can be used to experiment, visualize and connect various concepts such as function, first derivative, slope, and tangent line. Students were given an assignment to determine the first derivative of the exponential function that they solved while experimenting with GeoGebra. GeoGebra…
Approximating exponential and logarithmic functions using polynomial interpolation
NASA Astrophysics Data System (ADS)
Gordon, Sheldon P.; Yang, Yajun
2017-04-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
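The comparison the article describes can be reproduced numerically: interpolate exp(x) at equally spaced nodes and compare the maximum error against the same-degree Taylor (Maclaurin) polynomial. The degree and interval below are our illustrative choices:

```python
import numpy as np

# Degree-4 interpolating polynomial of exp(x) at 5 equally spaced
# nodes on [-1, 1], versus the degree-4 Maclaurin polynomial.
nodes = np.linspace(-1.0, 1.0, 5)
interp_coeffs = np.polyfit(nodes, np.exp(nodes), 4)

# Maclaurin coefficients 1/k!, highest degree first for polyval.
taylor_coeffs = np.array([1 / 24, 1 / 6, 1 / 2, 1.0, 1.0])

x = np.linspace(-1.0, 1.0, 1001)
err_interp = np.max(np.abs(np.polyval(interp_coeffs, x) - np.exp(x)))
err_taylor = np.max(np.abs(np.polyval(taylor_coeffs, x) - np.exp(x)))

# Spreading the error across the interval beats expanding about x = 0:
print(err_interp, err_taylor)  # interpolation error is several times smaller
```

This mirrors the article's observation: over a whole interval, interpolating polynomials typically outperform the Taylor polynomial of the same degree, which is only locally optimal.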
Atlas, Glen; Li, John K-J; Amin, Shawn; Hahn, Robert G
2017-01-01
A closed-form integro-differential equation (IDE) model of plasma dilution (PD) has been derived which represents both the intravenous (IV) infusion of crystalloid and the postinfusion period. Specifically, PD is mathematically represented using a combination of constant ratio, differential, and integral components. Furthermore, this model has successfully been applied to preexisting data, from a prior human study, in which crystalloid was infused for a period of 30 minutes at the beginning of thyroid surgery. Using Euler's formula and a Laplace transform solution to the IDE, patients could be divided into two distinct groups based on their response to PD during the infusion period. Explicitly, Group 1 patients had an infusion-based PD response which was modeled using an exponentially decaying hyperbolic sine function, whereas Group 2 patients had an infusion-based PD response which was modeled using an exponentially decaying trigonometric sine function. Both Group 1 and Group 2 patients had postinfusion PD responses which were modeled using the same combination of hyperbolic sine and hyperbolic cosine functions. Statistically significant differences, between Groups 1 and 2, were noted with respect to the area under their PD curves during both the infusion and postinfusion periods. Specifically, Group 2 patients exhibited a response to PD which was most likely consistent with a preoperative hypovolemia. Overall, this IDE model of PD appears to be highly "adaptable" and successfully fits clinically-obtained human data on a patient-specific basis, during both the infusion and postinfusion periods. In addition, patient-specific IDE modeling of PD may be a useful adjunct in perioperative fluid management and in assessing clinical volume kinetics, of crystalloid solutions, in real time.
NASA Astrophysics Data System (ADS)
Clarage, James Braun, II
1990-01-01
Methods have been developed for analyzing the diffuse x-ray scattering in the halos about a crystal's Bragg reflections as a means of determining correlations in atomic displacements in protein crystals. The diffuse intensity distribution for rhombohedral insulin, tetragonal lysozyme, and triclinic lysozyme crystals was best simulated in terms of exponential displacement correlation functions. About 90% of the disorder can be accounted for by internal movements correlated with a decay distance of about 6A; the remaining 10% corresponds to intermolecular movements that decay in a distance the order of size of the protein molecule. The results demonstrate that protein crystals fit into neither the Einstein nor the Debye paradigms for thermally fluctuating crystalline solids. Unlike the Einstein model, there are correlations in the atomic displacements, but these correlations decay more steeply with distance than predicted by the Debye-Waller model for an elastic solid. The observed displacement correlations are liquid -like in the sense that they decay exponentially with the distance between atoms, just as positional correlations in a liquid. This liquid-like disorder is similar to the disorder observed in 2-D crystals of polystyrene latex spheres, and similar systems where repulsive interactions dominate; hence, these colloidal crystals appear to provide a better analogy for the dynamics of protein crystals than perfectly elastic lattices.
Nikolo, Martin; Zapf, Vivien S.; Singleton, John; ...
2016-07-22
Vortex dynamics and nonlinear ac response are studied in a Ba(Fe0.94Ni0.06)2As2 (Tc = 18.5 K) bulk superconductor in magnetic fields up to 12 T via ac susceptibility measurements of the first ten harmonics. A comprehensive study of the ac magnetic susceptibility and its first ten harmonics finds shifts to higher temperatures with increasing ac measurement frequencies (10 to 10,000 Hz) for a wide range of ac (1, 5, and 10 Oe) and dc fields (0 to 12 T). The characteristic measurement time constant t1 is extracted from the exponential fit of the data and linked to vortex relaxation. The Anderson-Kim Arrhenius law is applied to determine the flux activation energy Ea/k as a function of dc magnetic field. The de-pinning, or irreversibility lines, were determined by a variety of methods and extensively mapped. The ac response shows surprisingly weak higher harmonic components, suggesting weak nonlinear behavior. Lastly, our data do not support the Fisher model; we do not see an abrupt vortex glass to vortex liquid transition and the resistivity does not drop to zero, although it appears to approach zero exponentially.
NASA Astrophysics Data System (ADS)
Starn, J. J.; Belitz, K.; Carlson, C.
2017-12-01
Groundwater residence-time distributions (RTDs) are critical for assessing susceptibility of water resources to contamination. In this novel approach to estimating regional RTDs, groundwater flow was first simulated using existing regional digital data sets in 13 intermediate-size watersheds (each an average of 7,000 square kilometers) that are representative of a wide range of glacial systems. RTDs were simulated with particle tracking. We refer to these models as "general models" because they are based on regional, as opposed to site-specific, digital data. Parametric RTDs were created from particle RTDs by fitting 1- and 2-component Weibull, gamma, and inverse Gaussian distributions, thus reducing a large number of particle travel times to 3 to 7 parameters (shape, location, and scale for each component plus a mixing fraction) for each modeled area. The scale parameter of these distributions is related to the mean exponential age; the shape parameter controls departure from the ideal exponential distribution and is partly a function of interaction with bedrock and with drainage density. Given the flexible shape and mathematical similarity of these distributions, any of them is potentially a good fit to particle RTDs. The 1-component gamma distribution provided a good fit to basin-wide particle RTDs. RTDs at monitoring wells and streams often have more complicated shapes than basin-wide RTDs, caused in part by heterogeneity in the model, and generally require 2-component distributions. A machine learning model was trained on the RTD parameters using features derived from regionally available watershed characteristics such as recharge rate, material thickness, and stream density. RTDs appeared to vary systematically across the landscape in relation to watershed features.
This relation was used to produce maps of useful metrics with respect to risk-based thresholds, such as the time to first exceedance, time to maximum concentration, time above the threshold (exposure time), and the time until last exceedance; thus, the parameters of groundwater residence time are measures of the intrinsic susceptibility of groundwater to contamination.
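The 1-component parametric-RTD step above can be sketched by fitting a gamma distribution to particle travel times. The travel times below are simulated and the shape/scale values are hypothetical, not taken from the 13 watershed models:

```python
import numpy as np
from scipy import stats

# Synthetic "particle travel times" drawn from a gamma distribution.
rng = np.random.default_rng(1)
ages = rng.gamma(shape=2.0, scale=25.0, size=5000)   # years

# Maximum-likelihood fit with the location fixed at zero;
# returns (shape, loc, scale).
a_hat, loc, scale_hat = stats.gamma.fit(ages, floc=0)

# At the gamma MLE (loc fixed), shape*scale equals the sample mean,
# so the fitted parameters directly encode the mean age.
print(a_hat, scale_hat, a_hat * scale_hat, ages.mean())
```

Reducing thousands of travel times to a shape and a scale in this way is what makes the machine-learning regression on watershed features tractable.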
M. Riesz-Schur-type inequalities for entire functions of exponential type
NASA Astrophysics Data System (ADS)
Ganzburg, M. I.; Nevai, P.; Erdélyi, T.
2015-01-01
We prove a general M. Riesz-Schur-type inequality for entire functions of exponential type. If f and Q are two functions of exponential types σ > 0 and τ ≥ 0, respectively, and if Q is real-valued and the real zeros of Q, not counting multiplicities, are bounded away from each other, then |f(x)| ≤ (σ+τ) (A_{σ+τ}(Q))^{-1/2} ‖Qf‖_{C(ℝ)} for all x ∈ ℝ, where A_s(Q) := inf_{x∈ℝ} ([Q'(x)]^2 + s^2 [Q(x)]^2). We apply this inequality to the weights Q(x) := sin(τx) and Q(x) := x and describe the extremal functions in the corresponding inequalities. Bibliography: 7 titles.
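For the weight Q(x) = sin(τx), the quantity A_s(Q) can be evaluated in closed form. The short computation below is ours, following the definition in the abstract, and shows why this weight is convenient:

```latex
A_s(\sin\tau x)
  = \inf_{x\in\mathbb{R}}\bigl(\tau^2\cos^2(\tau x) + s^2\sin^2(\tau x)\bigr)
  = \inf_{x\in\mathbb{R}}\bigl(\tau^2 + (s^2-\tau^2)\sin^2(\tau x)\bigr)
  = \tau^2 \qquad (s \ge \tau),
```

so with s = σ + τ the inequality reduces to |f(x)| ≤ ((σ+τ)/τ) ‖sin(τx) f‖_{C(ℝ)}.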
Broadband Spectral Investigations of Magnetar Bursts
NASA Astrophysics Data System (ADS)
Kırmızıbayrak, Demet; Şaşmaz Muş, Sinem; Kaneko, Yuki; Göğüş, Ersin
2017-09-01
We present our broadband (2-250 keV) time-averaged spectral analysis of 388 bursts from SGR J1550-5418, SGR 1900+14, and SGR 1806-20 detected with the Rossi X-ray Timing Explorer (RXTE) here and as a database in a companion web-catalog. We find that two blackbody functions (BB+BB), the sum of two modified blackbody functions (LB+LB), the sum of a blackbody function and a power-law function (BB+PO), and a power law with a high-energy exponential cutoff (COMPT) all provide acceptable fits at similar levels. We performed numerical simulations to constrain the best fitting model for each burst spectrum and found that 67.6% of burst spectra with well-constrained parameters are better described by the Comptonized model. We also found that 64.7% of these burst spectra are better described with the LB+LB model, which is employed in the spectral analysis of a soft gamma repeater (SGR) for the first time here, than with the BB+BB and BB+PO models. We found a significant positive lower bound trend on photon index, suggesting a decreasing upper bound on hardness, with respect to total flux and fluence. We compare this result with bursts observed from SGR and AXP (anomalous X-ray pulsar) sources and suggest that the relationship is a distinctive characteristic between the two. We confirm a significant anticorrelation between burst emission area and blackbody temperature, and find that it varies between the hot and cool blackbody temperatures differently than previously discussed. We expand on the interpretation of our results in the framework of a strongly magnetized neutron star.
Nasserie, Tahmina; Tuite, Ashleigh R; Whitmore, Lindsay; Hatchette, Todd; Drews, Steven J; Peci, Adriana; Kwong, Jeffrey C; Friedman, Dara; Garber, Gary; Gubbay, Jonathan; Fisman, David N
2017-01-01
Seasonal influenza epidemics occur frequently. Rapid characterization of seasonal dynamics and forecasting of epidemic peaks and final sizes could help support real-time decision-making related to vaccination and other control measures. Real-time forecasting remains challenging. We used the previously described "incidence decay with exponential adjustment" (IDEA) model, a 2-parameter phenomenological model, to evaluate the characteristics of the 2015-2016 influenza season in 4 Canadian jurisdictions: the Provinces of Alberta, Nova Scotia and Ontario, and the City of Ottawa. Model fits were updated weekly with receipt of incident virologically confirmed case counts. Best-fit models were used to project seasonal influenza peaks and epidemic final sizes. The 2015-2016 influenza season was mild and late-peaking. Parameter estimates generated through fitting were consistent in the 2 largest jurisdictions (Ontario and Alberta) and with pooled data including Nova Scotia counts (R 0 approximately 1.4 for all fits). Lower R 0 estimates were generated in Nova Scotia and Ottawa. Final size projections that made use of complete time series were accurate to within 6% of true final sizes, but final size was underestimated when using pre-peak data. Projections of epidemic peaks stabilized before the true epidemic peak, but these were persistently early (~2 weeks) relative to the true peak. A simple, 2-parameter influenza model provided reasonably accurate real-time projections of influenza seasonal dynamics in an atypically late, mild influenza season. Challenges are similar to those seen with more complex forecasting methodologies. Future work includes identification of seasonal characteristics associated with variability in model performance.
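The IDEA model is I(t) = R0^t / (1+d)^(t²), with t counted in epidemic generations, so ln I(t) is linear in (t, t²) and both parameters can be recovered by ordinary least squares. A minimal sketch on synthetic counts (not the 2015-2016 surveillance data) follows:

```python
import numpy as np

# IDEA: incidence at generation t is I(t) = R0**t / (1 + d)**(t**2),
# hence ln I(t) = t*ln(R0) - t**2*ln(1 + d).
t = np.arange(1, 16, dtype=float)         # epidemic generations
r0_true, d_true = 1.4, 0.02               # hypothetical parameters
incidence = r0_true ** t / (1 + d_true) ** (t ** 2)

# Design matrix [t, -t**2]; solve for [ln R0, ln(1 + d)].
A = np.column_stack([t, -t ** 2])
coef, *_ = np.linalg.lstsq(A, np.log(incidence), rcond=None)
r0_hat, d_hat = np.exp(coef[0]), np.exp(coef[1]) - 1.0

print(r0_hat, d_hat)  # recovers 1.4 and 0.02 on noise-free data
```

In the study the fit is refreshed weekly as new confirmed counts arrive; the projected final size is then obtained by summing the fitted I(t) forward until it decays.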
Toledo, Eran; Collins, Keith A; Williams, Ursula; Lammertin, Georgeanne; Bolotin, Gil; Raman, Jai; Lang, Roberto M; Mor-Avi, Victor
2005-12-01
Echocardiographic quantification of myocardial perfusion is based on analysis of contrast replenishment after destructive high-energy ultrasound impulses (flash-echo). This technique is limited by nonuniform microbubble destruction and the dependency on exponential fitting of a small number of noisy time points. We hypothesized that brief interruptions of contrast infusion (ICI) would result in uniform contrast clearance followed by slow replenishment and, thus, would allow analysis from multiple data points without exponential fitting. Electrocardiographic-triggered images were acquired in 14 isolated rabbit hearts (Langendorff) at 3 levels of coronary flow (baseline, 50%, and 15%) during contrast infusion (Definity) with flash-echo and with a 20-second infusion interruption. Myocardial videointensity was measured over time from flash-echo sequences, from which the characteristic constant β was calculated using an exponential fit. Peak contrast inflow rate was calculated from ICI data using analysis of local time derivatives. Computer simulations were used to investigate the effects of noise on the accuracy of peak contrast inflow rate and β calculations. ICI resulted in uniform contrast clearance and baseline replenishment times of 15 to 25 cardiac cycles. Calculated peak contrast inflow rate followed the changes in coronary flow in all hearts at both levels of reduced flow (P < .05) and had a low intermeasurement variability of 7 ± 6%. With flash-echo, contrast clearance was less uniform and baseline replenishment times were only 4 to 6 cardiac cycles. β decreased significantly only at 15% flow, and had intermeasurement variability of 42 ± 33%. Computer simulations showed that measurement errors in both perfusion indices increased with noise, but β had larger errors at higher rates of contrast inflow.
ICI provides the basis for accurate and reproducible quantification of myocardial perfusion using fast and robust numeric analysis, and may constitute an alternative to the currently used techniques.
NASA Astrophysics Data System (ADS)
Abdo, A. A.; Abeysekara, U.; Allen, B. T.; Aune, T.; Berley, D.; Bonamente, E.; Christopher, G. E.; DeYoung, T.; Dingus, B. L.; Ellsworth, R. W.; Galbraith-Frew, J. G.; Gonzalez, M. M.; Goodman, J. A.; Hoffman, C. M.; Hüntemeyer, P. H.; Hui, C. M.; Kolterman, B. E.; Linnemann, J. T.; McEnery, J. E.; Mincer, A. I.; Morgan, T.; Nemethy, P.; Pretz, J.; Ryan, J. M.; Saz Parkinson, P. M.; Shoup, A.; Sinnis, G.; Smith, A. J.; Vasileiou, V.; Walker, G. P.; Williams, D. A.; Yodh, G. B.
2012-07-01
The Cygnus region is a very bright and complex portion of the TeV sky, host to unidentified sources and a diffuse excess with respect to conventional cosmic-ray propagation models. Two of the brightest TeV sources, MGRO J2019+37 and MGRO J2031+41, are analyzed using Milagro data with a new technique, and their emission is tested under two different spectral assumptions: a power law and a power law with an exponential cutoff. The new analysis technique is based on an energy estimator that uses the fraction of photomultiplier tubes in the observatory that detect the extensive air shower. The photon spectrum is measured in the range 1-100 TeV using the last three years of Milagro data (2005-2008), with the detector in its final configuration. An F-test indicates that MGRO J2019+37 is better fit by a power law with an exponential cutoff than by a simple power law. The best-fitting parameters for the power law with exponential cutoff model are a normalization at 10 TeV of 7(+5/−2) × 10^−10 s^−1 m^−2 TeV^−1, a spectral index of 2.0(+0.5/−1.0), and a cutoff energy of 29(+50/−16) TeV. MGRO J2031+41 shows no evidence of a cutoff. The best-fitting parameters for a power law are a normalization of (2.1 ± 0.6) × 10^−10 s^−1 m^−2 TeV^−1 and a spectral index of 3.22(+0.23/−0.18). The overall flux is subject to a ~30% systematic uncertainty. The systematic uncertainty on the power-law indices is ~0.1. Both uncertainties have been verified with cosmic-ray data. A comparison with previous results from TeV J2032+4130, MGRO J2031+41, and MGRO J2019+37 is also presented.
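The model comparison above (power law versus power law with exponential cutoff, selected by an F-test on nested fits) can be sketched on synthetic flux points. All numbers below are illustrative, not Milagro data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy import stats

# Power law and power law with exponential cutoff, normalized at 10 TeV.
def pl(e, n0, idx):
    return n0 * (e / 10.0) ** (-idx)

def pl_cut(e, n0, idx, ecut):
    return n0 * (e / 10.0) ** (-idx) * np.exp(-e / ecut)

# Synthetic spectrum generated WITH a cutoff, plus 10% Gaussian errors.
e = np.logspace(0, 2, 15)                        # 1-100 TeV
rng = np.random.default_rng(3)
flux = pl_cut(e, 7e-10, 2.0, 29.0)
sigma = 0.1 * flux
data = flux + rng.normal(0.0, sigma)

def chi2(model, popt):
    return np.sum(((data - model(e, *popt)) / sigma) ** 2)

p1, _ = curve_fit(pl, e, data, p0=(7e-10, 2.0), sigma=sigma)
p2, _ = curve_fit(pl_cut, e, data, p0=(7e-10, 2.0, 30.0), sigma=sigma)
c1, c2 = chi2(pl, p1), chi2(pl_cut, p2)

# F statistic for one extra parameter; a small p-value favours the cutoff.
dof2 = len(e) - 3
F = (c1 - c2) / (c2 / dof2)
p_value = 1.0 - stats.f.cdf(F, 1, dof2)
print(F, p_value)
```

Because the synthetic data carry a genuine cutoff, the chi-square drop is large and the F-test rejects the plain power law, mirroring the MGRO J2019+37 result.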
Non-additive non-interacting kinetic energy of rare gas dimers
NASA Astrophysics Data System (ADS)
Jiang, Kaili; Nafziger, Jonathan; Wasserman, Adam
2018-03-01
Approximations of the non-additive non-interacting kinetic energy (NAKE) as an explicit functional of the density are the basis of several electronic structure methods that provide improved computational efficiency over standard Kohn-Sham calculations. However, within most fragment-based formalisms, there is no unique exact NAKE, making it difficult to develop general, robust approximations for it. When adjustments are made to the embedding formalisms to guarantee uniqueness, approximate functionals may be more meaningfully compared to the exact unique NAKE. We use numerically accurate inversions to study the exact NAKE of several rare-gas dimers within partition density functional theory, a method that guarantees a unique exact NAKE. We find that the NAKE decreases nearly exponentially with atomic separation for the rare-gas dimers. We compute the logarithmic derivative of the NAKE with respect to the bond length for our numerically accurate inversions as well as for several approximate NAKE functionals. We show that standard approximate NAKE functionals do not reproduce the correct behavior for this logarithmic derivative and propose two new NAKE functionals that do. The first of these is based on a re-parametrization of a conjoint Perdew-Burke-Ernzerhof (PBE) functional. The second is a simple, physically motivated non-decomposable NAKE functional that matches the asymptotic decay constant without fitting.
A statistical study of decaying kink oscillations detected using SDO/AIA
NASA Astrophysics Data System (ADS)
Goddard, C. R.; Nisticò, G.; Nakariakov, V. M.; Zimovets, I. V.
2016-01-01
Context. Despite intensive studies of kink oscillations of coronal loops in the last decade, a large-scale statistically significant investigation of the oscillation parameters has not been made using data from the Solar Dynamics Observatory (SDO). Aims: We carry out a statistical study of kink oscillations using extreme ultraviolet imaging data from a previously compiled catalogue. Methods: We analysed 58 kink oscillation events observed by the Atmospheric Imaging Assembly (AIA) on board SDO during its first four years of operation (2010-2014). Parameters of the oscillations, including the initial apparent amplitude, period, length of the oscillating loop, and damping are studied for 120 individual loop oscillations. Results: Analysis of the initial loop displacement and oscillation amplitude leads to the conclusion that the initial loop displacement prescribes the initial amplitude of oscillation in general. The period is found to scale with the loop length, and a linear fit of the data cloud gives a kink speed of C_k = (1330 ± 50) km s^−1. The main body of the data corresponds to kink speeds in the range C_k = (800-3300) km s^−1. Measurements of 52 exponential damping times were made, and it was noted that at least 21 of the damping profiles may be better approximated by a combination of non-exponential and exponential profiles rather than a purely exponential damping envelope. There are nine additional cases where the profile appears to be purely non-exponential and no damping time was measured. A scaling of the exponential damping time with the period is found, following the previously established linear scaling between these two parameters.
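Measuring an exponential damping time like those catalogued above amounts to fitting a damped cosine to the loop displacement. A sketch on synthetic noise-free data follows; the amplitude, period, and damping time are hypothetical, though minute-scale periods and damping over a few periods are typical of such events:

```python
import numpy as np
from scipy.optimize import curve_fit

# Exponentially damped kink oscillation of the loop displacement:
#   x(t) = A * exp(-t/tau) * cos(2*pi*t/P + phi)
def damped_kink(t, a, tau, period, phi):
    return a * np.exp(-t / tau) * np.cos(2 * np.pi * t / period + phi)

t = np.linspace(0.0, 30.0, 600)              # minutes
x = damped_kink(t, 5.0, 9.0, 4.0, 0.3)       # displacement in Mm

popt, _ = curve_fit(damped_kink, t, x, p0=(4.0, 8.0, 4.2, 0.0))
print(popt)  # tau (popt[1]) and period (popt[2]) near 9.0 and 4.0
```

The study's scaling result corresponds to the fitted tau growing roughly linearly with the fitted period across events; non-exponential envelopes would show up as systematic residuals to this model.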
Growth and mortality of larval Myctophum affine (Myctophidae, Teleostei).
Namiki, C; Katsuragawa, M; Zani-Teixeira, M L
2015-04-01
The growth and mortality rates of Myctophum affine larvae were analysed based on samples collected during the austral summer and winter of 2002 from south-eastern Brazilian waters. The larvae ranged in size from 2·75 to 14·00 mm standard length (L_S). Daily increment counts from 82 sagittal otoliths showed that the age of M. affine ranged from 2 to 28 days. Three models were applied to estimate the growth rate: linear regression, exponential model and Laird-Gompertz model. The exponential model best fitted the data, and L_0 values from the exponential and Laird-Gompertz models were close to the smallest larva reported in the literature (c. 2·5 mm L_S). The average growth rate (0·33 mm day^−1) was intermediate among lanternfishes. The mortality rate (12%) during the larval period was below average compared with other marine fish species but similar to some epipelagic fishes that occur in the area. © 2015 The Fisheries Society of the British Isles.
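Two of the three growth models compared above can be sketched as follows; the age-length data are synthetic and the parameter values hypothetical, not the M. affine estimates:

```python
import numpy as np
from scipy.optimize import curve_fit

# Exponential growth: L(t) = L0 * exp(g*t).
def exp_growth(t, l0, g):
    return l0 * np.exp(g * t)

# Laird-Gompertz growth: L(t) = L0 * exp((A0/alpha) * (1 - exp(-alpha*t))),
# which saturates as t grows, unlike the exponential model.
def laird_gompertz(t, l0, a0, alpha):
    return l0 * np.exp(a0 / alpha * (1.0 - np.exp(-alpha * t)))

age = np.linspace(2.0, 28.0, 27)             # days, matching the observed range
length = exp_growth(age, 2.5, 0.06)          # mm, synthetic

p_exp, _ = curve_fit(exp_growth, age, length, p0=(2.0, 0.05))
print(p_exp)  # recovers approximately (2.5, 0.06)
```

Here the fitted l0 plays the role of size at age zero, which the study found close to the smallest larva reported in the literature (c. 2.5 mm).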
On the frequency dependence of the otoacoustic emission latency in hypoacoustic and normal ears
NASA Astrophysics Data System (ADS)
Sisto, R.; Moleti, A.
2002-01-01
Experimental measurements of the otoacoustic emission (OAE) latency of adult subjects have been obtained, as a function of frequency, by means of wavelet time-frequency analysis based on the iterative application of filter banks. The results are in agreement with previous OAE latency measurements by Tognola et al. [Hear. Res. 106, 112-122 (1997)], as regards both the latency values and the frequency dependence, and seem to be incompatible with the steep 1/f law that is predicted by scale-invariant full cochlear models. The latency-frequency relationship has been best fitted to a linear function of the cochlear physical distance, using the Greenwood map, and to an exponential function of the cochlear distance, for comparison with derived band ABR latency measurements. Two sets of ears [94 audiometrically normal and 42 impaired with high-frequency (f>3 kHz) hearing loss] have been separately analyzed. Significantly larger average latencies were found in the impaired ears in the mid-frequency range. Theoretical implications of these findings on the transmission of the traveling wave are discussed.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many cases and has a simple statistical form. Its characteristic feature is a constant hazard rate, and it is a special case of the Weibull family. In this paper we introduce the basic notions that constitute an exponential competing-risks model in reliability analysis using a Bayesian approach and present the associated analytic methods. The cases are limited to models with independent causes of failure, and a non-informative prior distribution is used in the analysis. The model description covers the likelihood function, followed by the posterior function and the point, interval, hazard-function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
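The two defining properties used above, the constant hazard rate and the crude failure probability under independent competing exponential risks, can be sketched directly; the rate values are hypothetical.

```python
import math

def reliability(t, lam):
    # Exponential survival function: R(t) = exp(-lam * t)
    return math.exp(-lam * t)

def hazard(t, lam):
    # h(t) = f(t) / R(t) = lam * exp(-lam t) / exp(-lam t): constant in t
    return lam * math.exp(-lam * t) / reliability(t, lam)

def crude_failure_prob(lam_j, lams):
    # With independent exponential risks, cause j is the first to fail
    # with probability lam_j / sum(lams)
    return lam_j / sum(lams)
```

The constant hazard is what makes the exponential model "memoryless": the failure rate is the same early and late in life.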
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
We continue our investigation of overcoming the Gibbs phenomenon, i.e., obtaining exponential accuracy at all points (including at the discontinuities themselves) from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. We show that if we are given the first N Gegenbauer expansion coefficients, based on the Gegenbauer polynomials C_k^μ(x) with the weight function (1 - x²)^(μ - 1/2) for any constant μ ≥ 0, of an L₁ function f(x), we can construct an exponentially convergent approximation to the point values of f(x) in any subinterval in which the function is analytic. The proof covers the cases of Chebyshev or Legendre partial sums, which are most common in applications.
Modeling and prediction of relaxation of polar order in high-activity nonlinear optical polymers
NASA Astrophysics Data System (ADS)
Guenthner, Andrew J.; Lindsay, Geoffrey A.; Wright, Michael E.; Fallis, Stephen; Ashley, Paul R.; Sanghadasa, Mohan
2007-09-01
Mach-Zehnder optical modulators were fabricated using the CLD and FTC chromophores in polymer-on-silicon optical waveguides. Up to 17 months of oven-ageing stability are reported for the poled polymer films. Modulators containing an FTC-polyimide had the best overall ageing performance. To model and extrapolate the ageing data, a relaxation correlation function attributed to A. K. Jonscher was compared to the well-established stretched exponential correlation function. Both models gave a good fit to the data. The Jonscher model predicted a slower relaxation rate in later years. Analysis showed that collecting data for a longer period relative to the relaxation time was more important for generating useful predictions than the precision with which individual model parameters could be estimated. Thus from a practical standpoint, time-temperature superposition must be assumed in order to generate meaningful predictions. For this purpose, Arrhenius-type expressions were found to relate the model time constants to the ageing temperatures.
Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.
Chen, C W; Chen, D Z
2001-11-01
Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under the condition of increasing monotonicity. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
Diffusive dynamics of nanoparticles in ultra-confined media
Jacob, Jack Deodato; Conrad, Jacinta; Krishnamoorti, Ramanan; ...
2015-08-10
Differential dynamic microscopy (DDM) was used to investigate the diffusive dynamics of nanoparticles of diameter 200-400 nm that were strongly confined in a periodic square array of cylindrical nanoposts. The minimum distance between posts was 1.3-5 times the diameter of the nanoparticles. The image structure functions obtained from the DDM analysis were isotropic and could be fit by a stretched exponential function. The relaxation time scaled diffusively across the range of wave vectors studied, and the corresponding scalar diffusivities decreased monotonically with increased confinement. The decrease in diffusivity could be described by models for hindered diffusion that accounted for steric restrictions and hydrodynamic interactions. The stretching exponent decreased linearly as the nanoparticles were increasingly confined by the posts. Altogether, these results are consistent with a picture in which strongly confined nanoparticles experience a heterogeneous spatial environment arising from hydrodynamics and volume exclusion on time scales comparable to cage escape, leading to multiple relaxation processes and Fickian but non-Gaussian diffusive dynamics.
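The stretched exponential relaxation used in the DDM analysis, g(t) = exp(-(t/τ)^β), can be linearised with a double logarithm, ln(-ln g) = β·ln t - β·ln τ, so both parameters fall out of a line fit. A sketch on noise-free synthetic data; τ and β below are hypothetical, not the paper's fit results.

```python
import numpy as np

# Stretched-exponential relaxation, g(t) = exp(-(t / tau)**beta)
tau, beta = 2.0, 0.8
t = np.linspace(0.1, 10.0, 200)
g = np.exp(-(t / tau) ** beta)

# Double-log linearisation: ln(-ln g) = beta * ln t - beta * ln tau
slope, intercept = np.polyfit(np.log(t), np.log(-np.log(g)), 1)
beta_hat = slope
tau_hat = np.exp(-intercept / slope)
```

With experimental (noisy) structure functions, a nonlinear fit is usually more robust than this linearisation, which amplifies noise where g is near 0 or 1.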
Modeling Pan Evaporation for Kuwait by Multiple Linear Regression
Almedeij, Jaber
2012-01-01
Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements with substantially continuous coverage over a period of 17 years between January 1993 and December 2009, which can be considered representative of the desert climate of the urban zone of the country. The multiple linear regression technique is used with a variable-selection procedure for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data by using power and exponential functions, respectively. The evaporation models suggested with the best variable combinations were shown to produce results that are in reasonable agreement with observed values. PMID:23226984
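The approach described, transforming the curvilinear predictors before an ordinary multiple linear regression, can be sketched as follows. The specific transforms (a power of temperature, an exponential of humidity), coefficients, and synthetic data are all assumptions for illustration, not the study's fitted relations.

```python
import numpy as np

# Multiple linear regression on transformed predictors.
rng = np.random.default_rng(1)
n = 500
temp = rng.uniform(15.0, 45.0, n)       # air temperature, deg C
rh = rng.uniform(10.0, 90.0, n)         # relative humidity, percent
wind = rng.uniform(0.0, 10.0, n)        # wind speed, m/s

# Power transform for temperature, exponential transform for humidity,
# so the model is linear in its coefficients.
X = np.column_stack([np.ones(n), temp ** 1.5, np.exp(-rh / 50.0), wind])
true_coef = np.array([1.0, 0.05, 3.0, 0.2])
evap = X @ true_coef + rng.normal(0.0, 0.05, n)   # synthetic pan evaporation

coef, *_ = np.linalg.lstsq(X, evap, rcond=None)
```

Variable selection would then compare such candidate design matrices by a criterion such as adjusted R².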
Distribution of lifetimes for coronal soft X-ray bright points
NASA Technical Reports Server (NTRS)
Golub, L.; Krieger, A. S.; Vaiana, G. S.
1976-01-01
The lifetime 'spectrum' of X-ray bright points (XBPs) is measured for a sample of 300 such features using soft X-ray images obtained with the S-054 X-ray spectrographic telescope aboard Skylab. 'Spectrum' here is defined as a function which gives the relative number of XBPs having a specific lifetime as a function of lifetime. The results indicate that a two-lifetime exponential can be fit to the decay curves of XBPs, that the spectrum is heavily weighted toward short lifetimes, and that the number of features lasting 20 to 30 hr or more is greater than expected. A short-lived component with an average lifetime of about 8 hr and a long-lived 1.5-day component are consistently found along with a few features lasting 50 hr or more. An examination of differences among the components shows that features lasting 2 days or less have a broad heliocentric-latitude distribution while nearly all the longer-lived features are observed within 30 deg of the solar equator.
Analytical and numerical solutions for mass diffusion in a composite cylindrical body
NASA Astrophysics Data System (ADS)
Kumar, A.
1980-12-01
The analytical and numerical solution techniques were investigated to study moisture diffusion problems in cylindrical bodies that are assumed to be composed of a finite number of layers of different materials. A generalized diffusion model for an n-layer cylindrical body with discontinuous moisture content at the interfaces was developed and the formal solutions were obtained. The model is intended to describe mass transfer rates of any composite body, such as an ear of corn, which can be assumed to consist of two different layers: the inner core represents the woody cob and the outer cylinder represents the kernel layer. Data describing the fully exposed drying characteristics of ear corn at high air velocity were obtained under different drying conditions. Ears of corn were modeled as homogeneous bodies since the composite model did not improve the fit substantially. A computer program using a multidimensional optimization technique showed that diffusivity was an exponential function of moisture content and an Arrhenius function of the temperature of the drying air.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.
Feng, Zhaoyan; Min, Xiangde; Margolis, Daniel J. A.; Duan, Caohui; Chen, Yuping; Sah, Vivek Kumar; Chaudhary, Nabin; Li, Basen; Ke, Zan; Zhang, Peipei; Wang, Liang
2017-01-01
Objectives To evaluate the diagnostic performance of different mathematical models and different b-value ranges of diffusion-weighted imaging (DWI) in peripheral zone prostate cancer (PZ PCa) detection. Methods Fifty-six patients with histologically proven PZ PCa who underwent DWI-magnetic resonance imaging (MRI) using 21 b-values (0–4500 s/mm2) were included. The mean signal intensities of the regions of interest (ROIs) placed in benign PZs and cancerous tissues on DWI images were fitted using mono-exponential, bi-exponential, stretched-exponential, and kurtosis models. The b-values were divided into four ranges: 0–1000, 0–2000, 0–3200, and 0–4500 s/mm2, grouped as A, B, C, and D, respectively. ADC,
NASA Astrophysics Data System (ADS)
He, Xiaozhou; Wang, Yin; Tong, Penger
2018-05-01
Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs), and one does not understand why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ε. The conditional PDF G(δT|ε) of δT under a constant ε is found to be of Gaussian form, and its variance σ_T² for different values of ε follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism for the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.
ERIC Educational Resources Information Center
Caglayan, Günhan
2014-01-01
This study investigates prospective secondary mathematics teachers' visual representations of polynomial and rational inequalities, and graphs of exponential and logarithmic functions with GeoGebra Dynamic Software. Five prospective teachers in a university in the United States participated in this research study, which was situated within a…
NASA Astrophysics Data System (ADS)
Song, Qiankun; Cao, Jinde
2007-05-01
A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional and employing homeomorphism theory, M-matrix theory and an elementary inequality involving parameters a ≥ 0, b_k ≥ 0, q_k > 0 and r > 1, a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential convergence rate is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.
New exponential stability criteria for stochastic BAM neural networks with impulses
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Samidurai, R.; Anthoni, S. M.
2010-10-01
In this paper, we study the global exponential stability of time-delayed stochastic bidirectional associative memory neural networks with impulses and Markovian jumping parameters. A generalized activation function is considered, and traditional assumptions on the boundedness, monotonicity and differentiability of activation functions are removed. We obtain a new set of sufficient conditions in terms of linear matrix inequalities, which ensures the global exponential stability of the unique equilibrium point for stochastic BAM neural networks with impulses. The Lyapunov function method with the Itô differential rule is employed for achieving the required result. Moreover, a numerical example is provided to show that the proposed result improves the allowable upper bound of delays over some existing results in the literature.
A Simulation To Model Exponential Growth.
ERIC Educational Resources Information Center
Appelbaum, Elizabeth Berman
2000-01-01
Describes a simulation using dice-tossing students in a population cluster to model the growth of cancer cells. This growth is recorded in a scatterplot and compared to an exponential function graph. (KHR)
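The classroom simulation summarized above can be sketched in code. The dice rules here are an assumption (divide on a roll of 5 or 6, so the expected population multiplies by 4/3 per round), not necessarily the article's exact procedure; the point is that per-capita chance events yield exponential growth on average.

```python
import random

def simulate(rounds, start=30, seed=7):
    # Each round, every "cell" rolls a die and divides on a 5 or 6.
    rng = random.Random(seed)
    n = start
    history = [n]
    for _ in range(rounds):
        births = sum(1 for _ in range(n) if rng.randint(1, 6) >= 5)
        n += births
        history.append(n)
    return history

history = simulate(10)
```

Plotting `history` against round number and overlaying 30·(4/3)^t reproduces the scatterplot-versus-exponential comparison the article describes.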
Optical study of HgCdTe infrared photodetectors using internal photoemission spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lao, Yan-Feng; Unil Perera, A. G., E-mail: uperera@gsu.edu; Wijewarnasuriya, Priyalal S.
2014-03-31
We report a study of internal photoemission spectroscopy (IPE) applied to an n-type Hg(1-x)Cd(x)Te/Hg(1-y)Cd(y)Te heterojunction. An exponential line shape of the absorption tail in HgCdTe is identified by IPE fittings of the near-threshold quantum yield spectra. The reduction of the quantum yield (at higher photon energy) below the fitted value is explained as a result of carrier-phonon scattering. In addition, the observed bias independence of the IPE threshold indicates a negligible electron barrier at the heterojunction interface.
Fitting ERGMs on big networks.
An, Weihua
2016-09-01
The exponential random graph model (ERGM) has become a valuable tool for modeling social networks. In particular, ERGM provides great flexibility to account for both covariate effects on tie formation and endogenous network formation processes. However, there are both conceptual and computational issues in fitting ERGMs on big networks. This paper describes a framework and a series of methods (based on existing algorithms) to address these issues. It also outlines the advantages and disadvantages of the methods and the conditions to which they are most applicable. Selected methods are illustrated through examples. Copyright © 2016 Elsevier Inc. All rights reserved.
Power function decay of hydraulic conductivity for a TOPMODEL-based infiltration routine
NASA Astrophysics Data System (ADS)
Wang, Jun; Endreny, Theodore A.; Hassett, James M.
2006-11-01
TOPMODEL rainfall-runoff hydrologic concepts are based on soil saturation processes, where soil controls on hydrograph recession have been represented by linear, exponential, and power function decay with soil depth. Although these decay formulations have been incorporated into baseflow decay and topographic index computations, only the linear and exponential forms have been incorporated into infiltration subroutines. This study develops a power function formulation of the Green and Ampt infiltration equation for the case where the power n = 1 and 2. This new function was created to represent field measurements in the New York City, USA, Ward Pound Ridge drinking water supply area, and provide support for similar sites reported by other researchers. Derivation of the power-function-based Green and Ampt model begins with the Green and Ampt formulation used by Beven in deriving an exponential decay model. Differences between the linear, exponential, and power function infiltration scenarios are sensitive to the relative difference between rainfall rates and hydraulic conductivity. Using a low-frequency 30 min design storm with 4.8 cm h⁻¹ rain, the n = 2 power function formulation allows for a faster decay of infiltration and more rapid generation of runoff. Infiltration excess runoff is rare in most forested watersheds, and advantages of the power function infiltration routine may primarily include replication of field-observed processes in urbanized areas and numerical consistency with power function decay of baseflow and topographic index distributions. Equation development is presented within a TOPMODEL-based Ward Pound Ridge rainfall-runoff simulation.
2008-08-01
the distribution of DNAPL. The OSU research team evaluated the use of radon as a partitioning groundwater tracer. The DNAPL release fulfilled one...close to the source area generated more PCE equivalent mass over time. The exponential decay from the fitted line (predicted PCE, orange line in each
Decomposition rates for hand-piled fuels
Clinton S. Wright; Alexander M. Evans; Joseph C. Restaino
2017-01-01
Hand-constructed piles in eastern Washington and north-central New Mexico were weighed periodically between October 2011 and June 2015 to develop decay-rate constants that are useful for estimating the rate of piled biomass loss over time. Decay-rate constants (k) were determined by fitting negative exponential curves to time series of pile weight for each site. Piles...
NASA Astrophysics Data System (ADS)
Kunz, Christopher Lee
The mid-rapidity charged kaon ratios and yields are reported for the 200 AGeV Au+Au, 130 AGeV Au+Au, and 200 GeV pp data sets. The K-/K+ ratios are shown to be flat as a function of rapidity, transverse momentum, and centrality for the ranges investigated. The integrated ratios are 0.928 +/- 0.0028 (stat.) +/- 0.03 (sys.), 0.953 +/- 0.0012 (stat.) +/- 0.01 (sys.), and 0.964 +/- 0.0039 (stat.) +/- 0.01 (sys.) for 130 AGeV Au+Au, 200 AGeV Au+Au, and 200 GeV pp respectively. Thermal fits are applied to the ratios to extract the baryo-chemical potential and chemical freeze-out temperature. The baryo-chemical potential, as well as the kaon ratio, suggest that the net-baryon density at mid-rapidity is approaching zero at RHIC energies. A quark coalescence model suggests quark degrees of freedom are important in the formation of the ratios. The corrected yields are fit with an exponential in transverse mass m_T, and the dN/dy and inverse slope parameter are extracted. The inverse slope parameter is used along with the average collective flow velocity in a simple relationship to extract the thermal freeze-out temperature. A more sophisticated hydrodynamically motivated fit, using pion, kaon, and proton data, shows agreement with the trend from this simple relationship.
Fourier Transforms of Pulses Containing Exponential Leading and Trailing Profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warshaw, S I
2001-07-15
In this monograph we discuss a class of pulse shapes that have exponential rise and fall profiles, and evaluate their Fourier transforms. Such pulses can be used as models for time-varying processes that produce an initial exponential rise and end with the exponential decay of a specified physical quantity. Unipolar examples of such processes include the voltage record of an increasingly rapid charge followed by a damped discharge of a capacitor bank, and the amplitude of an electromagnetic pulse produced by a nuclear explosion. Bipolar examples include acoustic N waves propagating for long distances in the atmosphere that have resulted from explosions in the air, and sonic booms generated by supersonic aircraft. These bipolar pulses have leading and trailing edges that appear to be exponential in character. To the author's knowledge the Fourier transforms of such pulses are not generally well-known or tabulated in Fourier transform compendia, and it is the purpose of this monograph to derive and present these transforms. These Fourier transforms are related to a definite integral of a ratio of exponential functions, whose evaluation we carry out in considerable detail. From this result we derive the Fourier transforms of other related functions. In all Figures showing plots of calculated curves, the actual numbers used for the function parameter values and dependent variables are arbitrary and non-dimensional, and are not identified with any particular physical phenomenon or model.
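For the simplest unipolar case, a pulse rising as e^(at) for t < 0 and decaying as e^(-bt) for t > 0, the Fourier transform (with the e^(-iωt) convention) has the closed form 1/(a - iω) + 1/(b + iω). The parameter values below are arbitrary, in the spirit of the monograph's figures; the sketch checks the closed form against numerical quadrature at one frequency.

```python
import numpy as np

# Two-sided pulse: exponential rise (rate a, t < 0), decay (rate b, t > 0)
a, b = 2.0, 1.0
t = np.linspace(-30.0, 30.0, 200001)
pulse = np.where(t < 0, np.exp(a * t), np.exp(-b * t))

w = 0.7                                  # one arbitrary angular frequency
integrand = pulse * np.exp(-1j * w * t)
dt = t[1] - t[0]
# Trapezoidal rule over a window wide enough that the tails are negligible
numeric = dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
analytic = 1.0 / (a - 1j * w) + 1.0 / (b + 1j * w)
```

Each term comes from integrating one exponential half-line, which is why the transform is a sum of two simple poles in ω.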
Beelders, Theresa; de Beer, Dalene; Kidd, Martin; Joubert, Elizabeth
2018-01-01
Mangiferin, a C-glucosyl xanthone, abundant in mango and honeybush, is increasingly targeted for its bioactive properties and thus to enhance functional properties of food. The thermal degradation kinetics of mangiferin at pH 3, 4, 5, 6 and 7 were each modeled at five temperatures ranging between 60 and 140°C. First-order reaction models were fitted to the data using non-linear regression to determine the reaction rate constant at each pH-temperature combination. The reaction rate constant increased with increasing temperature and pH. Comparison of the reaction rate constants at 100°C revealed an exponential relationship between the reaction rate constant and pH. The data for each pH were also modeled with the Arrhenius equation using non-linear and linear regression to determine the activation energy and pre-exponential factor. Activation energies decreased slightly with increasing pH. Finally, a multi-linear model taking into account both temperature and pH was developed for mangiferin degradation. Sterilization (121°C for 4 min) of honeybush extracts dissolved at pH 4, 5 and 7 did not cause noticeable degradation of mangiferin, although the multi-linear model predicted 34% degradation at pH 7. The extract matrix is postulated to exert a protective effect as changes in potential precursor content could not fully explain the stability of mangiferin. Copyright © 2017 Elsevier Ltd. All rights reserved.
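The linear-regression route to the Arrhenius parameters mentioned above rests on k = A·exp(-Ea/(R·T)), so ln k is linear in 1/T. A sketch with hypothetical rate constants (not the mangiferin values) over the abstract's 60-140°C range:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Arrhenius: k = A * exp(-Ea / (R * T)); a line fit of ln k vs 1/T
# yields the activation energy Ea and pre-exponential factor A.
Ea_true, A_true = 80.0e3, 1.0e9                          # J/mol, 1/s (assumed)
T = np.array([333.15, 353.15, 373.15, 393.15, 413.15])   # 60-140 deg C, in K
k = A_true * np.exp(-Ea_true / (R * T))

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_hat = -slope * R
A_hat = np.exp(intercept)
```

With real data the rate constants would first come from first-order fits of concentration versus time at each pH-temperature combination.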
Modeling the expenditure and reconstitution of work capacity above critical power.
Skiba, Philip Friere; Chidnok, Weerapong; Vanhatalo, Anni; Jones, Andrew M
2012-08-01
The critical power (CP) model includes two constants: the CP and the W' [P = (W' / t) + CP]. The W' is the finite work capacity available above CP. Power output above CP results in depletion of the W'; complete depletion of the W' results in exhaustion. Monitoring the W' may be valuable to athletes during training and competition. Our purpose was to develop a function describing the dynamic state of the W' during intermittent exercise. After determination of V̇O2max, CP, and W', seven subjects completed four separate exercise tests on a cycle ergometer on different days. Each protocol comprised a set of intervals: 60 s at a severe power output, followed by 30-s recovery at a lower prescribed power output. The intervals were repeated until exhaustion. These data were entered into a continuous equation predicting the balance of W' remaining, assuming exponential reconstitution of the W'. The time constant was varied by an iterative process until the remaining modeled W' = 0 at the point of exhaustion. The time constants of W' recharge were negatively correlated with the difference between sub-CP recovery power and CP. The relationship was best fit by an exponential (r = 0.77). The model-predicted W' balance correlated with the temporal course of the rise in V̇O2 (r = 0.82-0.96). The model accurately predicted exhaustion of the W' in a competitive cyclist during a road race. We have developed a function to track the dynamic state of the W' during intermittent exercise. This may have important implications for the planning and real-time monitoring of athletic performance.
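The idea of depleting W' above CP and reconstituting it exponentially below CP can be sketched in a simplified discrete form. This is an assumed stepwise scheme, not the paper's exact continuous equation, and CP, W', and τ below are hypothetical.

```python
import math

def wprime_balance(power, cp, wprime, tau, dt=1.0):
    # Linear depletion above CP; exponential recovery of the deficit
    # below CP with time constant tau (seconds).
    bal = wprime
    trace = []
    for p in power:
        if p > cp:
            bal -= (p - cp) * dt                 # deplete joule for joule
        else:
            deficit = wprime - bal
            bal = wprime - deficit * math.exp(-dt / tau)
        trace.append(bal)
    return trace

# 60 s at 350 W then 30 s at 150 W, with assumed CP = 250 W, W' = 20 kJ
trace = wprime_balance([350] * 60 + [150] * 30, cp=250, wprime=20000, tau=300)
```

Exhaustion would be flagged when the tracked balance reaches zero; in the paper, τ itself depends on how far the recovery power sits below CP.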
Dao, Hoang Lan; Aljunid, Syed Abdullah; Maslennikov, Gleb; Kurtsiefer, Christian
2012-08-01
We report on a simple method to prepare optical pulses with an exponentially rising envelope on the time scale of a few ns. The scheme is based on the exponential transfer function of a fast transistor, which generates an exponentially rising envelope that is transferred first onto a radio-frequency carrier and then onto a coherent cw laser beam with an electro-optical phase modulator. The temporally shaped sideband is then extracted with an optical resonator and can be used to efficiently excite a single 87Rb atom.
Wong, Koon-Pong; Zhang, Xiaoli; Huang, Sung-Cheng
2013-01-01
Purpose Accurate determination of the plasma input function (IF) is essential for absolute quantification of physiological parameters in positron emission tomography (PET). However, it requires an invasive and tedious procedure of arterial blood sampling that is challenging in mice because of the limited blood volume. In this study, a hybrid modeling approach is proposed to estimate the plasma IF of 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) in mice using accumulated radioactivity in the urinary bladder together with a single late-time blood sample measurement. Methods Dynamic PET scans were performed on nine isoflurane-anesthetized male C57BL/6 mice after a bolus injection of [18F]FDG at the lateral caudal vein. During a 60- or 90-min scan, serial blood samples were taken from the femoral artery. Image data were reconstructed using filtered backprojection with CT-based attenuation correction. Total accumulated radioactivity in the urinary bladder was fitted to a renal compartmental model with the last blood sample and a 1-exponential function that described the [18F]FDG clearance in blood. Multiple late-time blood sample estimates were calculated by the blood [18F]FDG clearance equation. A sum of 4-exponentials was assumed for the plasma IF that served as a forcing function to all tissues. The estimated plasma IF was obtained by simultaneously fitting the [18F]FDG model to the time-activity curves (TACs) of liver and muscle and the forcing function to early (0–1 min) left-ventricle data (corrected for delay, dispersion, partial-volume effects and erythrocyte uptake) and the late-time blood estimates. Using only the blood sample acquired at the end of the study to estimate the IF and the use of liver TAC as an alternative IF were also investigated. Results The area under the plasma TACs calculated for all studies using the hybrid approach was not significantly different from that using all blood samples.
[18F]FDG uptake constants in brain, myocardium, skeletal muscle and liver computed by the Patlak analysis using estimated and measured plasma TACs were in excellent agreement (slope ~ 1; R2 > 0.938). The IF estimated using only the last blood sample acquired at the end of the study and the use of liver TAC as plasma IF provided less reliable results. Conclusions The estimated plasma IFs obtained with the hybrid model agreed well with those derived from arterial blood sampling. Importantly, the proposed method obviates the need of arterial catheterization, making it possible to perform repeated dynamic [18F]FDG PET studies on the same animal. Liver TAC is unsuitable as an input function for absolute quantification of [18F]FDG PET data. PMID:23322346
Age dependency of base modification in rabbit liver DNA
NASA Technical Reports Server (NTRS)
Yamamoto, O.; Fuji, I.; Yoshida, T.; Cox, A. B.; Lett, J. T.
1988-01-01
Age-related modifications of DNA bases have been observed in the liver of the New Zealand white (NZW) rabbit (Oryctolagus cuniculus), a lagomorph with a median life span in captivity of 5-7 yr. The ages of the animals studied ranged from 6 wk to 9 yr. After the DNA had been extracted from the liver cell nuclei and hydrolyzed with acid, the bases were analyzed by column chromatography with Cellulofine gels (GC-15-m). Two peaks in the chromatogram, which eluted before the four DNA bases, contained modified bases. Those materials, which were obtained in relatively large amounts from old animals, were highly fluorescent, and were shown to be crosslinked base products by mass spectrometry. The yield of crosslinked products versus rabbit age (greater than 0.5 yr) can be fitted by an exponential function (correlation coefficient: 0.76 +/- 0.09).
Zeng, Fan-Gang
2004-01-01
More than 60,000 people worldwide use cochlear implants as a means to restore functional hearing. Although individual performance variability is still high, an average implant user can talk on the phone in a quiet environment. Cochlear-implant research has also matured as a field, as evidenced by the exponential growth in both the patient population and scientific publication. The present report examines current issues related to audiologic, clinical, engineering, anatomic, and physiologic aspects of cochlear implants, focusing on their psychophysical, speech, music, and cognitive performance. This report also forecasts clinical and research trends related to presurgical evaluation, fitting protocols, signal processing, and postsurgical rehabilitation in cochlear implants. Finally, a future landscape in amplification is presented that requires a unique, yet complementary, contribution from hearing aids, middle ear implants, and cochlear implants to achieve a total solution to the entire spectrum of hearing loss treatment and management.
Dielectric and AC conductivity studies on SrBi4Ti4O15
NASA Astrophysics Data System (ADS)
Jose, Roshan; Saravanan, K. Venkata
2018-05-01
The four-layered SrBi4Ti4O15 ceramic, which belongs to the Aurivillius family of layered oxides, was prepared by the conventional solid-state reaction technique. Analysis of the dielectric data as a function of temperature and frequency revealed a normal phase transition. The frequency-dependent ac conductivity follows Jonscher's universal power law. The frequency exponent (n), pre-exponential factor (A), bulk dc conductivity (σdc), and hopping frequency (ωp) were determined from the fitted curves. The variation of the frequency exponent with temperature indicates a large-polaron hopping mechanism up to the Curie temperature, changing to small-polaron hopping above it. The activation energies were calculated from the ac conductivity, bulk dc conductivity and hopping frequency, and reveal that the conductivity has contributions from the migration of oxygen vacancies, bismuth ion vacancies and strontium ion vacancies.
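Jonscher's universal power law is σ(ω) = σ_dc + A·ω^n, so after subtracting the dc plateau a log-log line fit recovers the exponent n and prefactor A. A sketch on synthetic data; the parameter values are hypothetical, not the SrBi4Ti4O15 fit results.

```python
import numpy as np

# Jonscher's universal power law: sigma(omega) = sigma_dc + A * omega**n
sigma_dc, A, n = 1.0e-6, 1.0e-9, 0.7
omega = np.logspace(3.0, 7.0, 50)                 # angular frequency, rad/s
sigma = sigma_dc + A * omega ** n

# Subtract the dc plateau, then fit a line in log-log space
slope, intercept = np.polyfit(np.log(omega), np.log(sigma - sigma_dc), 1)
n_hat, A_hat = slope, np.exp(intercept)
```

In practice σ_dc is itself a fit parameter (or read off the low-frequency plateau), and tracking n_hat versus temperature is what distinguishes the hopping regimes discussed above.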
Milky Way Mass Models and MOND
NASA Astrophysics Data System (ADS)
McGaugh, Stacy S.
2008-08-01
Using the Tuorla-Heidelberg model for the mass distribution of the Milky Way, I determine the rotation curve predicted by MOND (modified Newtonian dynamics). The result is in good agreement with the observed terminal velocities interior to the solar radius and with estimates of the Galaxy's rotation curve exterior thereto. There are no fit parameters: given the mass distribution, MOND provides a good match to the rotation curve. The Tuorla-Heidelberg model does allow for a variety of exponential scale lengths; MOND prefers short scale lengths in the range 2.0 kpc ≲ R_d ≲ 2.5 kpc. The favored value of R_d depends somewhat on the choice of interpolation function. There is some preference for the "simple" interpolation function as found by Famaey & Binney. I introduce an interpolation function that shares the advantages of the simple function on galaxy scales while having a much smaller impact in the solar system. I also solve the inverse problem, inferring the surface mass density distribution of the Milky Way from the terminal velocities. The result is a Galaxy with "bumps and wiggles" in both its luminosity profile and rotation curve that are reminiscent of those frequently observed in external galaxies.
Demers, Hendrix; Ramachandra, Ranjan; Drouin, Dominique; de Jonge, Niels
2012-01-01
Lateral profiles of the electron probe of scanning transmission electron microscopy (STEM) were simulated at different vertical positions in a micrometers-thick carbon sample. The simulations were carried out using the Monte Carlo method in the CASINO software. A model was developed to fit the probe profiles. The model consisted of the sum of a Gaussian function describing the central peak of the profile, and two exponential decay functions describing the tail of the profile. Calculations were performed to investigate the fraction of unscattered electrons as a function of the vertical position of the probe in the sample. Line scans were also simulated over gold nanoparticles at the bottom of a carbon film to calculate the achievable resolution as a function of the sample thickness and the number of electrons. The resolution was shown to be noise-limited for film thicknesses less than 1 μm. Probe broadening limited the resolution for thicker films. The validity of the simulation method was verified by comparing simulated data with experimental data. The simulation method can be used as a quantitative method to predict STEM performance or to interpret STEM images of thick specimens. PMID:22564444
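The fitted model described (a central Gaussian plus two exponential-decay tails) can be sketched as follows; the widths, amplitudes and radial range are hypothetical, not taken from the simulations.

```python
import numpy as np
from scipy.optimize import curve_fit

def probe_profile(r, a, sigma, b1, l1, b2, l2):
    """Central Gaussian peak plus two exponential tails, as in the probe model."""
    return (a * np.exp(-r**2 / (2.0 * sigma**2))
            + b1 * np.exp(-np.abs(r) / l1)
            + b2 * np.exp(-np.abs(r) / l2))

r = np.linspace(-50.0, 50.0, 401)                     # radial position, nm
y = probe_profile(r, 1.0, 1.5, 0.3, 5.0, 0.05, 20.0)  # synthetic profile
popt, _ = curve_fit(probe_profile, r, y, p0=(0.8, 1.0, 0.2, 4.0, 0.1, 15.0))
```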
Exponentially accurate approximations to piece-wise smooth periodic functions
NASA Technical Reports Server (NTRS)
Greer, James; Banerjee, Saheb
1995-01-01
A family of simple, periodic basis functions with 'built-in' discontinuities is introduced, and their properties are analyzed and discussed. Their potential usefulness is illustrated in conjunction with the Fourier series representations of functions with discontinuities. In particular, it is demonstrated how they can be used to construct a sequence of approximations which converges exponentially in the maximum norm to a piece-wise smooth function. The theory is illustrated with several examples, and the results are discussed in the context of other sequences of functions that can be used to approximate discontinuous functions.
Connock, Martin; Hyde, Chris; Moore, David
2011-10-01
The UK National Institute for Health and Clinical Excellence (NICE) has used its Single Technology Appraisal (STA) programme to assess several drugs for cancer. Typically, the evidence submitted by the manufacturer comes from one short-term randomized controlled trial (RCT) demonstrating improvement in overall survival and/or in delay of disease progression, and these are the pre-eminent drivers of cost effectiveness. We draw attention to key issues encountered in assessing the quality and rigour of the manufacturers' modelling of overall survival and disease progression. Our examples are two recent STAs: sorafenib (Nexavar®) for advanced hepatocellular carcinoma, and azacitidine (Vidaza®) for higher-risk myelodysplastic syndromes (MDS). The choice of parametric model had a large effect on the predicted treatment-dependent survival gain. Logarithmic models (log-Normal and log-logistic) delivered double the survival advantage that was derived from Weibull models. Both submissions selected the logarithmic fits for their base-case economic analyses and justified selection solely on Akaike Information Criterion (AIC) scores. AIC scores in the azacitidine submission failed to match the choice of the log-logistic over Weibull or exponential models, and the modelled survival in the intervention arm lacked face validity. AIC scores for sorafenib models favoured log-Normal fits; however, since there is no statistical method for comparing AIC scores, and differences may be trivial, it is generally advised that the plausibility of competing models should be tested against external data and explored in diagnostic plots. Function fitting to observed data should not be a mechanical process validated by a single crude indicator (AIC). Projective models should show clear plausibility for the patients concerned and should be consistent with other published information. Multiple rather than single parametric functions should be explored and tested with diagnostic plots. 
When trials have survival curves with long tails exhibiting few events, the robustness of extrapolations that use the information in such tails should be tested.
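The model-selection issue raised above can be illustrated by fitting exponential and Weibull distributions to the same (synthetic, uncensored) survival times and comparing AIC; real submissions involve censored data and more candidate models, so this is only a sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t = rng.weibull(1.5, size=500) * 12.0     # synthetic survival times, months

def aic(loglik, n_params):
    return 2.0 * n_params - 2.0 * loglik

# exponential fit: the MLE scale is the sample mean
ll_exp = np.sum(stats.expon.logpdf(t, scale=t.mean()))

# Weibull fit with the location fixed at zero
c, loc, scale = stats.weibull_min.fit(t, floc=0)
ll_wei = np.sum(stats.weibull_min.logpdf(t, c, loc, scale))

aic_exp, aic_wei = aic(ll_exp, 1), aic(ll_wei, 2)
```

Here AIC correctly prefers the Weibull, but as the abstract stresses, AIC differences alone should not settle the choice; plausibility against external data matters.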
Possible stretched exponential parametrization for humidity absorption in polymers.
Hacinliyan, A; Skarlatos, Y; Sahin, G; Atak, K; Aybar, O O
2009-04-01
Polymer thin films have irregular transient current characteristics under constant voltage. In hydrophilic and hydrophobic polymers, the irregularity is also known to depend on the humidity absorbed by the polymer sample. Different stretched exponential models are studied and it is shown that the absorption of humidity as a function of time can be adequately modelled by a class of these stretched exponential absorption models.
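One member of the class of stretched-exponential absorption models referred to can be fitted as below; the exact functional form and all parameter values are hypothetical illustrations.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_absorption(t, A, tau, beta):
    """Stretched-exponential uptake: A*(1 - exp(-(t/tau)**beta))."""
    return A * (1.0 - np.exp(-((t / tau) ** beta)))

t = np.linspace(0.1, 100.0, 200)                 # time, hours (illustrative)
y = stretched_absorption(t, 2.0, 15.0, 0.6)      # absorbed humidity, a.u.
popt, _ = curve_fit(stretched_absorption, t, y, p0=(1.0, 10.0, 0.5))
```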
NASA Astrophysics Data System (ADS)
D'Onofrio, M.
2001-10-01
In this paper we analyse the results of the two-dimensional (2D) fit of the light distribution of 73 early-type galaxies belonging to the Virgo and Fornax clusters, a sample volume- and magnitude-limited down to M_B = -17.3, and highly homogeneous. In our previous paper (Paper I) we have presented the adopted 2D models of the surface-brightness distribution - namely the r^(1/n) and (r^(1/n) + exp) models - we have discussed the main sources of error affecting the structural parameters, and we have tested the ability of the chosen minimization algorithm (MINUIT) in determining the fitting parameters using a sample of artificial galaxies. We show that, with the exception of 11 low-luminosity E galaxies, the best fit of the real galaxy sample is always achieved with the two-component (r^(1/n) + exp) model. The improvement in the χ² due to the addition of the exponential component is found to be statistically significant. The best fit is obtained with the exponent n of the generalized r^(1/n) Sersic law different from the classical de Vaucouleurs value of 4. Nearly 42 per cent of the sample have n < 2, suggesting the presence of exponential 'bulges' also in early-type galaxies. 20 luminous E galaxies are fitted by the two-component model, with a small central exponential structure ('disc') and an outer big spheroid with n > 4. We believe that this is probably due to their resolved core. The resulting scalelengths R_h and R_e of each component peak approximately at ~1 and ~2 kpc, respectively, although with different variances in their distributions. The ratio R_e/R_h peaks at ~0.5, a value typical for normal lenticular galaxies. The first component, represented by the r^(1/n) law, is probably made of two distinct families, 'ordinary' and 'bright', on the basis of their distribution in the μ_e-log(R_e) plane, a result already suggested by Capaccioli, Caon and D'Onofrio.
The bulges of spirals and S0 galaxies belong to the 'ordinary' family, while the large spheroids of luminous E galaxies form the 'bright' family. The second component, represented by the exponential law, also shows a wide distribution in the μ_0c-log(R_h) plane. Small discs (or cores) have short scalelengths and high central surface brightness, while normal lenticulars and spiral galaxies generally have scalelengths larger than 0.5 kpc and central surface brightness brighter than 20 mag arcsec^-2 (in the B band). The scalelengths R_e and R_h of the 'bulge' and 'disc' components are probably correlated, indicating that a self-regulating mechanism of galaxy formation may be at work. Alternatively, two regions of the R_e-R_h plane are avoided by galaxies due to dynamical instability effects. The bulge-to-disc (B/D) ratio seems to vary uniformly along the Hubble sequence, going from late-type spirals to E galaxies. At the end of the sequence the ratio between the large spheroidal component and the small inner core can reach B/D ~ 100.
Aggarwal, Ankush
2017-08-01
Motivated by the well-known result that the stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that, as a consequence of the exponential function, there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space which significantly improve the convergence of parameter estimation and robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minimum. Based upon this new insight, we also propose a transformed parameter space that allows for rational parameter comparison and avoids misleading conclusions regarding soft tissue mechanics.
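The high-covariance lines in parameter space can be demonstrated with a common exponential stress law S = c·(exp(b·E) - 1), an illustrative form not necessarily the one analyzed in the paper: two widely separated (c, b) pairs give nearly the same response over a physiological strain range.

```python
import numpy as np

def stress(E, c, b):
    """Exponential soft-tissue stress law (illustrative): S = c*(exp(b*E) - 1)."""
    return c * (np.exp(b * E) - 1.0)

E = np.linspace(0.0, 0.1, 50)          # strain range typical of soft tissue
s1 = stress(E, 1.0, 30.0)
s2 = stress(E, 0.5943, 35.0)           # b differs by ~17%, c by ~40%

# the two curves are nearly indistinguishable: both parameter sets lie
# on the same high-covariance line in (c, b) space
rel_diff = np.max(np.abs(s1 - s2)) / np.max(s1)
```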
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Lan; Liu Chao; Zhao Gang
To constrain the Galactic gravitational potential near the Sun (~1.5 kpc), we derive and model the spatial and velocity distributions for a sample of 9000 K-dwarfs with spectra from SDSS/SEGUE, which yield radial velocities and abundances ([Fe/H] and [α/Fe]). We first derive the spatial density distribution for three abundance-selected sub-populations of stars accounting for the survey's selection function. The vertical profiles of these sub-populations are simple exponentials and their vertical dispersion profile is nearly isothermal. To model these data, we apply the 'vertical' Jeans equation, which relates the observable tracer number density and vertical velocity dispersion to the gravitational potential or vertical force. We explore a number of functional forms for the vertical force law, fit the dispersion and density profiles of all abundance-selected sub-populations simultaneously in the same potential, and explore all parameter co-variances using a Markov Chain Monte Carlo technique. Our fits constrain the disk mass scale height to ≲ 300 pc and the total surface mass density to be 67 ± 6 M_Sun pc^-2 at |z| = 1.0 kpc, of which the contribution from all stars is 42 ± 5 M_Sun pc^-2 (assuming a contribution from cold gas of 13 M_Sun pc^-2). We find significant constraints on the local dark matter density of 0.0065 ± 0.0023 M_Sun pc^-3 (0.25 ± 0.09 GeV cm^-3). Together with recent experiments this firms up the best estimate of 0.0075 ± 0.0021 M_Sun pc^-3 (0.28 ± 0.08 GeV cm^-3), consistent with global fits of approximately round dark matter halos to kinematic data in the outskirts of the Galaxy.
de Melo, C M R; Packer, I U; Costa, C N; Machado, P F
2007-03-01
Covariance components for test-day milk yield using 263 390 first-lactation records of 32 448 Holstein cows were estimated with random regression animal models by restricted maximum likelihood. Three functions were used to adjust the lactation curve: the five-parameter logarithmic Ali and Schaeffer function (AS), the three-parameter exponential Wilmink function in its standard form (W) and in a modified form (W*), obtained by reducing the range of the covariate, and the combination of a Legendre polynomial and W (LEG+W). Heterogeneous residual variance (RV) for different classes (4 and 29) of days in milk was considered in adjusting the functions. Estimates of RV were quite similar, ranging from 4.15 to 5.29 kg². Heritability estimates for AS (0.29 to 0.42), LEG+W (0.28 to 0.42) and W* (0.33 to 0.40) were similar, but heritability estimates obtained with W (0.25 to 0.65) were higher than those estimated by the other functions, particularly at the end of lactation. Genetic correlations between milk yield on consecutive test days were close to unity, but decreased as the interval between test days increased. The AS function with the homogeneous RV model had the best fit among those evaluated.
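The standard Wilmink form mentioned above is a + b·exp(-K·t) + c·t with the decay rate K usually held fixed; K = 0.05 and the yields below are assumed for illustration, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

K = 0.05  # fixed decay rate (assumed value; commonly held constant)

def wilmink(t, a, b, c):
    """Three-parameter Wilmink lactation curve: y(t) = a + b*exp(-K*t) + c*t."""
    return a + b * np.exp(-K * t) + c * t

dim = np.arange(5.0, 306.0, 10.0)            # days in milk at test days
y = wilmink(dim, 30.0, -12.0, -0.04)         # illustrative test-day yields, kg
popt, _ = curve_fit(wilmink, dim, y, p0=(25.0, -10.0, 0.0))
```

With K fixed the model is linear in (a, b, c), so the fit is exact on noiseless data.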
NASA Technical Reports Server (NTRS)
Revenaugh, Justin; Parsons, Barry
1987-01-01
Adopting the formalism of Parsons and Daly (1983), analytical integral equations (Green's function integrals) are derived which relate gravity anomalies and dynamic boundary topography with temperature as a function of wavenumber for a fluid layer whose viscosity varies exponentially with depth. In the earth, such a viscosity profile may be found in the asthenosphere, where the large thermal gradient leads to exponential decrease of viscosity with depth, the effects of a pressure increase being small in comparison. It is shown that, when viscosity varies rapidly, topography kernels for both the surface and bottom boundaries (and hence the gravity kernel) are strongly affected at all wavelengths.
NASA Technical Reports Server (NTRS)
Cogley, A. C.; Borucki, W. J.
1976-01-01
When incorporating formulations of instantaneous solar heating or photolytic rates as functions of altitude and sun angle into long range forecasting models, it may be desirable to replace the time integrals by daily average rates that are simple functions of latitude and season. This replacement is accomplished by approximating the integral over the solar day by a pure exponential. This gives a daily average rate as a multiplication factor times the instantaneous rate evaluated at an appropriate sun angle. The accuracy of the exponential approximation is investigated by a sample calculation using an instantaneous ozone heating formulation available in the literature.
Zheng, Lai; Ismail, Karim
2017-05-01
Traffic conflict indicators measure the temporal and spatial proximity of conflict-involved road users. These indicators can reflect the severity of traffic conflicts to a reliable extent. Instead of using the indicator value directly as a severity index, many link functions have been developed to map a conflict indicator to a severity index. However, little information is available about the choice of a particular link function. To guard against link misspecification or subjectivity, a generalized exponential link function was developed. The severity index generated by this link was introduced into a parametric safety continuum model that objectively models the centre and tail regions. An empirical method, together with a full Bayesian estimation method, was adopted to estimate model parameters. The safety implication of the return level was calculated based on the model parameters. The proposed approach was applied to conflict and crash data collected from 21 segments of three freeways located in Guangdong province, China. A Pearson's correlation test between return levels and observed crashes showed that a θ value of 1.2 was the best choice of the generalized parameter for the current data set. This provides statistical support for using the generalized exponential link function. With the determined generalized exponential link function, the visualization of the parametric safety continuum was found to be a gyroscope-shaped hierarchy.
Nasserie, Tahmina; Tuite, Ashleigh R; Whitmore, Lindsay; Hatchette, Todd; Drews, Steven J; Peci, Adriana; Kwong, Jeffrey C; Friedman, Dara; Garber, Gary; Gubbay, Jonathan
2017-01-01
Background: Seasonal influenza epidemics occur frequently. Rapid characterization of seasonal dynamics and forecasting of epidemic peaks and final sizes could help support real-time decision-making related to vaccination and other control measures. Real-time forecasting remains challenging. Methods: We used the previously described "incidence decay with exponential adjustment" (IDEA) model, a 2-parameter phenomenological model, to evaluate the characteristics of the 2015–2016 influenza season in 4 Canadian jurisdictions: the Provinces of Alberta, Nova Scotia and Ontario, and the City of Ottawa. Model fits were updated weekly with receipt of incident virologically confirmed case counts. Best-fit models were used to project seasonal influenza peaks and epidemic final sizes. Results: The 2015–2016 influenza season was mild and late-peaking. Parameter estimates generated through fitting were consistent in the 2 largest jurisdictions (Ontario and Alberta) and with pooled data including Nova Scotia counts (R0 approximately 1.4 for all fits). Lower R0 estimates were generated in Nova Scotia and Ottawa. Final size projections that made use of complete time series were accurate to within 6% of true final sizes, but final size was underestimated when projections used only pre-peak data. Projections of epidemic peaks stabilized before the true epidemic peak, but these were persistently early (~2 weeks) relative to the true peak. Conclusions: A simple, 2-parameter influenza model provided reasonably accurate real-time projections of influenza seasonal dynamics in an atypically late, mild influenza season. Challenges are similar to those seen with more complex forecasting methodologies. Future work includes identification of seasonal characteristics associated with variability in model performance. PMID:29497629
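The IDEA model is usually written I(t) = (R0/(1+d)^t)^t with t counted in epidemic generations; the weekly fitting step can be sketched as below (parameter values and case counts are illustrative, not the Canadian data).

```python
import numpy as np
from scipy.optimize import curve_fit

def idea(t, r0, d):
    """IDEA model: incidence at generation t is (R0 / (1+d)**t)**t."""
    return (r0 / (1.0 + d) ** t) ** t

gen = np.arange(1.0, 15.0)                 # epidemic generations observed so far
cases = idea(gen, 1.4, 0.01)               # illustrative incident case counts
popt, _ = curve_fit(idea, gen, cases, p0=(1.2, 0.005))
r0_fit, d_fit = popt
```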
On the Existence of Step-To-Step Breakpoint Transitions in Accelerated Sprinting
McGhie, David; Danielsen, Jørgen; Sandbakk, Øyvind; Haugen, Thomas
2016-01-01
Accelerated running is characterised by a continuous change of kinematics from one step to the next. It has been argued that breakpoints in the step-to-step transitions may occur, and that these breakpoints are an essential characteristic of the dynamics of accelerated running. We examined this notion by comparing a continuous exponential curve fit (indicating continuity, i.e., smooth transitions) with linear piecewise fitting (indicating a breakpoint). We recorded the kinematics of 24 well-trained sprinters during a 25 m sprint run starting from competition starting blocks. Kinematic data were collected for 24 anatomical landmarks in 3D, and the location of the centre of mass (CoM) was calculated from this data set. The step-to-step development of seven variables (four related to CoM position, plus ground contact time, aerial time and step length) was analysed by curve fitting. In most individual sprints (in total, 41 sprints were successfully recorded) no breakpoints were identified for the variables investigated. However, for the mean results (i.e., the mean curve for all athletes) breakpoints were identified for the development of vertical CoM position, angle of acceleration and distance between the support surface and the CoM. It must be noted that for these variables the exponential fit showed high correlations (r² > 0.99). No relationship was found between the occurrences of breakpoints for different variables, as investigated using odds ratios (Mantel-Haenszel chi-square statistic). It is concluded that although breakpoints regularly appear during accelerated running, they are not the rule and are thereby unlikely to be a fundamental characteristic; more likely they are an expression of imperfect performance. PMID:27467387
Du, Q; Mezey, P G
1998-09-01
In this research we test and compare three possible atom-based screening functions used in the heuristic molecular lipophilicity potential (HMLP). Screening function 1 is a power distance-dependent function, b_i/|R_i - r|^γ; screening function 2 is an exponential distance-dependent function, b_i exp(-|R_i - r|/d_0); and screening function 3 is a weighted distance-dependent function, sign(b_i) exp(-ξ|R_i - r|/|b_i|). For every screening function, the parameters (γ, d_0, and ξ) are optimized using 41 common organic molecules of 4 types of compounds: aliphatic alcohols, aliphatic carboxylic acids, aliphatic amines, and aliphatic alkanes. The results of the calculations show that screening function 3 cannot give chemically reasonable results; however, both the power screening function and the exponential screening function give chemically satisfactory results. There are two notable differences between screening functions 1 and 2. First, the exponential screening function has larger values at short distance than the power screening function, therefore more influence from the nearest neighbors is involved using screening function 2 than screening function 1. Second, the power screening function has larger values at long distance than the exponential screening function, therefore screening function 1 is affected by atoms at long distance more than screening function 2. For screening function 1, the suitable range of parameter γ is 1.0 < γ < 3.0; γ = 2.3 is recommended, and γ = 2.0 is the nearest integral value. For screening function 2, the suitable range of parameter d_0 is 1.5 < d_0 < 3.0, and d_0 = 2.0 is recommended. The HMLP developed in this research provides a potential tool for computer-aided three-dimensional drug design.
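The three screening functions, written with the recommended parameter values (γ = 2.3, d_0 = 2.0; ξ is left as a free default, since no recommended value is given), can be sketched directly:

```python
import math

def screen_power(b_i, dist, gamma=2.3):
    """Screening function 1: b_i / |R_i - r|**gamma (gamma = 2.3 recommended)."""
    return b_i / dist**gamma

def screen_exponential(b_i, dist, d0=2.0):
    """Screening function 2: b_i * exp(-|R_i - r| / d0) (d0 = 2.0 recommended)."""
    return b_i * math.exp(-dist / d0)

def screen_weighted(b_i, dist, xi=1.0):
    """Screening function 3: sign(b_i) * exp(-xi * |R_i - r| / |b_i|)."""
    return math.copysign(1.0, b_i) * math.exp(-xi * dist / abs(b_i))

# with these parameters the power-law tail dominates at long range,
# matching the abstract's second observation
long_range_power = screen_power(1.0, 20.0)
long_range_exp = screen_exponential(1.0, 20.0)
```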
Interaction phenomenon to dimensionally reduced p-gBKP equation
NASA Astrophysics Data System (ADS)
Zhang, Runfa; Bilige, Sudao; Bai, Yuexing; Lü, Jianqing; Gao, Xiaoqing
2018-02-01
Based on a search for combinations of a quadratic function and an exponential (or hyperbolic cosine) function in the Hirota bilinear form of the dimensionally reduced p-gBKP equation, eight classes of interaction solutions are derived via symbolic computation with Mathematica. The submergence phenomenon, presented to illustrate the dynamical features of the obtained solutions, is observed in three-dimensional plots and density plots with particular choices of the parameters involved between the exponential (or hyperbolic cosine) function and the quadratic function. It is proved that the interaction between the two solitary waves is inelastic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Bo; Kowalski, Karol
In this paper we derive basic properties of the Green's function matrix elements stemming from the exponential coupled cluster (CC) parametrization of the ground-state wave function. We demonstrate that all intermediates used to express the retarded (or equivalently, ionized) part of the Green's function in the ω-representation can be expressed through connected diagrams only. Similar properties are also shared by the first-order ω-derivatives of the retarded part of the CC Green's function. This property can be extended to any order of ω-derivatives of the Green's function. Through the Dyson equation of the CC Green's function, the derivatives of the corresponding CC self-energy can be evaluated analytically. In analogy to the CC Green's function, the corresponding CC self-energy is expressed in terms of connected diagrams only. Moreover, the ionized part of the CC Green's function satisfies a non-homogeneous linear system of ordinary differential equations, whose solution may be represented in exponential form. Our analysis can be easily generalized to the advanced part of the CC Green's function.
Parameterization of photon beam dosimetry for a linear accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lebron, Sharon; Barraclough, Brendan; Lu, Bo
2016-02-15
Purpose: In radiation therapy, accurate data acquisition of photon beam dosimetric quantities is important for (1) beam modeling data input into a treatment planning system (TPS), (2) comparing measured and TPS-modeled data, (3) the quality assurance process for a linear accelerator's (Linac) beam characteristics, and (4) the establishment of a standard data set for comparison with other data, etc. Parameterization of photon beam dosimetry creates a data set that is portable and easy to implement for different applications such as those previously mentioned. The aim of this study is to develop methods to parameterize photon beam dosimetric quantities, including percentage depth doses (PDDs), profiles, and total scatter output factors (S_cp). Methods: S_cp, PDDs, and profiles for different field sizes, depths, and energies were measured for a Linac using a cylindrical 3D water scanning system. All data were smoothed for the analysis, and profile data were also centered, symmetrized, and geometrically scaled. The S_cp data were analyzed using an exponential function. The inverse-square factor was removed from the PDD data before modeling and the data were subsequently analyzed using exponential functions. For profile modeling, one half-side of the profile was divided into three regions described by exponential, sigmoid, and Gaussian equations. All of the analytical functions are field size, energy, depth, and, in the case of profiles, scan direction specific. The model's parameters were determined using the minimal amount of measured data necessary. The model's accuracy was evaluated via the calculation of absolute differences between the measured (processed) and calculated data in low-gradient regions and distance-to-agreement analysis in high-gradient regions. Finally, the results for dosimetric quantities obtained by the fitted models for a different machine were also assessed.
Results: All of the differences in the PDDs' buildup and the profiles' penumbra regions were less than 2 mm and 0.5 mm, respectively. The differences in the low-gradient regions were 0.20% ± 0.20% (<1% for all) and 0.50% ± 0.35% (<1% for all) for PDDs and profiles, respectively. For S_cp data, all of the absolute differences were less than 0.5%. Conclusions: This novel analytical model with minimal measurement requirements was shown to accurately calculate PDDs, profiles, and S_cp for different field sizes, depths, and energies.
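As a toy version of the exponential PDD parameterization (the abstract does not reproduce the actual region-specific equations), a bi-exponential depth-dose shape with the inverse-square factor already removed can be fitted like this; the functional form and all coefficients are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def pdd_biexp(d, a, mu1, b, mu2):
    """Hypothetical bi-exponential depth dose: falloff term minus buildup term."""
    return a * np.exp(-mu1 * d) - b * np.exp(-mu2 * d)

depth = np.linspace(0.0, 30.0, 150)                  # depth, cm
y = pdd_biexp(depth, 100.0, 0.045, 100.0, 1.2)       # illustrative PDD values
popt, _ = curve_fit(pdd_biexp, depth, y, p0=(90.0, 0.05, 90.0, 1.0))
```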
Empirical verification of evolutionary theories of aging.
Kyryakov, Pavlo; Gomez-Perez, Alejandra; Glebov, Anastasia; Asbah, Nimara; Bruno, Luigi; Meunier, Carolynne; Iouk, Tatiana; Titorenko, Vladimir I
2016-10-25
We recently selected 3 long-lived mutant strains of Saccharomyces cerevisiae by a lasting exposure to exogenous lithocholic acid. Each mutant strain can maintain the extended chronological lifespan after numerous passages in medium without lithocholic acid. In this study, we used these long-lived yeast mutants for empirical verification of evolutionary theories of aging. We provide evidence that the dominant polygenic trait extending longevity of each of these mutants 1) does not affect such key features of early-life fitness as the exponential growth rate, efficacy of post-exponential growth and fecundity; and 2) enhances such features of early-life fitness as susceptibility to chronic exogenous stresses, and the resistance to apoptotic and liponecrotic forms of programmed cell death. These findings validate evolutionary theories of programmed aging. We also demonstrate that under laboratory conditions that imitate the process of natural selection within an ecosystem, each of these long-lived mutant strains is forced out of the ecosystem by the parental wild-type strain exhibiting shorter lifespan. We therefore concluded that yeast cells have evolved some mechanisms for limiting their lifespan upon reaching a certain chronological age. These mechanisms drive the evolution of yeast longevity towards maintaining a finite yeast chronological lifespan within ecosystems.
Prony series spectra of structural relaxation in N-BK7 for finite element modeling.
Koontz, Erick; Blouin, Vincent; Wachtel, Peter; Musgraves, J David; Richardson, Kathleen
2012-12-20
Structural relaxation behavior of N-BK7 glass was characterized at temperatures 20 °C above and below T(12) for this glass, using a thermomechanical analyzer (TMA). T(12) is a characteristic temperature corresponding to a viscosity of 10^12 Pa·s. The glass was subjected to quick temperature down-jumps preceded and followed by long isothermal holds. The exponential-like decay of the sample height was recorded and fitted using a unique Prony series method. The result of this method was a plot of the fit parameters revealing the presence of four distinct peaks, or distributions of relaxation times. The number of relaxation times decreased as the final test temperature was increased. The relaxation times did not shift significantly with changing temperature; however, the Prony weight terms varied essentially linearly with temperature. It was also found that the structural relaxation behavior of the glass trended toward single-exponential behavior at temperatures above the testing range. The result of the analysis was a temperature-dependent Prony series model that can be used in finite element modeling of glass behavior in processes such as precision glass molding (PGM).
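A Prony series y(t) = Σ w_i·exp(-t/τ_i) with a fixed grid of candidate relaxation times is linear in the weights, so the fit reduces to (non-negative) least squares; the times and weights below are illustrative, not the N-BK7 values.

```python
import numpy as np
from scipy.optimize import nnls

def prony_fit(t, y, taus):
    """Fit Prony weights w_i in y(t) ~ sum_i w_i*exp(-t/tau_i) by NNLS."""
    basis = np.exp(-t[:, None] / taus[None, :])
    w, _ = nnls(basis, y)
    return w

t = np.linspace(0.0, 100.0, 500)               # time, s
taus = np.array([1.0, 5.0, 20.0, 80.0])        # candidate relaxation times, s
true_w = np.array([0.4, 0.0, 0.35, 0.25])      # one vanishing weight
y = np.exp(-t[:, None] / taus[None, :]) @ true_w
w = prony_fit(t, y, taus)
```

Non-negative least squares also reproduces the zero weight exactly, which is how a sparse set of distinct relaxation peaks can emerge from a dense candidate grid.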
Stress relaxation in quasi-two-dimensional self-assembled nanoparticle monolayers
NASA Astrophysics Data System (ADS)
Boucheron, Leandra S.; Stanley, Jacob T.; Dai, Yeling; You, Siheng Sean; Parzyck, Christopher T.; Narayanan, Suresh; Sandy, Alec R.; Jiang, Zhang; Meron, Mati; Lin, Binhua; Shpyrko, Oleg G.
2018-05-01
We experimentally probed the stress relaxation of a monolayer of iron oxide nanoparticles at the water-air interface. Upon drop-casting onto a water surface, the nanoparticles self-assembled into islands of two-dimensional hexagonally close packed crystalline domains surrounded by large voids. When compressed laterally, the voids gradually disappeared as the surface pressure increased. After the compression was stopped, the surface pressure (as measured by a Wilhelmy plate) evolved as a function of the film aging time with three distinct timescales. These aging dynamics were intrinsic to the stressed state built up during the non-equilibrium compression of the film. Utilizing x-ray photon correlation spectroscopy, we measured the characteristic relaxation time (τ ) of in-plane nanoparticle motion as a function of the aging time through both second-order and two-time autocorrelation analysis. Compressed and stretched exponential fitting of the intermediate scattering function yielded exponents (β ) indicating different relaxation mechanisms of the films under different compression stresses. For a monolayer compressed to a lower surface pressure (between 20 mN/m and 30 mN/m), the relaxation time (τ ) decreased continuously as a function of the aging time, as did the fitted exponent, which transitioned from being compressed (>1 ) to stretched (<1 ), indicating that the monolayer underwent a stress release through crystalline domain reorganization. However, for a monolayer compressed to a higher surface pressure (around 40 mN/m), the relaxation time increased continuously and the compressed exponent varied very little from a value of 1.6, suggesting that the system may have been highly stressed and jammed. Despite the interesting stress relaxation signatures seen in these samples, the structural ordering of the monolayer remained the same over the sample lifetime, as revealed by grazing incidence x-ray diffraction.
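The Kohlrausch fits described (β < 1 stretched, β > 1 compressed) can be sketched on the intermediate scattering function; τ, β and the delay-time grid below are illustrative choices, with β = 1.6 echoing the jammed-film value.

```python
import numpy as np
from scipy.optimize import curve_fit

def isf(t, tau, beta):
    """Kohlrausch intermediate scattering function: exp(-(t/tau)**beta)."""
    return np.exp(-((t / tau) ** beta))

t = np.logspace(-1, 3, 100)         # delay times, s (illustrative)
g = isf(t, 50.0, 1.6)               # compressed relaxation, beta = 1.6
popt, _ = curve_fit(isf, t, g, p0=(10.0, 1.0))
tau_fit, beta_fit = popt
```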
ERIC Educational Resources Information Center
Brendefur, Jonathan
2014-01-01
Much research has been conducted on how elementary students develop mathematical understanding and subsequently how teachers might use this information. This article builds on this type of work by investigating how one high-school algebra teacher designs and conducts a lesson on exponential functions. Through a lesson study format she studies with…
A Spectral Lyapunov Function for Exponentially Stable LTV Systems
NASA Technical Reports Server (NTRS)
Zhu, J. Jim; Liu, Yong; Hang, Rui
2010-01-01
This paper presents the formulation of a Lyapunov function for an exponentially stable linear time-varying (LTV) system using a well-defined PD-spectrum and the associated PD-eigenvectors. It provides a bridge between the first and second methods of Lyapunov for stability assessment, and will find significant applications in the analysis and control law design for LTV systems and linearizable nonlinear time-varying systems.
Use and interpretation of logistic regression in habitat-selection studies
Keating, Kim A.; Cherry, Steve
2004-01-01
Logistic regression is an important tool for wildlife habitat-selection studies, but the method frequently has been misapplied due to an inadequate understanding of the logistic model, its interpretation, and the influence of sampling design. To promote better use of this method, we review its application and interpretation under 3 sampling designs: random, case-control, and use-availability. Logistic regression is appropriate for habitat use-nonuse studies employing random sampling and can be used to directly model the conditional probability of use in such cases. Logistic regression also is appropriate for studies employing case-control sampling designs, but careful attention is required to interpret results correctly. Unless bias can be estimated or probability of use is small for all habitats, results of case-control studies should be interpreted as odds ratios, rather than probability of use or relative probability of use. When data are gathered under a use-availability design, logistic regression can be used to estimate approximate odds ratios if probability of use is small, at least on average. More generally, however, logistic regression is inappropriate for modeling habitat selection in use-availability studies. In particular, using logistic regression to fit the exponential model of Manly et al. (2002:100) does not guarantee maximum-likelihood estimates, valid probabilities, or valid likelihoods. We show that the resource selection function (RSF) commonly used for the exponential model is proportional to a logistic discriminant function. Thus, it may be used to rank habitats with respect to probability of use and to identify important habitat characteristics or their surrogates, but it is not guaranteed to be proportional to probability of use. Other problems associated with the exponential model also are discussed. 
We describe an alternative model based on Lancaster and Imbens (1996) that offers a method for estimating conditional probability of use in use-availability studies. Although promising, this model fails to converge to a unique solution in some important situations. Further work is needed to obtain a robust method that is broadly applicable to use-availability studies.
Cao, Boqiang; Zhang, Qimin; Ye, Ming
2016-11-29
We present a mean-square exponential stability analysis for impulsive stochastic genetic regulatory networks (GRNs) with time-varying delays and reaction-diffusion driven by fractional Brownian motion (fBm). By constructing a Lyapunov functional and using linear matrix inequalities (LMIs) for the stochastic analysis, we derive sufficient conditions that guarantee the exponential stability of the stochastic model of impulsive GRNs in the mean-square sense. Meanwhile, the corresponding results are obtained for GRNs with constant time delays and standard Brownian motion. Finally, an example is presented to illustrate our results of the mean-square exponential stability analysis.
Zhang, Ling
2017-01-01
The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method to semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation solution converges to the analytic solution with the strong order [Formula: see text] to SLSDDEs. On the one hand, the classical stability theorem to SLSDDEs is given by the Lyapunov functions. However, in this paper we study the exponential stability in mean square of the exact solution to SLSDDEs by using the definition of logarithmic norm. On the other hand, the implicit Euler scheme to SLSDDEs is known to be exponentially stable in mean square for any step size. However, in this article we propose an explicit method to show that the exponential Euler method to SLSDDEs is proved to share the same stability for any step size by the property of logarithmic norm.
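As a deterministic, delay-free illustration of why exponential integrators can remain stable for any step size, consider the linear test equation y' = λy with λ < 0: the exponential (Lawson) Euler step multiplies by e^(hλ), which is exact here, whereas classical explicit Euler multiplies by 1 + hλ and diverges once |1 + hλ| > 1. This is only a sketch of the mechanism, not the paper's stochastic delay scheme:

```python
import math

lam = -50.0   # stiff linear part
h = 0.1       # step size at which classical explicit Euler is unstable: |1 + h*lam| = 4
steps = 20

y_euler = 1.0  # classical explicit Euler: y_{n+1} = (1 + h*lam) * y_n
y_exp = 1.0    # exponential Euler:        y_{n+1} = exp(h*lam) * y_n (exact for y' = lam*y)
for _ in range(steps):
    y_euler = (1.0 + h * lam) * y_euler
    y_exp = math.exp(h * lam) * y_exp

print(abs(y_euler) > 1e6)    # True: explicit Euler blows up
print(abs(y_exp) < 1e-10)    # True: exponential Euler decays like the exact solution
```

The stochastic scheme in the paper adds the delayed drift and the diffusion increment to each step, but the unconditional damping of the stiff linear part by e^(hλ) is the same ingredient behind its step-size-independent mean-square stability.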
Zhang, Ze-Wei; Wang, Hui; Qin, Qing-Hua
2015-01-01
A meshless numerical scheme combining the operator splitting method (OSM), the radial basis function (RBF) interpolation, and the method of fundamental solutions (MFS) is developed for solving transient nonlinear bioheat problems in two-dimensional (2D) skin tissues. In the numerical scheme, the nonlinearity caused by linear and exponential relationships of temperature-dependent blood perfusion rate (TDBPR) is taken into consideration. In the analysis, the OSM is used first to separate the Laplacian operator and the nonlinear source term, and then second-order time-stepping schemes are employed for approximating the two splitting operators to convert the original governing equation into a linear nonhomogeneous Helmholtz-type governing equation (NHGE) at each time step. Subsequently, the RBF interpolation and the MFS involving the fundamental solution of the Laplace equation are respectively employed to obtain approximated particular and homogeneous solutions of the nonhomogeneous Helmholtz-type governing equation. Finally, the full fields consisting of the particular and homogeneous solutions are enforced to fit the NHGE at interpolation points and the boundary conditions at boundary collocations for determining the unknowns at each time step. The proposed method is verified by comparison with other methods. Furthermore, the sensitivity of the coefficients in the cases of a linear and an exponential relationship of TDBPR is investigated to reveal their bioheat effect on the skin tissue. PMID:25603180
Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers
NASA Technical Reports Server (NTRS)
Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.
2010-01-01
This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
Nonlinear dynamic evolution and control in CCFN with mixed attachment mechanisms
NASA Astrophysics Data System (ADS)
Wang, Jianrong; Wang, Jianping; Han, Dun
2017-01-01
In recent years, wireless communication has played an important role in our lives. Cooperative communication, in which mobile stations with single antennas share resources with each other to form a virtual MIMO antenna system, is expected to become an important development offering a diversity gain for future wireless communication. In this paper, a fitness model of an evolving network based on complex networks with mixed attachment mechanisms is devised in order to study an actual network, the CCFN (cooperative communication fitness network). Firstly, the evolution of the CCFN is given by four cases with different probabilities, and the rate equations of node degree are presented to analyze the evolution of the CCFN. Secondly, the degree distribution is analyzed by solving the rate equation and by numerical simulation with four example fitness distributions: power-law, uniform, exponential, and Rayleigh. Finally, the robustness of the CCFN is studied by numerical simulation with the four fitness distributions under random attack and intentional attack, analyzing the effects on degree distribution, average path length, and average degree. The results of this paper offer insights for building CCFN systems and planning communication resources.
SU-E-T-86: A Systematic Method for GammaKnife SRS Fetal Dose Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geneser, S; Paulsson, A; Sneed, P
Purpose: Estimating fetal dose is critical to the decision-making process when radiation treatment is indicated during pregnancy. Fetal doses less than 5 cGy confer no measurable non-cancer developmental risks but can produce a threefold increase in the risk of developing childhood cancer. In this study, we estimate fetal dose for a patient receiving Gamma Knife stereotactic radiosurgery (GKSRS) treatment and develop a method to estimate dose directly from plan details. Methods: A patient underwent GKSRS on a Perfexion unit for eight brain metastases (two infratentorial and one brainstem). Dose measurements were performed using a CC13, head phantom, and solid water. Superficial doses to the thyroid, sternum, and pelvis were measured using MOSFETs during treatment. Because the fetal dose was too low to accurately measure, we obtained measurements proximal to the isocenter, fitted them to an exponential function, and extrapolated dose to the fundus of the uterus, uterine midpoint, and pubic symphysis for both the preliminary and delivered plans. Results: The R-squared value of the fit to the delivered doses was 0.995. The estimated fetal doses for the 72-minute preliminary and 138-minute delivered plans range from 0.0014 to 0.028 cGy and 0.07 to 0.38 cGy, respectively. MOSFET readings during treatment were just above background for the thyroid and negligible for all inferior positions. The method for estimating fetal dose from plan shot information was within 0.2 cGy of the measured values at 14 cm cranial to the fetal location. Conclusion: Estimated fetal doses for both the preliminary and delivered plans were well below the 5 cGy recommended limit. Due to Perfexion shielding, internal dose is primarily governed by attenuation and drops off exponentially. This is the first work that reports fetal dose for a GK Perfexion unit. Although multiple lesions were treated and the duration of treatment was long, the estimated fetal dose remained very low.
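The extrapolation step described in the Methods amounts to a log-linear least-squares fit of D(x) = D0·exp(-mu·x) to doses measured at increasing distance from the isocenter, then evaluating the fit at the fetal position. A hedged sketch; the distances and doses below are hypothetical placeholders, not the study's measurements:

```python
import math

def fit_exponential(distances_cm, doses_cGy):
    """Fit D(x) = D0 * exp(-mu * x) by linear least squares on ln(D)."""
    xs = distances_cm
    ys = [math.log(d) for d in doses_cGy]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    d0 = math.exp(ybar - slope * xbar)
    return d0, -slope   # D0 (cGy) and attenuation coefficient mu (1/cm)

# Hypothetical doses (cGy) measured at distances (cm) caudal to the isocenter
xs = [10.0, 15.0, 20.0, 25.0]
ds = [5.0, 2.6, 1.35, 0.70]          # roughly exponential fall-off
d0, mu = fit_exponential(xs, ds)
fetal_dose = d0 * math.exp(-mu * 50.0)   # extrapolate to a 50 cm fetal position
print(fetal_dose < 0.1)   # True: far below the 5 cGy limit in this toy example
```

The same fit evaluated at the fundus, uterine midpoint, and pubic symphysis distances gives the range of estimates reported in the Results.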
Zhang, Jingyan; Ge, Zhishen; Jiang, Xiaoze; Hassan, P A; Liu, Shiyong
2007-12-15
The kinetics and mechanism of sphere-to-rod transitions of sodium alkyl sulfate micelles induced by the hydrotropic salt p-toluidine hydrochloride (PTHC) were investigated by stopped-flow with light scattering detection. Spherical sodium dodecyl sulfate (SDS) micelles transform into short ellipsoidal shapes at low salt concentrations ([PTHC]/[SDS], chi(PTHC)=0.3 and 0.4). Upon stopped-flow mixing of aqueous solutions of spherical SDS micelles with PTHC, the scattered light intensity gradually increases with time. Single exponential fitting of the dynamic traces leads to a characteristic relaxation time, tau(g), for the growth process from spherical to ellipsoidal micelles, and it increases with increasing SDS concentration. This suggests that ellipsoidal micelles might be produced by successive insertion of unimers into spherical micelles, similar to the formation of spherical micelles as described by Aniansson-Wall (A-W) theory. At chi(PTHC) > or = 0.5, rod-like micelles with much higher axial ratios form. The scattered light intensity exhibits an initially abrupt increase and then levels off. The dynamic curves can be well fitted with single exponential functions, and the obtained tau(g) decreases with increasing SDS concentration. Thus, the growth from spherical to rod-like micelles might proceed via fusion of spherical micelles, in agreement with the mechanism proposed by Ikeda et al. At chi(PTHC)=0.3 and 0.6, the apparent activation energies obtained from temperature-dependent kinetic studies of the micellar growth are 40.4 and 3.6 kJ/mol, respectively. The large difference between the activation energies for the growth from spherical to ellipsoidal micelles at low chi(PTHC) and the sphere-to-rod transition at high chi(PTHC) further indicates that they should follow different mechanisms. Moreover, the sphere-to-rod transition kinetics of sodium alkyl sulfates with varying hydrophobic chain lengths (n=10, 12, 14, and 16) are also studied.
The longer the carbon chain lengths, the slower the sphere-to-rod transition.
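The single-exponential analysis of the stopped-flow traces amounts to extracting the relaxation time tau(g) from I(t) = I_inf - (I_inf - I0)·exp(-t/tau). A minimal sketch with synthetic data (not the study's traces; the plateau and baseline intensities are assumed known):

```python
import math

def relaxation_time(times, intensities, i0, i_inf):
    """Estimate tau from I(t) = I_inf - (I_inf - I0)*exp(-t/tau)
    via the linearization ln[(I_inf - I)/(I_inf - I0)] = -t/tau."""
    ys = [math.log((i_inf - i) / (i_inf - i0)) for i in intensities]
    # The model has no intercept (ratio = 1 at t = 0), so fit a slope
    # through the origin: slope = sum(t*y) / sum(t*t)
    slope = sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)
    return -1.0 / slope

# Synthetic growth trace with tau = 0.8 s
ts = [0.1 * k for k in range(1, 30)]
trace = [10.0 - (10.0 - 2.0) * math.exp(-t / 0.8) for t in ts]
tau_g = relaxation_time(ts, trace, i0=2.0, i_inf=10.0)
print(round(tau_g, 3))  # 0.8
```

Repeating this extraction across SDS concentrations is what yields the concentration dependence of tau(g) that distinguishes the insertion and fusion mechanisms.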
Borah, Bhaskar J; Jobic, H; Yashonath, S
2010-04-14
We report the quasielastic neutron scattering (QENS) and molecular dynamics (MD) investigations into diffusion of pentane isomers in zeolite NaY. The molecular cross section perpendicular to the long molecular axis varies for the three isomers while the mass and the isomer-zeolite interaction remains essentially unchanged. Both QENS and MD results show that the branched isomers neopentane and isopentane have higher self-diffusivities as compared with n-pentane at 300 K in NaY zeolite. This result provides direct experimental evidence for the existence of nonmonotonic, anomalous dependence of self-diffusivity on molecular diameter known as the levitation effect. The energetic barrier at the bottleneck derived from MD simulations exists for n-pentane which lies in the linear regime while no such barrier is seen for neopentane which is located clearly in the anomalous regime. Activation energy is in the order E(a)(n-pentane)>E(a)(isopentane)>E(a)(neopentane) consistent with the predictions of the levitation effect. In the liquid phase, it is seen that D(n-pentane)>D(isopentane)>D(neopentane) and E(a)(n-pentane)
Modeling the survival of Salmonella spp. in chorizos.
Hajmeer, M; Basheer, I; Hew, C; Cliver, D O
2006-03-01
The survival of Salmonella spp. in chorizos has been studied under the effect of storage conditions, namely temperature (T=6, 25, 30 degrees C), air inflow velocity (F=0, 28.4 m/min), and initial water activity (a(w0)=0.85, 0.90, 0.93, 0.95, 0.97). The pH was held at 5.0. A total of 20 survival curves were experimentally obtained at various combinations of operating conditions. The chorizos were stored under four conditions: in the refrigerator (Ref: T=6 degrees C, F=0 m/min), at room temperature (RT: T=25 degrees C, F=0 m/min), in the hood (Hd: T=25 degrees C, F=28.4 m/min), and in the incubator (Inc: T=30 degrees C, F=0 m/min). Semi-logarithmic plots of counts vs. time revealed nonlinear trends for all the survival curves, indicating that the first-order kinetics model (exponential distribution function) was not suitable. The Weibull cumulative distribution function, of which the exponential function is only a special case, was selected and used to model the survival curves. The Weibull model was fitted to the 20 curves and the model parameters (alpha and beta) were determined. The fitted survival curves agreed with the experimental data with R(2)=0.951, 0.969, 0.908, and 0.871 for the Ref, RT, Hd, and Inc curves, respectively. Regression models relating alpha and beta to T, F, and a(w0) resulted in R(2) values of 0.975 for alpha and 0.988 for beta. The alpha and beta models can be used to generate a survival curve for Salmonella in chorizos for a given set of operating conditions. Additionally, alpha and beta can be used to determine the times needed to reduce the count by 1 or 2 logs, t(1D) and t(2D). It is concluded that the Weibull cumulative distribution function offers a powerful model for describing microbial survival data. A comparison with the pathogen modeling program (PMP) revealed that the survival kinetics of Salmonella spp. in chorizos could not be adequately predicted using PMP, which underestimated the t(1D) and t(2D).
The mean of the Weibull probability density function correlated strongly with t(1D) and t(2D), and can serve as an alternative to the D-values normally used with first-order kinetic models. Parametric studies were conducted and sensitivity of survival to operating conditions was evaluated and discussed in the paper. The models derived herein provide a means for the development of a reliable risk assessment system for controlling Salmonella spp. in chorizos.
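The Weibull survival model used above writes the log reduction as log10(N/N0) = -(t/alpha)^beta, which collapses to first-order (log-linear) kinetics when beta = 1; the x-log reduction time then follows as t(xD) = alpha·x^(1/beta). A hedged sketch with synthetic data (the alpha and beta values are illustrative, not the paper's fitted parameters):

```python
import math

def fit_weibull_survival(times, log10_reduction):
    """Fit log10(N/N0) = -(t/alpha)**beta via the linearization
    ln(-log10(N/N0)) = beta*ln(t) - beta*ln(alpha)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(r) for r in log10_reduction]   # r = -log10(N/N0) > 0
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    beta = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))
    alpha = math.exp(xbar - ybar / beta)
    return alpha, beta

def t_xD(alpha, beta, x):
    """Time for an x-log reduction: solve (t/alpha)**beta = x."""
    return alpha * x ** (1.0 / beta)

# Synthetic survival data: alpha = 12 days, beta = 0.7 (tailing curve)
ts = [2.0, 5.0, 10.0, 20.0, 40.0]
red = [(t / 12.0) ** 0.7 for t in ts]
alpha, beta = fit_weibull_survival(ts, red)
print(round(alpha, 3), round(beta, 3))    # recovers 12.0 and 0.7
print(round(t_xD(alpha, beta, 1), 2))     # t(1D) = 12.0 days
```

Note that t(1D) equals alpha by construction, since alpha is the time to the first log reduction in this parameterization.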
The Use of Modeling Approach for Teaching Exponential Functions
NASA Astrophysics Data System (ADS)
Nunes, L. F.; Prates, D. B.; da Silva, J. M.
2017-12-01
This work presents a discussion related to the teaching and learning of mathematical content related to the study of exponential functions in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor's program (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly mentioned in the literature, the modelling approach was used as an educational teaching tool to produce contextualization in the teaching-learning process of exponential functions for these students. To this end, some simple models elaborated with the GeoGebra software were used and, to obtain a qualitative evaluation of the investigation and its results, Didactic Engineering was used as the research methodology. As a consequence of this detailed research, some interesting details about the teaching and learning process were observed, discussed and described.
Decay of random correlation functions for unimodal maps
NASA Astrophysics Data System (ADS)
Baladi, Viviane; Benedicks, Michael; Maume-Deschamps, Véronique
2000-10-01
Since the pioneering results of Jakobson and subsequent work by Benedicks-Carleson and others, it is known that quadratic maps f_a(χ) = a - χ² admit a unique absolutely continuous invariant measure for a positive measure set of parameters a. For topologically mixing f_a, Young and Keller-Nowicki independently proved exponential decay of correlation functions for this a.c.i.m. and smooth observables. We consider random compositions of small perturbations f + ω_t, with f = f_a or another unimodal map satisfying certain nonuniform hyperbolicity axioms, and ω_t chosen independently and identically in [-ɛ, ɛ]. Baladi-Viana showed exponential mixing of the associated Markov chain, i.e., averaging over all random itineraries. We obtain stretched exponential bounds for the random correlation functions of Lipschitz observables for the sample measure μ_ω of almost every itinerary.
Exponential Family Functional data analysis via a low-rank model.
Li, Gen; Huang, Jianhua Z; Shen, Haipeng
2018-05-08
In many applications, non-Gaussian data such as binary or count data are observed over a continuous domain, and there exists a smooth underlying structure for describing such data. We develop a new functional data method to deal with this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution, and that the matrix of the canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only the standard one-way functional data, but also two-way (or bivariate) functional data. In addition, we introduce a new cross validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to a real application of the UK mortality study, where data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern. © 2018, The International Biometric Society.
An exactly solvable, spatial model of mutation accumulation in cancer
NASA Astrophysics Data System (ADS)
Paterson, Chay; Nowak, Martin A.; Waclaw, Bartlomiej
2016-12-01
One of the hallmarks of cancer is the accumulation of driver mutations which increase the net reproductive rate of cancer cells and allow them to spread. This process has been studied in mathematical models of well mixed populations, and in computer simulations of three-dimensional spatial models. But the computational complexity of these more realistic, spatial models makes it difficult to simulate realistically large and clinically detectable solid tumours. Here we describe an exactly solvable mathematical model of a tumour featuring replication, mutation and local migration of cancer cells. The model predicts a quasi-exponential growth of large tumours, even if different fragments of the tumour grow sub-exponentially due to nutrient and space limitations. The model reproduces clinically observed tumour growth times using biologically plausible cell birth, death, and migration rates. We also show that the expected number of accumulated driver mutations increases exponentially in time if the average fitness gain per driver is constant, and that it reaches a plateau if the gains decrease over time. We discuss the realism of the underlying assumptions and possible extensions of the model.
NASA Astrophysics Data System (ADS)
Sumi, Ayako; Olsen, Lars Folke; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi
2003-02-01
We have carried out spectral analysis of measles notifications in several communities in Denmark, the UK and the USA. The results confirm that each power spectral density (PSD) shows exponential characteristics, which are universally observed in the PSD for time series generated from a nonlinear dynamical system. The exponential gradient increases with the population size. For almost all communities, many spectral lines observed in each PSD can be fully assigned to linear combinations of several fundamental periods, suggesting that the measles data are substantially noise-free. The optimum least squares fitting curve calculated using these fundamental periods essentially reproduces an underlying variation of the measles data, and an extension of the curve can be used to predict measles epidemics. For the communities with large population sizes, some PSD patterns obtained from segment time series analysis show a close resemblance to the PSD patterns at the initial stages of a period-doubling bifurcation process for the so-called susceptible/exposed/infectious/recovered (SEIR) model with seasonal forcing. The meaning of the relationship between the exponential gradient and the population size is discussed.
Hardware accelerator of convolution with exponential function for image processing applications
NASA Astrophysics Data System (ADS)
Panchenko, Ivan; Bucha, Victor
2015-12-01
In this paper we describe a Hardware Accelerator (HWA) for fast recursive approximation of separable convolution with an exponential function. This filter can be used in many Image Processing (IP) applications, e.g. depth-dependent image blur, image enhancement and disparity estimation. We have adapted the RTL implementation of this filter to provide maximum throughput within the constraints of required memory bandwidth and hardware resources, yielding a power-efficient VLSI implementation.
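The recursive trick such accelerators exploit is that convolution with a two-sided exponential kernel exp(-|n|/d) can be computed with one causal and one anti-causal first-order IIR pass, in O(N) per row regardless of kernel width; separability then extends this to 2D by filtering rows and columns. A hedged software sketch of the 1D building block (not the authors' RTL):

```python
import math

def exp_filter_1d(x, d):
    """O(N) recursive convolution with the exponential kernel exp(-|n|/d),
    via a causal plus an anti-causal first-order IIR pass."""
    a = math.exp(-1.0 / d)          # per-sample decay factor
    n = len(x)
    # causal pass: f[i] = x[i] + a*f[i-1]
    f = [0.0] * n
    acc = 0.0
    for i in range(n):
        acc = x[i] + a * acc
        f[i] = acc
    # anti-causal pass: b[i] = x[i] + a*b[i+1]
    b = [0.0] * n
    acc = 0.0
    for i in reversed(range(n)):
        acc = x[i] + a * acc
        b[i] = acc
    # combine; subtract x once so the center sample is not counted twice
    return [fi + bi - xi for fi, bi, xi in zip(f, b, x)]

# Impulse response equals the exponential kernel exp(-|n|/d)
x = [0.0] * 11
x[5] = 1.0
y = exp_filter_1d(x, d=2.0)
print(abs(y[5] - 1.0) < 1e-12, abs(y[7] - math.exp(-1.0)) < 1e-12)
```

Because the cost per pixel is constant, a hardware pipeline only needs two multiply-accumulate stages per direction, which is what makes a low-bandwidth, power-efficient implementation feasible.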
Numerical Calculation of the Spectrum of the Severe (1%) Lightning Current and Its First Derivative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C G; Ong, M M; Perkins, M P
2010-02-12
Recently, the direct-strike lightning environment for the stockpile-to-target sequence was updated [1]. In [1], the severe (1%) lightning current waveforms for first and subsequent return strokes are defined based on Heidler's waveform. This report presents numerical calculations of the spectra of those 1% lightning current waveforms and their first derivatives. First, the 1% lightning current models are repeated here for convenience. Then, the numerical method for calculating the spectra is presented and tested. The test uses a double-exponential waveform and its first derivative, which we fit to the previous 1% direct-strike lightning environment from [2]. Finally, the resulting spectra are given and are compared with those of the double-exponential waveform and its first derivative.
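The double-exponential test waveform i(t) = I0·(exp(-alpha·t) - exp(-beta·t)) is a convenient check precisely because its spectrum is available in closed form, I(ω) = I0·[1/(alpha + jω) - 1/(beta + jω)], against which a numerical FFT result can be compared. A sketch of the analytic amplitude spectrum; the parameter values below are illustrative, not those of the report:

```python
import math

def double_exp_spectrum(i0, alpha, beta, f_hz):
    """Analytic Fourier amplitude of i(t) = i0*(exp(-alpha*t) - exp(-beta*t)), t >= 0."""
    w = 2.0 * math.pi * f_hz
    return abs(i0 * (1.0 / (alpha + 1j * w) - 1.0 / (beta + 1j * w)))

# Illustrative double-exponential parameters: alpha sets the slow decay,
# beta the fast rise (both in 1/s); i0 is the current scale in amperes.
i0, alpha, beta = 218e3, 1.7e4, 3.5e6
dc = double_exp_spectrum(i0, alpha, beta, 0.0)
hf = double_exp_spectrum(i0, alpha, beta, 1e6)
print(abs(dc - i0 * (1.0 / alpha - 1.0 / beta)) < 1e-9)  # True: DC limit i0*(1/a - 1/b)
print(hf < dc)                                            # True: spectrum rolls off
```

The first-derivative spectrum follows by multiplying I(ω) by jω, which is the other quantity the report tabulates.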
Thoron, radon and air ions spatial distribution in indoor air.
Kolarž, Predrag; Vaupotič, Janja; Kobal, Ivan; Ujić, Predrag; Stojanovska, Zdenka; Žunić, Zora S
2017-07-01
The spatial distribution of the radioactive gases thoron (Tn) and radon (Rn) in the indoor air of 9 houses, mostly during the winter period of 2013, has been studied. Because both elements undergo alpha decay, air ionization was also measured. Simultaneous continuous measurements using three Rn/Tn and three air-ion active instruments deployed at three different distances from the wall surface showed various outcomes. Tn and air-ion concentrations decreased with increasing distance, while Rn remained uniformly distributed. An exponential fitting function for the Tn variation with distance was used to determine the diffusion length and diffusion constant, as well as the exhalation rate. The obtained values were similar to experimental data reported in the literature. Concentrations of air ions were found to be related to Rn and, to a lesser though still evident extent, to Tn. Copyright © 2016 Elsevier Ltd. All rights reserved.
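For an exponential profile C(x) = C0·exp(-x/L), the diffusion length L follows directly from concentrations measured at two distances: L = (x2 - x1)/ln(C1/C2). A minimal sketch; the concentrations below are hypothetical, chosen only to land near the few-centimetre diffusion length expected for short-lived thoron in air:

```python
import math

def diffusion_length(x1, c1, x2, c2):
    """L from C(x) = C0*exp(-x/L), using concentrations at two distances (x2 > x1)."""
    return (x2 - x1) / math.log(c1 / c2)

# Hypothetical thoron concentrations (Bq/m^3) at 1 cm and 5 cm from the wall
L = diffusion_length(0.01, 150.0, 0.05, 40.0)
print(round(L, 3))  # 0.03 m, i.e. a few centimetres
```

With more than two measurement distances, a log-linear least-squares fit over all points gives L with an error estimate, and the exhalation rate then follows from the fitted surface concentration and the Tn decay constant.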