NASA Astrophysics Data System (ADS)
Monovasilis, Theodore; Kalogiratou, Zacharoula; Simos, T. E.
2014-10-01
In this work we derive exponentially fitted symplectic Runge-Kutta-Nyström (RKN) methods from symplectic exponentially fitted partitioned Runge-Kutta (PRK) methods (for the approximate solution of general problems of this category see [18]–[40] and references therein). We construct RKN methods from PRK methods with up to five stages and fourth algebraic order.
SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, X; Duan, J; Popple, R
2014-06-01
Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of radial dose functions of the new iodine brachytherapy source: Iodine-125 Seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in less than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for finding the coefficients of bi- and tri-exponential fitting functions. The bi- and tri-exponential models of the Iodine-125 seed AgX-100 point and line sources obtained with PSO provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
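The workflow described above (particles, velocities, personal/global bests, a generation cap) is standard PSO, so a minimal sketch follows. The radial-distance data, search bounds, and swarm hyperparameters below are illustrative assumptions, not the published AgX-100 values.

```python
# Minimal PSO sketch for fitting a bi-exponential radial dose function
# g(r) ~ A1*exp(-m1*r) + A2*exp(-m2*r). Synthetic placeholder data.
import numpy as np

rng = np.random.default_rng(0)
r = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 7.0])          # cm (hypothetical grid)
g = 1.2 * np.exp(-0.3 * r) - 0.2 * np.exp(-1.5 * r)   # synthetic "published" data

def sse(p):  # fitness: sum of squared errors of the bi-exponential model
    A1, m1, A2, m2 = p
    return np.sum((A1 * np.exp(-m1 * r) + A2 * np.exp(-m2 * r) - g) ** 2)

n, dim, w, c1, c2 = 30, 4, 0.7, 1.5, 1.5              # assumed swarm settings
x = rng.uniform(-2, 2, (n, dim))                      # particle positions
v = np.zeros((n, dim))                                # particle velocities
pbest, pval = x.copy(), np.array([sse(p) for p in x])
gbest = pbest[pval.argmin()]

for gen in range(1500):                               # generation cap, as in the abstract
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v                                         # positions changed via velocities
    f = np.array([sse(p) for p in x])
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]
    gbest = pbest[pval.argmin()]

print("best-fit coefficients:", gbest, "SSE:", sse(gbest))
```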
Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters
Landowne, David; Yuan, Bin; Magleby, Karl L.
2013-01-01
Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
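The "start with many log-spaced exponentials, then prune" idea lends itself to a compact sketch. The version below is a least-squares analogue (nonnegative least squares on a histogram of simulated dwell times) rather than the paper's full maximum-likelihood fit, and it omits the merging of closely spaced components; the grid size and pruning threshold are illustrative assumptions.

```python
# Seed a dense, log-spaced grid of time constants, solve for nonnegative
# component areas with the taus held fixed, then drop negligible components.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
dwells = np.concatenate([rng.exponential(1.0, 5000),
                         rng.exponential(20.0, 5000)])   # two true components
edges = np.logspace(-2, 3, 80)
counts, _ = np.histogram(dwells, bins=edges, density=True)
t = 0.5 * (edges[:-1] + edges[1:])

taus = np.logspace(-2, 3, 40)                  # dense grid: no component is missed
A = np.exp(-t[:, None] / taus[None, :]) / taus # exponential pdf basis, taus fixed
areas, _ = nnls(A, counts)                     # areas constrained >= 0

keep = areas > 1e-3 * areas.sum()              # prune negligible components
print("surviving taus:", taus[keep])
print("relative areas:", areas[keep] / areas[keep].sum())
```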
Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters. PMID:24603904
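A minimal sketch of the idea of fitting in Legendre space follows, with the caveat that it obtains the coefficients numerically via numpy's legfit rather than through the paper's own transform; the degree and the test signal are illustrative assumptions.

```python
# Fit a noisy mono-exponential by comparing model and data in a
# low-dimensional Legendre coefficient space; truncating the expansion
# also acts as a phase-free filter.
import numpy as np
from numpy.polynomial import legendre as L
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 1000)
x = 2 * t / t[-1] - 1                      # map time onto Legendre domain [-1, 1]
y = 3.0 * np.exp(-1.3 * t) + 0.05 * rng.standard_normal(t.size)

deg = 12                                   # low-dimensional Legendre space
c_data = L.legfit(x, y, deg)               # the data become 13 coefficients

def resid(p):                              # residual lives in coefficient space
    a, k = p
    return L.legfit(x, a * np.exp(-k * t), deg) - c_data

fit = least_squares(resid, x0=[1.0, 1.0])
print("amplitude, rate:", fit.x)           # ~ (3.0, 1.3)

y_filtered = L.legval(x, c_data)           # truncation-based, phase-free smoothing
```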
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
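A noniterative fit of y = A·e^(Bt) + C in the same spirit is sketched below. It is a generic integral-linearization scheme, not NASA's exact algorithm: a discrete integral of the data turns the problem into two linear least-squares solves, with no starting guesses.

```python
# Since dy/dt = B*(y - C), integrating gives y = y0 + B*S - B*C*(t - t0),
# where S(t) is the running integral of y. Regressing y on [S, t - t0, 1]
# yields B and C; A and C then follow from ordinary linear least squares.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 4, 200)                       # need not be uniformly spaced
y = 2.0 * np.exp(-0.8 * t) + 0.5 + 0.01 * rng.standard_normal(t.size)

# Trapezoidal running integral of the data:
S = np.concatenate([[0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))])

M = np.column_stack([S, t - t[0], np.ones_like(t)])
b1, b2, _ = np.linalg.lstsq(M, y, rcond=None)[0]
B, C = b1, -b2 / b1

# With B fixed, A and C come from a simple linear least-squares solve:
N = np.column_stack([np.exp(B * t), np.ones_like(t)])
A, C = np.linalg.lstsq(N, y, rcond=None)[0]
print(f"A={A:.3f}  B={B:.3f}  C={C:.3f}")        # ~ (2.0, -0.8, 0.5)
```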
Malachowski, George C; Clegg, Robert M; Redford, Glen I
2007-12-01
A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.
2016-01-01
Background: The purpose of this study is to statistically assess whether the bi-exponential intravoxel incoherent motion (IVIM) model characterizes the diffusion-weighted imaging (DWI) signal of malignant breast tumors better than the mono-exponential Gaussian diffusion model. Methods: 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-squares mono-exponential fitting and segmented least-squares bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. The F-test and the Akaike Information Criterion (AIC) were used to statistically assess the preference for the mono-exponential or bi-exponential model using region-of-interest (ROI)-averaged and voxel-wise analysis. Results: For ROI-averaged analysis, 15 tumors were significantly better fitted by the bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential- and bi-exponential-preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay, while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analysis. Conclusions: Although the presence of the IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under a clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance of statistically examining breast cancer DWI signal characteristics in practice. PMID:27709078
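A hedged sketch of this kind of model comparison on an ROI-averaged signal follows: fit mono- and bi-exponential (IVIM-style) curves and compare a corrected AIC. The b-values, noise level, and starting values are assumptions for illustration, and the study's segmented fitting procedure is simplified to a direct nonlinear fit.

```python
# Compare mono- vs. bi-exponential fits of a DWI decay with AICc.
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 25, 50, 100, 500, 1000], float)     # b-values, s/mm^2
S = 0.1 * np.exp(-0.01 * b) + 0.9 * np.exp(-0.0012 * b)
S += np.random.default_rng(4).normal(0, 0.005, b.size)

mono = lambda b, S0, ADC: S0 * np.exp(-ADC * b)
bi   = lambda b, S0, f, Dstar, D: S0 * (f * np.exp(-Dstar * b)
                                        + (1 - f) * np.exp(-D * b))

def aicc(y, yhat, k):                                # corrected AIC from RSS
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

pm, _ = curve_fit(mono, b, S, p0=[1, 1e-3])
pb, _ = curve_fit(bi, b, S, p0=[1, 0.1, 1e-2, 1e-3], maxfev=5000)
print("AICc mono:", aicc(S, mono(b, *pm), 2))
print("AICc bi:  ", aicc(S, bi(b, *pb), 4))          # lower AICc is preferred
```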
Automatic selection of arterial input function using tri-exponential models
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David
2009-02-01
Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted for every candidate AIF using the Levenberg-Marquardt method, and the best fitted AIF is selected. Our method has been applied in DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rate in artery segmentation for 19 cases was 89.6% ± 15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R² = 0.946, P(T ≤ t) = 0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
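The second stage above reduces to a nonlinear least-squares problem, sketched below with scipy's Levenberg-Marquardt solver. The model form and numbers are assumptions for illustration, not the authors' exact parameterization.

```python
# Fit a tri-exponential model to a candidate AIF curve with
# Levenberg-Marquardt, then rank candidates by residual sum of squares.
import numpy as np
from scipy.optimize import curve_fit

def aif(t, a1, m1, a2, m2, a3, m3):
    return a1 * np.exp(-m1 * t) + a2 * np.exp(-m2 * t) + a3 * np.exp(-m3 * t)

t = np.linspace(0, 5, 120)                                  # minutes (assumed)
y = aif(t, 6, 3.0, 1.5, 0.5, 0.3, 0.05)
y += np.random.default_rng(5).normal(0, 0.02, t.size)

p0 = [5, 2, 1, 0.4, 0.2, 0.04]                              # rough initial guess
popt, pcov = curve_fit(aif, t, y, p0=p0, method="lm", maxfev=10000)
rss = np.sum((y - aif(t, *popt)) ** 2)                      # score for selection
print(popt, rss)
```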
Performance of time-series methods in forecasting the demand for red blood cell transfusion.
Pereira, Arturo
2004-05-01
Planning future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care, university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the more recent one to test the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)12 model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within ±10 percent of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rates for the three methods were 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within ±10 percent of the real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in planning blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
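A sketch of this comparison under stated assumptions: the monthly demand lives in a pandas Series read from a hypothetical rbc_demand.csv with a proper monthly date index, and the split date is chosen to mirror the study's train/test design. The model options reproduce the seasonal ARIMA(0,1,1)(0,1,1)12 and an additive Holt-Winters specification.

```python
# Fit seasonal ARIMA and Holt-Winters on the older segment, then score
# forecasts on the hold-out segment by the "within +/-10%" criterion.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rbc = pd.read_csv("rbc_demand.csv", index_col=0, parse_dates=True).squeeze()
train, test = rbc[:"2001-12"], rbc["2002-01":]          # fit vs. hold-out split

arima = ARIMA(train, order=(0, 1, 1),
              seasonal_order=(0, 1, 1, 12)).fit()
hw = ExponentialSmoothing(train, trend="add",
                          seasonal="add", seasonal_periods=12).fit()

for name, fc in [("ARIMA", arima.forecast(len(test))),
                 ("Holt-Winters", hw.forecast(len(test)))]:
    within10 = ((fc - test).abs() <= 0.10 * test).mean()
    print(name, f"within +/-10%: {within10:.0%}")
```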
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.
1986-01-01
A computer implementation of Prony's method for curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capabilities due to the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) in order to obtain the equal time increments. The resulting information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least-squares solution can be applied to obtain the final form of the equation.
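A textbook Prony fit is sketched below; it shows why uniformly spaced samples are needed (the resampling step the abstract obtains via I.G.D.S.). The sample spacing and signal are illustrative.

```python
# Classical Prony: solve a linear prediction problem for the characteristic
# polynomial, take its roots to get the exponents, then solve a linear
# least-squares problem for the amplitudes.
import numpy as np

dt, p = 0.05, 2                                   # sample spacing, # exponentials
t = np.arange(0, 3, dt)                           # equal time increments
y = 1.5 * np.exp(-0.7 * t) + 0.8 * np.exp(-2.5 * t)

# Linear prediction: y[n] = a1*y[n-1] + ... + ap*y[n-p]
rows = np.column_stack([y[p - 1 - k:-1 - k] for k in range(p)])
a = np.linalg.lstsq(rows, y[p:], rcond=None)[0]

roots = np.roots(np.concatenate([[1.0], -a]))     # z_i = exp(B_i * dt)
B = np.log(roots) / dt                            # exponential constants

V = np.exp(np.outer(t, B))                        # exponential basis matrix
A = np.linalg.lstsq(V, y, rcond=None)[0]          # amplitudes by least squares
print("exponents:", B.real, "amplitudes:", A.real)
```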
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Padé (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Padé approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize-control strategy.
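Exponential fitting of the trapezoidal rule can be sketched on the linear test problem. For y' = λy, requiring the generalized step y_{n+1} = y_n + h((1−θ)f_n + θf_{n+1}) to reproduce e^(hλ) exactly gives θ(z) = (e^z − 1 − z)/(z(e^z − 1)) with z = hλ, which tends to the classical weight 1/2 as z → 0. The toy code below illustrates that derivation; it is not the CREK1D implementation.

```python
# One-parameter exponentially fitted trapezoidal rule on y' = lam*y.
import numpy as np

def theta(z):
    return (np.expm1(z) - z) / (z * np.expm1(z)) if z != 0 else 0.5

lam, h, y = -50.0, 0.2, 1.0            # stiff decay; step far above 1/|lam|
th = theta(h * lam)                    # fitted implicit weight
for _ in range(5):                     # each step is exact for the test problem
    y = y * (1 + (1 - th) * h * lam) / (1 - th * h * lam)
print("fitted trapezoid:", y, " exact:", np.exp(lam * 1.0))
```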
NASA Astrophysics Data System (ADS)
Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun
2016-05-01
The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution, and detection efficiency. In recent years it has become an important technique for rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics noise, aircraft engine noise and other human-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of the GREATEM data and the major noises, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window width-fitting. Firstly, the white noise is filtered from the measured data using the wavelet threshold method. Then, the data are segmented using data windows whose step lengths follow even logarithmic intervals. Within each window, data polluted by electromagnetic noise are identified based on the discriminating principle of energy detection, and the attenuation characteristics of the data slope are extracted. Eventually, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with their fitted results, so the non-stationary electromagnetic noise is effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both stationary white noise and non-stationary electromagnetic noise in the GREATEM signal can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal, and quadratic fit methods on four selected test problems in structural analysis. The use of such approximations is attractive in structural optimization to reduce the number of exact analyses, which involve computationally expensive finite element analysis.
Phytoplankton productivity in relation to light intensity: A simple equation
Peterson, D.H.; Perry, M.J.; Bencala, K.E.; Talbot, M.C.
1987-01-01
A simple exponential equation is used to describe photosynthetic rate as a function of light intensity for a variety of unicellular algae and higher plants, where photosynthesis is proportional to (1 − e^(−αI)). The parameter α (= I_k^(−1)) is derived by a simultaneous curve-fitting method, where I is incident quantum-flux density. The exponential equation is tested against a wide range of data and is found to adequately describe P vs. I curves. The errors associated with photosynthetic parameters are calculated. A simplified statistical model (Poisson) of photon capture provides a biophysical basis for the equation and for its ability to fit a range of light intensities. The exponential equation provides a non-subjective simultaneous curve-fitting estimate for photosynthetic efficiency (a) which is less ambiguous than subjective methods: subjective methods assume that a linear region of the P vs. I curve is readily identifiable. Photosynthetic parameters α and a are used widely in aquatic studies to define photosynthesis at low quantum flux. These parameters are particularly important in estuarine environments where high suspended-material concentrations and high diffuse-light extinction coefficients are commonly encountered.
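A hedged sketch of fitting such a P-I curve by simultaneous nonlinear least squares follows; it uses the common saturating parameterization P = Pmax(1 − e^(−αI/Pmax)), which may differ from the paper's exact form, and the data points are invented for illustration.

```python
# Simultaneous nonlinear fit of a photosynthesis-irradiance (P-I) curve.
import numpy as np
from scipy.optimize import curve_fit

I = np.array([10, 25, 50, 100, 200, 400, 800, 1600], float)  # quantum flux
P = np.array([0.9, 2.1, 3.8, 6.2, 8.3, 9.4, 9.8, 10.1])      # photosynthetic rate

pi_curve = lambda I, Pmax, alpha: Pmax * (1 - np.exp(-alpha * I / Pmax))
popt, pcov = curve_fit(pi_curve, I, P, p0=[10, 0.1])
perr = np.sqrt(np.diag(pcov))                   # parameter standard errors
print("Pmax, alpha:", popt, "+/-", perr)        # alpha: initial slope (efficiency)
```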
A Fifth-order Symplectic Trigonometrically Fitted Partitioned Runge-Kutta Method
NASA Astrophysics Data System (ADS)
Kalogiratou, Z.; Monovasilis, Th.; Simos, T. E.
2007-09-01
Trigonometrically fitted symplectic partitioned Runge-Kutta (EFSPRK) methods for the numerical integration of Hamiltonian systems with oscillatory solutions are derived. These methods integrate exactly differential systems whose solutions can be expressed as linear combinations of the functions sin(ωx) and cos(ωx), ω ∈ R. We modify a fifth-order symplectic PRK method with six stages so as to derive an exponentially fitted SPRK method. The methods are tested on the numerical integration of the two-body problem.
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1986-01-01
The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
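The mimicry test described above can be sketched compactly: generate bout durations from a two-exponential mixture, fit a power law by maximum likelihood, and measure goodness of fit with the Kolmogorov-Smirnov statistic. The mixture parameters and xmin are illustrative; the paper's parameter sweep is far more systematic.

```python
# Can a multi-exponential bout distribution masquerade as a power law?
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = np.concatenate([rng.exponential(0.5, 4000),
                    rng.exponential(8.0, 4000)])     # multi-exponential bouts
xmin = 0.1
x = x[x >= xmin]

alpha = 1 + x.size / np.sum(np.log(x / xmin))        # continuous power-law MLE

def powerlaw_cdf(v):                                 # fitted CDF for v >= xmin
    return 1 - (v / xmin) ** (1 - alpha)

D, p = stats.kstest(x, powerlaw_cdf)
print(f"alpha={alpha:.2f}, KS D={D:.3f}, p={p:.2g}") # a small D mimics a power law
```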
The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.
Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C
2017-06-01
The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution is illustrated with an uncensored data set, and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistic show that the EETE distribution provides a more reasonable fit than the other competing distributions.
Zeng, Qiang; Shi, Feina; Zhang, Jianmin; Ling, Chenhan; Dong, Fei; Jiang, Biao
2018-01-01
Purpose: To present a new modified tri-exponential model for diffusion-weighted imaging (DWI) to detect the strictly diffusion-limited compartment, and to compare it with the conventional bi- and tri-exponential models. Methods: Multi-b-value DWI with 17 b-values up to 8,000 s/mm² was performed on six volunteers. The corrected Akaike information criteria (AICc) and squared predicted errors (SPE) were calculated to compare these three models. Results: The mean f₀ values ranged from 11.9% to 18.7% in white matter ROIs and from 1.2% to 2.7% in gray matter ROIs. In all white matter ROIs, the AICcs of the modified tri-exponential model were the lowest (p < 0.05 for five ROIs), indicating the new model has the best fit among these models; the SPEs of the bi-exponential model were the highest (p < 0.05), suggesting the bi-exponential model is unable to predict the signal intensity at ultra-high b-values. The mean ADC_very-slow values were extremely low in white matter (1–7 × 10⁻⁶ mm²/s), but not in gray matter (251–445 × 10⁻⁶ mm²/s), indicating that the conventional tri-exponential model fails to represent a special compartment. Conclusions: The strictly diffusion-limited compartment may be an important component in white matter. The new model fits better than the other two models, and may provide additional information. PMID:29535599
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
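A compact version of the described procedure for a decay model y = A·e^(Bt) is sketched below: a log-linear fit provides the initial nominal estimates, and a Taylor-series linearization yields the iterated least-squares correction (Gauss-Newton). The data and convergence criterion are illustrative.

```python
# Nonlinear exponential regression by Taylor-series linearization.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0, 4, 50)
y = 5.0 * np.exp(-1.2 * t) + 0.02 * rng.standard_normal(t.size)

# Initial nominal estimates from a linear fit of log(y) (valid while y > 0):
mask = y > 0
B0, lnA0 = np.polyfit(t[mask], np.log(y[mask]), 1)
p = np.array([np.exp(lnA0), B0])

for _ in range(20):                                    # correction loop
    A, B = p
    f = A * np.exp(B * t)
    J = np.column_stack([np.exp(B * t),                # df/dA
                         A * t * np.exp(B * t)])       # df/dB
    dp = np.linalg.lstsq(J, y - f, rcond=None)[0]      # least-squares correction
    p += dp
    if np.max(np.abs(dp)) < 1e-10:                     # predetermined criterion
        break
print("A, B:", p)                                      # ~ (5.0, -1.2)
```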
Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.
2015-01-01
Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
Comparing exponential and exponentiated models of drug demand in cocaine users.
Strickland, Justin C; Lile, Joshua A; Rush, Craig R; Stoops, William W
2016-12-01
Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use) whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values affects demand parameters and their association with drug-use outcomes when using the exponential model but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency and demonstrating construct validity and generalizability.
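A hedged sketch of the exponentiated demand model follows, using the form commonly attributed to Koffarnus and colleagues (2015), Q = Q0·10^(k(e^(−αQ0C) − 1)); the abstract does not give the equation, so this parameterization is an assumption. Because consumption itself (not its logarithm) is fitted, zero-consumption prices can be retained.

```python
# Fit the exponentiated demand model to purchase-task data including a zero.
import numpy as np
from scipy.optimize import curve_fit

C = np.array([0.01, 0.5, 1, 2, 4, 8, 16, 32])        # price per unit (assumed)
Q = np.array([20, 19, 18, 15, 10, 5, 1, 0])          # reported consumption

def exponentiated(C, Q0, alpha, k):
    # Q0: demand intensity; alpha: elasticity parameter; k: consumption range
    return Q0 * 10 ** (k * (np.exp(-alpha * Q0 * C) - 1))

popt, _ = curve_fit(exponentiated, C, Q, p0=[20, 0.01, 2], maxfev=10000)
Q0, alpha, k = popt
print(f"demand intensity Q0={Q0:.1f}, elasticity alpha={alpha:.4f}")
```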
NASA Astrophysics Data System (ADS)
Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.
2013-10-01
In this work we derive symplectic exponentially/trigonometrically fitted (EF/TF) RKN methods from symplectic EF/TF PRK methods. EF/TF symplectic RKN methods are also constructed directly from classical symplectic RKN methods. Several numerical examples are given in order to decide which is the most favourable implementation.
Exponential Correlation of IQ and the Wealth of Nations
ERIC Educational Resources Information Center
Dickerson, Richard E.
2006-01-01
Plots of mean IQ and per capita real Gross Domestic Product for groups of 81 and 185 nations, as collected by Lynn and Vanhanen, are best fitted by an exponential function of the form GDP = a·10^(b·IQ), where a and b are empirical constants. Exponential fitting yields markedly higher correlation coefficients than either linear or…
Bell, C; Paterson, D H; Kowalchuk, J M; Padilla, J; Cunningham, D A
2001-09-01
We compared estimates for the phase 2 time constant (τ) of oxygen uptake (VO2) during moderate- and heavy-intensity exercise, and the slow component of VO2 during heavy-intensity exercise, using previously published exponential models. Estimates for τ and the slow component were different (P < 0.05) among models. For moderate-intensity exercise, a two-component exponential model, or a mono-exponential model fitted from 20 s to 3 min, were best. For heavy-intensity exercise, a three-component model fitted throughout the entire 6 min bout of exercise, or a two-component model fitted from 20 s, were best. When the time delays for the two- and three-component models were equal the best statistical fit was obtained; however, this model produced an inappropriately low ΔVO2/ΔWR (WR, work rate) for the projected phase 2 steady state, and the estimate of phase 2 τ was shortened compared with other models. The slow component was quantified as the difference between VO2 at end-exercise (6 min) and at 3 min (ΔVO2(6−3 min); 259 ml·min⁻¹), and also using the phase 3 amplitude terms (truncated to end-exercise) from exponential fits (409–833 ml·min⁻¹). Onset of the slow component was identified by the phase 3 time delay parameter as being delayed approximately 2 min (vs. the arbitrary 3 min). Using this delay, ΔVO2(6−2 min) was approximately 400 ml·min⁻¹. Use of valid, consistent methods to estimate τ and the slow component in exercise is needed to advance physiological understanding.
NASA Astrophysics Data System (ADS)
Małoszewski, P.; Zuber, A.
1982-06-01
Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which more adequately represents real systems than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fitting than other simpler models. Thus, the obtained values of turnover times are more reliable, while the additional fitting parameter gives some information about the structure of the system. In the examples considered, despite a lower number of fitting parameters, the new models gave practically the same fit as the multiparameter finite-state mixing-cell models. It has been shown that in the case of a constant tracer input, prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the ¹⁴C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the ¹⁴C method is used for mixed-water systems, a serious mistake may arise from neglecting the different bicarbonate contents of the particular water components.
A multigrid solver for the semiconductor equations
NASA Technical Reports Server (NTRS)
Bachmann, Bernhard
1993-01-01
We present a multigrid solver for the exponential fitting method. The solver is applied to the current continuity equations of semiconductor device simulation in two dimensions. The exponential fitting method is based on a mixed finite element discretization using the lowest-order Raviart-Thomas triangular element. This discretization method yields a good approximation of front layers and guarantees current conservation. The corresponding stiffness matrix is an M-matrix. 'Standard' multigrid solvers, however, cannot be applied to the resulting system, as this is dominated by an unsymmetric part, which is due to the presence of strong convection in part of the domain. To overcome this difficulty, we explore the connection between Raviart-Thomas mixed methods and the nonconforming Crouzeix-Raviart finite element discretization. In this way we can construct nonstandard prolongation and restriction operators using easily computable weighted L²-projections based on suitable quadrature rules and the upwind effects of the discretization. The resulting multigrid algorithm shows very good results, even for real-world problems and for locally refined grids.
Goodness of fit of probability distributions for sightings as species approach extinction.
Vogel, Richard M; Hosking, Jonathan R M; Elphick, Chris S; Roberts, David L; Reed, J Michael
2009-04-01
Estimating the probability that a species is extinct and the timing of extinctions is useful in biological fields ranging from paleoecology to conservation biology. Various statistical methods have been introduced to infer the time of extinction and extinction probability from a series of individual sightings. There is little evidence, however, as to which of these models provide adequate fit to actual sighting records. We use L-moment diagrams and probability plot correlation coefficient (PPCC) hypothesis tests to evaluate the goodness of fit of various probabilistic models to sighting data collected for a set of North American and Hawaiian bird populations that have either gone extinct, or are suspected of having gone extinct, during the past 150 years. For our data, the uniform, truncated exponential, and generalized Pareto models performed moderately well, but the Weibull model performed poorly. Of the acceptable models, the uniform distribution performed best based on PPCC goodness of fit comparisons and sequential Bonferroni-type tests. Further analyses using field significance tests suggest that although the uniform distribution is the best of those considered, additional work remains to evaluate the truncated exponential model more fully. The methods we present here provide a framework for evaluating subsequent models.
Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed
NASA Astrophysics Data System (ADS)
Walsh, Alex J.; Beier, Hope T.
2016-03-01
Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging using laser scanning microscopes. However, TCSPC is inherently slow, making it ineffective for capturing rapid events: at most one photon can be counted per laser pulse, which imposes long acquisition times and requires low fluorescence emission efficiency to avoid biasing measurements toward short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument response deconvolution and fluorescence lifetime exponential decay estimation. Instrument response deconvolution and fluorescence exponential decay estimation can be performed in several ways, including iterative least-squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques in estimating double exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low-photon-count data. Such a technique reduces the required number of photons for accurate component estimation if lifetime values are known, such as for commercial fluorescent dyes and FRET experiments, and improves imaging speed 10-fold.
Bishai, David; Opuni, Marjorie
2009-01-01
Background Time trends in infant mortality for the 20th century show a curvilinear pattern that most demographers have assumed to be approximately exponential. Virtually all cross-country comparisons and time series analyses of infant mortality have studied the logarithm of infant mortality to account for the curvilinear time trend. However, there is no evidence that the log transform is the best fit for infant mortality time trends. Methods We use maximum likelihood methods to determine the best transformation to fit time trends in infant mortality reduction in the 20th century and to assess the importance of the proper transformation in identifying the relationship between infant mortality and gross domestic product (GDP) per capita. We apply the Box-Cox transform to infant mortality rate (IMR) time series from 18 countries to identify the best-fitting value of λ for each country and for the pooled sample. For each country, we test the value of λ against the null that λ = 0 (logarithmic model) and against the null that λ = 1 (linear model). We then demonstrate the importance of selecting the proper transformation by comparing regressions of ln(IMR) on same-year GDP per capita against Box-Cox-transformed models. Results Based on chi-squared test statistics, infant mortality decline is best described as an exponential decline only for the United States. For the remaining 17 countries we study, IMR decline is neither best modelled as logarithmic nor as a linear process. Imposing a logarithmic transform on IMR can lead to bias in fitting the relationship between IMR and GDP per capita. Conclusion The assumption that IMR declines are exponential is enshrined in the Preston curve and in nearly all cross-country as well as time series analyses of IMR data since Preston's 1975 paper, but this assumption is seldom correct. Statistical analyses of IMR trends should assess the robustness of findings to transformations other than the log transform. PMID:19698144
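The Box-Cox approach above can be sketched directly with scipy: estimate the maximum-likelihood λ for a mortality series and compare its log-likelihood with the log (λ = 0) and linear (λ = 1) special cases. The series below is a synthetic placeholder, not real IMR data.

```python
# Box-Cox lambda estimation and likelihood comparison for a declining series.
import numpy as np
from scipy import stats

years = np.arange(1950, 2001)
imr = 60 * np.exp(-0.03 * (years - 1950)) \
      + np.random.default_rng(8).normal(0, 0.5, years.size)  # synthetic IMR

transformed, lam = stats.boxcox(imr)           # MLE of the Box-Cox lambda
llf = {l: stats.boxcox_llf(l, imr) for l in (0.0, 1.0, lam)}
print(f"lambda-hat = {lam:.2f}")
print("log-likelihoods:", {k: round(v, 1) for k, v in llf.items()})
# A chi-squared test compares 2*(llf[lam] - llf[0]) with the 95% quantile.
```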
Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.
2013-01-01
Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. The effect of population size is particularly strong in small populations with 100 individuals or less; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
Quantitative photoplethysmography: Lambert-Beer law or inverse function incorporating light scatter.
Cejnar, M; Kobler, H; Hunyor, S N
1993-03-01
Finger blood volume is commonly determined from measurement of infra-red (IR) light transmittance using the Lambert-Beer law of light absorption derived for use in non-scattering media, even when such transmission involves light scatter around the phalangeal bone. Simultaneous IR transmittance and finger volume were measured over the full dynamic range of vascular volumes in seven subjects and outcomes compared with data fitted according to the Lambert-Beer exponential function and an inverse function derived for light attenuation by scattering materials. Curves were fitted by the least-squares method and goodness of fit was compared using standard errors of estimate (SEE). The inverse function gave a better data fit in six of the subjects: mean SEE 1.9 (SD 0.7, range 0.7-2.8) and 4.6 (2.2, 2.0-8.0) respectively (p < 0.02, paired t-test). Thus, when relating IR transmittance to blood volume, as occurs in the finger during measurements of arterial compliance, an inverse function derived from a model of light attenuation by scattering media gives more accurate results than the traditional exponential fit.
Count distribution for mixture of two exponentials as renewal process duration with applications
NASA Astrophysics Data System (ADS)
Low, Yeh Ching; Ong, Seng Huat
2016-06-01
A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
Fenton, Tanis R; Anderson, Diane; Groh-Wargo, Sharon; Hoyos, Angela; Ehrenkranz, Richard A; Senterre, Thibault
2018-05-01
To examine how well growth velocity recommendations for preterm infants fit with current growth references: Fenton 2013, Olsen 2010, INTERGROWTH 2015, and the World Health Organization Growth Standard 2006. The Average (2-point), Exponential (2-point), and Early (1-point) method weight gains were calculated for 1-, 4-, 8-, 12-, and 16-week time periods. The growth references' weekly velocities (g/kg/d, grams/day, and cm/week) were illustrated graphically with the frequently quoted 15 g/kg/d, 10-30 grams/day, and 1 cm/week rates superimposed. The 15 g/kg/d and 1 cm/week growth velocity rates were calculated from 24-50 weeks and superimposed on the Fenton and Olsen preterm growth charts. The Average and Exponential g/kg/d estimates showed close agreement for all ages (range 5.0-18.9 g/kg/d), while the Early method yielded values as high as 41 g/kg/d. All three preterm growth references were similar to the 15 g/kg/d rate at 34 weeks, but rates were higher before and lower at older ages. For grams/day, the growth references changed from 10 to 30 grams/day over 24-33 weeks. Head growth rates generally fit the 1 cm/week velocity for 23-30 weeks, and length growth rates fit for 37-40 weeks. The calculated g/kg/d curves deviated from the growth charts, first downward, then steeply crossing the median curves near term. Human growth is not constant through gestation and early infancy. The frequently quoted 15 g/kg/d, 10-30 grams/day, and 1 cm/week rates only fit current growth references for limited time periods. Rates of 15-20 g/kg/d (calculated using average or exponential methods) are a reasonable goal for infants at 23-36 weeks, but not beyond.
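The two 2-point velocity formulas compared in this study are sketched below as they are commonly defined in the preterm growth literature; the paper's exact implementations may differ slightly.

```python
# Weight-gain velocity in g/kg/d: average vs. exponential 2-point methods.
import numpy as np

def average_gkgd(w1, w2, days):
    """Average (2-point) method: gain per day against the mean weight."""
    return 1000 * (w2 - w1) / (days * (w1 + w2) / 2)

def exponential_gkgd(w1, w2, days):
    """Exponential (2-point) method (after Patel et al.)."""
    return 1000 * np.log(w2 / w1) / days

# Example: 900 g at day 0 growing to 1200 g by day 21 (weights in grams):
print(average_gkgd(900, 1200, 21))      # ~13.6 g/kg/d
print(exponential_gkgd(900, 1200, 21))  # ~13.7 g/kg/d
```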
A hybrid MD-kMC algorithm for folding proteins in explicit solvent.
Peter, Emanuel Karl; Shea, Joan-Emma
2014-04-14
We present a novel hybrid MD-kMC algorithm that is capable of efficiently folding proteins in explicit solvent. We apply this algorithm to the folding of a small protein, Trp-Cage. Different kMC move sets that capture different possible rate-limiting steps are implemented. The first uses secondary structure formation as a relevant rate event (a combination of dihedral rotations and hydrogen-bonding formation and breakage). The second uses tertiary structure formation events through formation of contacts via translational moves. Both methods fold the protein, but via different mechanisms and with different folding kinetics. The first method leads to folding via a structured helical state, with kinetics fit by a single exponential. The second method leads to folding via a collapsed loop, with kinetics poorly fit by single or double exponentials. In both cases, folding times are faster than experimentally reported values. The secondary and tertiary move sets are integrated in a third MD-kMC implementation, which now leads to folding of the protein via both pathways, with single- and double-exponential fits to the rates, and to folding rates in good agreement with experimental values. The competition between secondary and tertiary structure leads to a longer search for the helix-rich intermediate in the case of the first pathway, and to the emergence of a kinetically trapped, long-lived molten-globule collapsed state in the case of the second pathway. The algorithm presented not only captures experimentally observed folding intermediates and kinetics, but yields insights into the relative roles of local and global interactions in determining folding mechanisms and rates.
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-01-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868
Marias, Kostas; Lambregts, Doenja M. J.; Nikiforaki, Katerina; van Heeswijk, Miriam M.; Bakers, Frans C. H.; Beets-Tan, Regina G. H.
2017-01-01
Purpose The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential, both Gaussian and non-Gaussian, in diffusion weighted imaging of rectal cancer. Material and methods Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm²) at a 1.5 T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied on whole tumor volumes of interest. Two different statistical criteria were used to assess their fitting performance: the adjusted R² and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. Results All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was illustrated in an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and over the overall tumor area. Conclusion No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior. PMID:28863161
Research on modified estimates of NOx emissions combining the OMI and ground-based DOAS techniques
NASA Astrophysics Data System (ADS)
Zhang, Qiong; Li, Ang; Xie, Pinhua; Hu, Zhaokun; Wu, Fengcheng; Xu, Jin
2017-04-01
A new method to calibrate nitrogen dioxide (NO2) lifetimes and emissions from point sources using satellite measurements, based on mobile passive differential optical absorption spectroscopy (DOAS) and multi-axis differential optical absorption spectroscopy (MAX-DOAS), is described. It uses the Exponentially-Modified Gaussian (EMG) fitting method to correct the line densities along the wind direction by fitting the mobile passive DOAS NO2 vertical column density (VCD). An effective lifetime and emission rate are then determined from the parameters of the fit. The obtained results were then compared with the results acquired by fitting OMI (Ozone Monitoring Instrument) NO2 with the same fitting method; the NOx emission rates were about 195.8 mol/s and 160.6 mol/s, respectively. The latter is lower than the former, possibly because of the low spatial resolution of the satellite.
Using neural networks to represent potential surfaces as sums of products.
Manzhos, Sergei; Carrington, Tucker
2006-11-21
By using exponential activation functions with a neural network (NN) method we show that it is possible to fit potentials to a sum-of-products form. The sum-of-products form is desirable because it reduces the cost of doing the quadratures required for quantum dynamics calculations. It also greatly facilitates the use of the multiconfiguration time-dependent Hartree method. Unlike the potfit product representation algorithm, the new NN approach does not require using a grid of points. It also produces sum-of-products potentials with fewer terms. As the number of dimensions is increased, we expect the advantages of the exponential NN idea to become more significant.
Fitting ERGMs on big networks.
An, Weihua
2016-09-01
The exponential random graph model (ERGM) has become a valuable tool for modeling social networks. In particular, ERGM provides great flexibility to account for both covariate effects on tie formation and endogenous network formation processes. However, there are both conceptual and computational issues in fitting ERGMs on big networks. This paper describes a framework and a series of methods (based on existing algorithms) to address these issues. It also outlines the advantages and disadvantages of the methods and the conditions to which they are most applicable. Selected methods are illustrated through examples. Copyright © 2016 Elsevier Inc. All rights reserved.
Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers
NASA Technical Reports Server (NTRS)
Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.
2010-01-01
This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
A mathematical definition of the financial bubbles and crashes
NASA Astrophysics Data System (ADS)
Watanabe, Kota; Takayasu, Hideki; Takayasu, Misako
2007-09-01
We check the validity of a mathematical method for detecting financial bubbles or crashes, which is based on fitting the data with an exponential function. We show that the period of a bubble can be determined nearly uniquely, independent of the precision of the data. The method is widely applicable to stock market data, such as the Internet bubble.
State of charge modeling of lithium-ion batteries using dual exponential functions
NASA Astrophysics Data System (ADS)
Kuo, Ting-Jung; Lee, Kung-Yen; Huang, Chien-Kang; Chen, Jau-Horng; Chiu, Wei-Li; Huang, Chih-Fang; Wu, Shuen-De
2016-05-01
A mathematical model is developed by fitting the discharging curve of LiFePO4 batteries and used to investigate the relationship between the state of charge and the closed-circuit voltage. The proposed mathematical model consists of dual exponential terms and a constant term, which can closely fit the characteristics of dual equivalent RC circuits representing a LiFePO4 battery. One exponential term represents the stable discharging behavior, the other represents the unstable discharging behavior, and the constant term represents the cut-off voltage.
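A sketch of fitting such a dual-exponential-plus-constant model (the functional form follows the abstract; the data and parameter values are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def ccv(soc, a, b, c, d, e):
    """Closed-circuit voltage vs. state of charge: one exponential for
    the stable discharge behavior, one for the unstable behavior, and
    a constant for the cut-off voltage."""
    return a * np.exp(b * soc) + c * np.exp(d * soc) + e

soc = np.linspace(0.05, 1.0, 40)                 # invented discharge samples
rng = np.random.default_rng(2)
v = ccv(soc, 0.15, 0.8, -0.9, -8.0, 3.05) + rng.normal(0, 0.003, soc.size)

p, _ = curve_fit(ccv, soc, v, p0=[0.1, 1.0, -1.0, -5.0, 3.0], maxfev=20000)
print(dict(zip('abcde', np.round(p, 3))))
```

Sums of exponentials are notoriously ill-conditioned to fit, so reasonable starting values (as in p0 above) matter more here than for single-term models.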
A modified exponential behavioral economic demand model to better describe consumption data.
Koffarnus, Mikhail N; Franck, Christopher T; Stein, Jeffrey S; Bickel, Warren K
2015-12-01
Behavioral economic demand analyses that quantify the relationship between the consumption of a commodity and its price have proven useful in studying the reinforcing efficacy of many commodities, including drugs of abuse. An exponential equation proposed by Hursh and Silberberg (2008) has proven useful in quantifying the dissociable components of demand intensity and demand elasticity, but is limited as an analysis technique by the inability to correctly analyze consumption values of zero. We examined an exponentiated version of this equation that retains all the beneficial features of the original Hursh and Silberberg equation, but can accommodate consumption values of zero and improves its fit to the data. In Experiment 1, we compared the modified equation with the unmodified equation under different treatments of zero values in cigarette consumption data collected online from 272 participants. We found that the unmodified equation produces different results depending on how zeros are treated, while the exponentiated version incorporates zeros into the analysis, accounts for more variance, and is better able to estimate actual unconstrained consumption as reported by participants. In Experiment 2, we simulated 1,000 datasets with demand parameters known a priori and compared the equation fits. Results indicated that the exponentiated equation was better able to replicate the true values from which the test data were simulated. We conclude that an exponentiated version of the Hursh and Silberberg equation provides better fits to the data, is able to fit all consumption values including zero, and more accurately produces true parameter values. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
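The exponentiated equation is straightforward to fit directly. A minimal sketch follows (the price and consumption values are invented; fixing the range-scaling constant K across a data set is a common convention, not necessarily the authors' exact procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

K = 2.0   # range-scaling constant; often fixed across a data set

def exponentiated_demand(C, Q0, alpha):
    """Exponentiated demand curve: Q = Q0 * 10**(K*(exp(-alpha*Q0*C) - 1)).
    Because Q itself (not log Q) is modeled, observations of zero
    consumption enter the fit without any special treatment."""
    return Q0 * 10.0 ** (K * (np.exp(-alpha * Q0 * C) - 1.0))

price = np.array([0.0, 0.25, 0.5, 1, 2, 4, 8, 16])        # invented prices
consumption = np.array([20, 19, 18, 15, 9, 4, 1, 0.0])    # includes a zero

p, _ = curve_fit(exponentiated_demand, price, consumption, p0=[20, 0.003])
print('Q0 = %.1f, alpha = %.4f' % tuple(p))
```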
Psychophysics of time perception and intertemporal choice models
NASA Astrophysics Data System (ADS)
Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.
2008-03-01
Intertemporal choice and psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting, general hyperbolic discounting (exponential discounting with logarithmic time perception following the Weber-Fechner law; a q-exponential discount model based on Tsallis's statistics), simple hyperbolic discounting, and Stevens' power law-exponential discounting (exponential discounting with Stevens' power time perception). In order to examine the fit of the models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processing underlying temporal discounting and time perception are discussed.
Adaptive optics system performance approximations for atmospheric turbulence correction
NASA Astrophysics Data System (ADS)
Tyson, Robert K.
1990-10-01
Analysis of adaptive optics system behavior often can be reduced to a few approximations and scaling laws. For atmospheric turbulence correction, the deformable mirror (DM) fitting error is most often used to determine a priori the interactuator spacing and the total number of correction zones required. This paper examines the mirror fitting error in terms of its most commonly used exponential form. The explicit constant in the error term depends on the deformable mirror influence function shape and actuator geometry. The method of least-squares fitting of discrete influence functions to the turbulent wavefront is compared to the linear spatial filtering approximation of system performance. It is found that the spatial filtering method overestimates the correctability of the adaptive optics system by a small amount. By evaluating the fitting error for a number of DM configurations, actuator geometries, and influence functions, the resulting fitting error constants confirm some earlier investigations.
Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.
ERIC Educational Resources Information Center
Holland, Paul W.; Thayer, Dorothy T.
2000-01-01
Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as: progression for >15% volume increase, regression for >15% decrease, and stabilization for change within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% in the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI) as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients, and it was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled.
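A sketch of an exponential decay fit of this kind (the three-parameter form and all numbers are illustrative; note that with exactly three observations the three parameters are determined exactly, which may be what the "three-point exponential model" refers to):

```python
import numpy as np
from scipy.optimize import curve_fit

def vol(t, v_inf, v0, lam):
    """Exponential decay toward a final volume:
    V(t) = v_inf + (v0 - v_inf) * exp(-lam * t)."""
    return v_inf + (v0 - v_inf) * np.exp(-lam * t)

t = np.array([0.0, 4, 10, 20, 36])          # months (intervals as in the study)
v = np.array([2.0, 1.9, 1.5, 1.2, 1.1])     # invented tumor volumes, cm^3

p, _ = curve_fit(vol, t, v, p0=[1.0, 2.0, 0.1])
v_inf, v0, lam = p
print('predicted eventual volume %.2f cm^3 (%.0f%% change)'
      % (v_inf, 100 * (v_inf - v0) / v0))
```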
Parameter estimation and order selection for an empirical model of VO2 on-kinetics.
Alata, O; Bernard, O
2007-04-27
In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we provide, on simulated data, the performance of simulated annealing for estimating model parameters and the performance of information criteria for selecting the order. These simulated data are generated with both single-exponential and double-exponential models and are corrupted by additive white Gaussian noise. The performances are given at various signal-to-noise ratios (SNR). Considering parameter estimation, results show that the confidences of estimated parameters are improved by increasing the SNR of the response to be fitted. Considering model selection, results show that information criteria are adapted statistical criteria for selecting the number of exponentials.
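The empirical model is simple to write down. A sketch with invented parameter values follows; the paper's simulated annealing search and information criteria would operate on candidate orders of this function:

```python
import numpy as np

def vo2(t, A0, amps, taus, delays):
    """Empirical VO2 on-kinetics model: an offset plus delayed
    exponentials, y(t) = A0 + sum_i A_i*(1 - exp(-(t - d_i)/tau_i))
    for t >= d_i. The model order is the number of delayed exponentials."""
    y = np.full_like(t, A0, dtype=float)
    for A, tau, d in zip(amps, taus, delays):
        on = t >= d
        y[on] += A * (1.0 - np.exp(-(t[on] - d) / tau))
    return y

t = np.arange(0.0, 360.0)                        # s
order1 = vo2(t, 0.5, [1.8], [25.0], [15.0])      # low-intensity hypothesis
order2 = vo2(t, 0.5, [1.8, 0.4], [25.0, 120.0], [15.0, 90.0])  # adds a slow component
noisy = order2 + np.random.default_rng(6).normal(0, 0.05, t.size)
# Simulated annealing would estimate (A, tau, d) for each candidate order,
# and an information criterion would then arbitrate between the orders.
```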
Marchand, A J; Hitti, E; Monge, F; Saint-Jalmes, H; Guillin, R; Duvauferrier, R; Gambarota, G
2014-11-01
To assess the feasibility of measuring diffusion and perfusion fraction in vertebral bone marrow using the intravoxel incoherent motion (IVIM) approach and to compare two fitting methods, i.e., the non-negative least squares (NNLS) algorithm and the more commonly used Levenberg-Marquardt (LM) non-linear least squares algorithm, for the analysis of IVIM data. MRI experiments were performed on fifteen healthy volunteers, with a diffusion-weighted echo-planar imaging (EPI) sequence at five different b-values (0, 50, 100, 200, 600 s/mm^2), in combination with an STIR module to suppress the lipid signal. Diffusion signal decays in the first lumbar vertebra (L1) were fitted to a bi-exponential function using the LM algorithm and further analyzed with the NNLS algorithm to calculate the values of the apparent diffusion coefficient (ADC), pseudo-diffusion coefficient (D*) and perfusion fraction. The NNLS analysis revealed two diffusion components only in seven out of fifteen volunteers, with ADC = 0.60±0.09 (10^-3 mm^2/s), D* = 28±9 (10^-3 mm^2/s) and perfusion fraction = 14%±6%. The values obtained by the LM bi-exponential fit were: ADC = 0.45±0.27 (10^-3 mm^2/s), D* = 63±145 (10^-3 mm^2/s) and perfusion fraction = 27%±17%. Furthermore, the LM algorithm yielded values of perfusion fraction in cases where the decay was not bi-exponential, as assessed by NNLS analysis. The IVIM approach allows for measuring diffusion and perfusion fraction in vertebral bone marrow; its reliability can be improved by using the NNLS, which identifies the diffusion decays that display a bi-exponential behavior. Copyright © 2014 Elsevier Inc. All rights reserved.
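A minimal sketch of the NNLS analysis on a synthetic decay (the b-values and component values echo the abstract; the grid and the reporting threshold are arbitrary choices):

```python
import numpy as np
from scipy.optimize import nnls

def decay_spectrum(b, signal, D_grid):
    """NNLS multi-exponential analysis: solve min ||A w - s|| with w >= 0,
    where A[i, j] = exp(-b_i * D_j) over a fixed grid of diffusion
    coefficients. Separated peaks in w indicate distinct components
    (tissue diffusion vs. pseudo-diffusion); their absence flags decays
    that are not bi-exponential."""
    A = np.exp(-np.outer(b, D_grid))
    w, _ = nnls(A, signal)
    return w

b = np.array([0.0, 50, 100, 200, 600])          # s/mm^2, as in the study
D_grid = np.logspace(-4, -0.5, 100)             # mm^2/s, log-spaced grid
signal = 0.86 * np.exp(-b * 0.6e-3) + 0.14 * np.exp(-b * 28e-3)
w = decay_spectrum(b, signal, D_grid)
print('components near D =', D_grid[w > 1e-3])  # expect ~0.6e-3 and ~28e-3
```

Because the time constants are fixed on a grid and only the non-negative weights are solved for, no starting values are needed, which is what makes NNLS attractive for deciding whether a decay is genuinely bi-exponential.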
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S. P.; Bhatia, Kunwar S.; Wang, Yi-Xiang J.; Ahuja, Anil T.; King, Ann D.
2014-01-01
Purpose To technically investigate the non-Gaussian diffusion of head and neck diffusion weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and a statistical model, in patients with nasopharyngeal carcinoma (NPC). Materials and Methods After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm2. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models in primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Results Diffusion in NPC exhibited non-Gaussian behavior over the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC both in magnitude and histogram distribution. Conclusion Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed by using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential as a complementary tool for NPC characterization. PMID:24466318
Characterization of radiation belt electron energy spectra from CRRES observations
NASA Astrophysics Data System (ADS)
Johnston, W. R.; Lindstrom, C. D.; Ginet, G. P.
2010-12-01
Energetic electrons in the outer radiation belt and the slot region exhibit a wide variety of energy spectral forms, more so than radiation belt protons. We characterize the spatial and temporal dependence of these forms using observations from the CRRES satellite Medium Electron Sensor A (MEA) and High-Energy Electron Fluxmeter (HEEF) instruments, together covering an energy range 0.15-8 MeV. Spectra were classified with two independent methods, data clustering and curve-fitting analyses, in each case defining categories represented by power law, exponential, and bump-on-tail (BOT) or other complex shapes. Both methods yielded similar results, with BOT, exponential, and power law spectra respectively dominating in the slot region, outer belt, and regions just beyond the outer belt. The transition from exponential to power law spectra occurs at higher L for lower magnetic latitude. The location of the transition from exponential to BOT spectra is highly correlated with the location of the plasmapause. In the slot region during the days following storm events, electron spectra were observed to evolve from exponential to BOT yielding differential flux minima at 350-650 keV and maxima at 1.5-2 MeV; such evolution has been attributed to energy-dependent losses from scattering by whistler hiss.
Howard, Robert W
2014-09-01
The power law of practice holds that a power function best interrelates skill performance and amount of practice. However, the law's validity and generality are moot. Some researchers argue that it is an artifact of averaging individual exponential curves while others question whether the law generalizes to complex skills and to performance measures other than response time. The present study tested the power law's generality to development over many years of a very complex cognitive skill, chess playing, with 387 skilled participants, most of whom were grandmasters. A power or logarithmic function best fit grouped data but individuals showed much variability. An exponential function usually was the worst fit to individual data. Groups differing in chess talent were compared and a power function best fit the group curve for the more talented players while a quadratic function best fit that for the less talented. After extreme amounts of practice, a logarithmic function best fit grouped data but a quadratic function best fit most individual curves. Individual variability is great and the power law or an exponential law are not the best descriptions of individual chess skill development. Copyright © 2014 Elsevier B.V. All rights reserved.
Paul A. Murphy; Robert M. Farrar
1981-01-01
In this study, 588 before-cut and 381 after-cut diameter distributions of uneven-aged loblolly-shortleaf pine stands were fitted to two different forms of the exponential probability density function. The left-truncated and doubly truncated forms of the exponential were used.
Hosseinzadeh, M; Ghoreishi, M; Narooei, K
2016-06-01
In this study, the hyperelastic models of demineralized and deproteinized bovine cortical femur bone were investigated and appropriate models were developed. Using uniaxial compression test data, the strain energy versus stretch was calculated and appropriate hyperelastic strain energy functions were fitted to the data in order to calculate the material parameters. To obtain the mechanical behavior in other loading conditions, the hyperelastic strain energy equations were investigated for pure shear and equi-biaxial tension loadings. The results showed that the Mooney-Rivlin and Ogden models cannot predict the mechanical response of demineralized and deproteinized bovine cortical femur bone accurately, while the general exponential-exponential and general exponential-power law models show good agreement with the experimental results. To investigate the sensitivity of the hyperelastic models, a variation of 10% in the material parameters was applied, and the results indicated acceptable stability for the general exponential-exponential and general exponential-power law models. Finally, the uniaxial tension and compression of cortical femur bone were studied using the finite element method in a VUMAT user subroutine of ABAQUS software, and the computed stress-stretch curves showed good agreement with the experimental data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Species area relationships in mediterranean-climate plant communities
Keeley, Jon E.; Fotheringham, C.J.
2003-01-01
Aim To determine the best-fit model of species–area relationships for Mediterranean-type plant communities and evaluate how community structure affects these species–area models.Location Data were collected from California shrublands and woodlands and compared with literature reports for other Mediterranean-climate regions.Methods The number of species was recorded from 1, 100 and 1000 m2 nested plots. Best fit to the power model or exponential model was determined by comparing adjusted r2 values from the least squares regression, pattern of residuals, homoscedasticity across scales, and semi-log slopes at 1–100 m2 and 100–1000 m2. Dominance–diversity curves were tested for fit to the lognormal model, MacArthur's broken stick model, and the geometric and harmonic series.Results Early successional Western Australia and California shrublands represented the extremes and provide an interesting contrast as the exponential model was the best fit for the former, and the power model for the latter, despite similar total species richness. We hypothesize that structural differences in these communities account for the different species–area curves and are tied to patterns of dominance, equitability and life form distribution. Dominance–diversity relationships for Western Australian heathlands exhibited a close fit to MacArthur's broken stick model, indicating more equitable distribution of species. In contrast, Californian shrublands, both postfire and mature stands, were best fit by the geometric model indicating strong dominance and many minor subordinate species. These regions differ in life form distribution, with annuals being a major component of diversity in early successional Californian shrublands although they are largely lacking in mature stands. Both young and old Australian heathlands are dominated by perennials, and annuals are largely absent. Inherent in all of these ecosystems is cyclical disequilibrium caused by periodic fires. The potential for community reassembly is greater in Californian shrublands where only a quarter of the flora resprout, whereas three quarters resprout in Australian heathlands.Other Californian vegetation types sampled include coniferous forests, oak savannas and desert scrub, and demonstrate that different community structures may lead to a similar species–area relationship. Dominance–diversity relationships for coniferous forests closely follow a geometric model whereas associated oak savannas show a close fit to the lognormal model. However, for both communities, species–area curves fit a power model. The primary driver appears to be the presence of annuals. Desert scrub communities illustrate dramatic changes in both species diversity and dominance–diversity relationships in high and low rainfall years, because of the disappearance of annuals in drought years.Main conclusions Species–area curves for immature shrublands in California and the majority of Mediterranean plant communities fit a power function model. Exceptions that fit the exponential model are not because of sampling error or scaling effects, rather structural differences in these communities provide plausible explanations. The exponential species–area model may arise in more than one way. In the highly diverse Australian heathlands it results from a rapid increase in species richness at small scales. In mature California shrublands it results from very depauperate richness at the community scale. 
In both instances the exponential model is tied to a preponderance of perennials and paucity of annuals. For communities fit by a power model, coefficients z and log c exhibit a number of significant correlations with other diversity parameters, suggesting that they have some predictive value in ecological communities.
Feasibility study on the least square method for fitting non-Gaussian noise data
NASA Astrophysics Data System (ADS)
Xu, Wei; Chen, Wen; Liang, Yingjie
2018-02-01
This study investigates the feasibility of the least squares method in fitting non-Gaussian noise data. We add different levels of two typical non-Gaussian noises, Lévy and stretched Gaussian noise, to the exact values of selected functions, including linear, polynomial and exponential equations, and the maximum absolute and mean square errors are calculated for the different cases. Lévy and stretched Gaussian distributions have many applications in fractional and fractal calculus. It is observed that the non-Gaussian noises are less accurately fitted than Gaussian noise, but the stretched Gaussian cases appear to perform better than the Lévy noise cases. It is stressed that the least-squares method is inapplicable to the non-Gaussian noise cases when the noise level is larger than 5%.
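A small illustration of the effect being studied (the stability index alpha = 1.5 and the test line are invented; the paper's exact noise construction may differ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 200)
y_exact = 2.0 * x + 1.0                        # a linear test function

def fit_errors(noise):
    coef = np.polyfit(x, y_exact + noise, 1)   # ordinary least squares line
    resid = np.polyval(coef, x) - y_exact
    return np.max(np.abs(resid)), np.mean(resid ** 2)

level = 0.05                                    # 5% noise level
gauss = rng.normal(0, level, x.size)
levy = level * stats.levy_stable.rvs(alpha=1.5, beta=0,
                                     size=x.size, random_state=rng)

print('Gaussian: max|err| = %.3g, MSE = %.3g' % fit_errors(gauss))
print('Levy    : max|err| = %.3g, MSE = %.3g' % fit_errors(levy))
```

The heavy tails of the Lévy draws occasionally place single points far from the line, and squared-error fitting gives those outliers disproportionate weight, which is the failure mode the study quantifies.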
NASA Astrophysics Data System (ADS)
Dinesh Kumar, S.; Nageshwar Rao, R.; Pramod Chakravarthy, P.
2017-11-01
In this paper, we consider a boundary value problem for a singularly perturbed delay differential equation of reaction-diffusion type. We construct an exponentially fitted numerical method using Numerov finite difference scheme, which resolves not only the boundary layers but also the interior layers arising from the delay term. An extensive amount of computational work has been carried out to demonstrate the applicability of the proposed method.
Asquith, William H.
2014-01-01
The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.
2012-09-01
... used in this paper to compare probability density functions, the Lilliefors test and the Kullback-Leibler distance. The Lilliefors test is a goodness ... of interest in this study are the Rayleigh distribution and the exponential distribution. The Lilliefors test is used to test goodness-of-fit for ... Lilliefors test for goodness of fit with an exponential distribution. These results suggest that ...
An application of the Krylov-FSP-SSA method to parameter fitting with maximum likelihood
NASA Astrophysics Data System (ADS)
Dinh, Khanh N.; Sidje, Roger B.
2017-12-01
Monte Carlo methods such as the stochastic simulation algorithm (SSA) have traditionally been employed in gene regulation problems. However, there has been increasing interest to directly obtain the probability distribution of the molecules involved by solving the chemical master equation (CME). This requires addressing the curse of dimensionality that is inherent in most gene regulation problems. The finite state projection (FSP) seeks to address the challenge and there have been variants that further reduce the size of the projection or that accelerate the resulting matrix exponential. The Krylov-FSP-SSA variant has proved numerically efficient by combining, on one hand, the SSA to adaptively drive the FSP, and on the other hand, adaptive Krylov techniques to evaluate the matrix exponential. Here we apply this Krylov-FSP-SSA to a mutual inhibitory gene network synthetically engineered in Saccharomyces cerevisiae, in which bimodality arises. We show numerically that the approach can efficiently approximate the transient probability distribution, and this has important implications for parameter fitting, where the CME has to be solved for many different parameter sets. The fitting scheme amounts to an optimization problem of finding the parameter set so that the transient probability distributions fit the observations with maximum likelihood. We compare five optimization schemes for this difficult problem, thereby providing further insights into this approach of parameter estimation that is often applied to models in systems biology where there is a need to calibrate free parameters. Work supported by NSF grant DMS-1320849.
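A toy stand-in for the projection step (a birth-death process rather than the paper's mutual inhibitory network; scipy's expm_multiply approximates the action of the matrix exponential by polynomial methods, the same task the adaptive Krylov techniques address):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# Toy birth-death CME (production rate k, degradation rate g * n),
# truncated at N molecules as a crude finite state projection.
N, k, g = 200, 10.0, 0.1
n = np.arange(N + 1)
A = diags([k * np.ones(N),        # births:  n -> n+1
           -(k + g * n),          # diagonal: total outflow from state n
           g * n[1:]],            # deaths:  n -> n-1
          offsets=[-1, 0, 1], format='csc')
p0 = np.zeros(N + 1)
p0[0] = 1.0                        # start with zero molecules
pt = expm_multiply(A, p0, start=0.0, stop=50.0, num=6)  # p(t) at 6 times
print(pt[-1].sum(), pt[-1].argmax())  # ~1 (little truncation leak), mode ~ k/g
```

In a likelihood-based fitting loop, a solve like this runs once per candidate parameter set, which is why the efficiency of the transient CME solver dominates the cost of the optimization.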
NASA Astrophysics Data System (ADS)
Schaefer, Bradley E.; Dyson, Samuel E.
1996-08-01
A common Gamma-Ray Burst light curve shape is the ``FRED'' or ``fast-rise exponential-decay.'' But how exponential is the tail? Are they merely decaying with some smoothly decreasing decline rate, or is the functional form an exponential to within the uncertainties? If the shape really is an exponential, then it would be reasonable to assign some physically significant time scale to the burst. That is, there would have to be some specific mechanism that produces the characteristic decay profile. So if an exponential is found, then we will know that the decay light curve profile is governed by one mechanism (at least for simple FREDs) instead of by complex/multiple mechanisms. As such, a specific number amenable to theory can be derived for each FRED. We report on the fitting of exponentials (and two other shapes) to the tails of ten bright BATSE bursts. The BATSE trigger numbers are 105, 257, 451, 907, 1406, 1578, 1883, 1885, 1989, and 2193. Our technique was to perform a least-squares fit to the tail from some time after the peak until the light curve approaches background. We find that most FREDs are not exponentials, although a few come close. But since the other candidate shapes come close just as often, we conclude that the FREDs are misnamed.
Xu, Junzhong; Li, Ke; Smith, R. Adam; Waterton, John C.; Zhao, Ping; Ding, Zhaohua; Does, Mark D.; Manning, H. Charles; Gore, John C.
2016-01-01
Background Diffusion-weighted MRI (DWI) signal attenuation is often not mono-exponential (i.e., non-Gaussian diffusion) at stronger diffusion weighting. Several non-Gaussian diffusion models have been developed and may provide new information or higher sensitivity compared with the conventional apparent diffusion coefficient (ADC) method. However, the relative merits of these models for detecting tumor therapeutic response are not fully clear. Methods Conventional ADC and three widely used non-Gaussian models (bi-exponential, stretched exponential, and statistical model) were implemented and compared for assessing SW620 human colon cancer xenografts responding to barasertib, an agent known to induce apoptosis via polyploidy. The Bayesian Information Criterion (BIC) was used for model selection among the three non-Gaussian models. Results Tumor volume, histology, conventional ADC, and all three non-Gaussian DWI models showed significant differences between control and treatment groups after four days of treatment. However, only the non-Gaussian models detected significant changes after two days of treatment. For every treatment or control group, over 65.7% of tumor voxels indicated that the bi-exponential model was strongly or very strongly preferred. Conclusion Non-Gaussian DWI model-derived biomarkers are capable of detecting the chemotherapeutic response of tumors earlier than conventional ADC and tumor volume. The bi-exponential model provides better fitting than the statistical and stretched exponential models for the tumor and treatment models used in the current work. PMID:27919785
Transient photoresponse in amorphous In-Ga-Zn-O thin films under stretched exponential analysis
NASA Astrophysics Data System (ADS)
Luo, Jiajun; Adler, Alexander U.; Mason, Thomas O.; Bruce Buchholz, D.; Chang, R. P. H.; Grayson, M.
2013-04-01
We investigated transient photoresponse and Hall effect in amorphous In-Ga-Zn-O thin films and observed a stretched exponential response which allows characterization of the activation energy spectrum with only three fit parameters. Measurements of as-grown films and 350 K annealed films were conducted at room temperature by recording conductivity, carrier density, and mobility over day-long time scales, both under illumination and in the dark. Hall measurements verify approximately constant mobility, even as the photoinduced carrier density changes by orders of magnitude. The transient photoconductivity data fit well to a stretched exponential during both illumination and dark relaxation, but with slower response in the dark. The inverse Laplace transforms of these stretched exponentials yield the density of activation energies responsible for transient photoconductivity. An empirical equation is introduced, which determines the linewidth of the activation energy band from the stretched exponential parameter β. Dry annealing at 350 K is observed to slow the transient photoresponse.
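A sketch of a stretched-exponential (KWW) fit of this type (the transient values and time span are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched(t, s0, s_inf, tau, beta):
    """Stretched-exponential (KWW) relaxation:
    sigma(t) = s_inf + (s0 - s_inf) * exp(-(t / tau)**beta)."""
    return s_inf + (s0 - s_inf) * np.exp(-(t / tau) ** beta)

t = np.logspace(0, 5, 60)                           # s, day-long time scale
rng = np.random.default_rng(4)
sigma = stretched(t, 1.0, 0.05, 3e3, 0.5) + rng.normal(0, 0.005, t.size)

p, _ = curve_fit(stretched, t, sigma, p0=[1.0, 0.1, 1e3, 0.6])
print('tau = %.0f s, beta = %.2f' % (p[2], p[3]))
```

The appeal the abstract points to is economy: three shape parameters summarize the whole transient, and the fitted beta then characterizes the width of the underlying activation energy band.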
NGMIX: Gaussian mixture models for 2D images
NASA Astrophysics Data System (ADS)
Sheldon, Erin
2015-08-01
NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
Hertäg, Loreen; Hass, Joachim; Golovko, Tatiana; Durstewitz, Daniel
2012-01-01
For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating ("in vivo-like") input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a "high-throughput" model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
Shim, Woo Hyun; Kim, Ho Sung; Choi, Choong-Gon; Kim, Sang Joon
2015-01-01
Brain tumor cellularity has been assessed by using the apparent diffusion coefficient (ADC). However, the ADC value might be influenced by both perfusion and true molecular diffusion, and the perfusion effect on ADC can limit the reliability of ADC in the characterization of tumor cellularity, especially in hypervascular brain tumors. In contrast, the IVIM technique estimates parameter values for diffusion and perfusion effects separately. The purpose of our study was to compare ADC and IVIM for differentiating among glioblastoma, metastatic tumor, and primary CNS lymphoma (PCNSL), focusing on the diffusion-related parameter. We retrospectively reviewed the data of 128 patients with pathologically confirmed glioblastoma (n = 55), metastasis (n = 31), and PCNSL (n = 42) prior to any treatment. Two neuroradiologists independently calculated the maximum IVIM-f (fmax) and minimum IVIM-D (Dmin) by using 16 different b-values with a bi-exponential fitting of diffusion signal decay, minimum ADC (ADCmin) by using 0 and 1000 b-values with a mono-exponential fitting, and maximum normalized cerebral blood volume (nCBVmax). The differences in fmax, Dmin, nCBVmax, and ADCmin among the three tumor pathologies were determined by one-way ANOVA with multiple comparisons. The fmax and Dmin were correlated to the corresponding nCBV and ADC using partial correlation analysis, respectively. Using a mono-exponential fitting of diffusion signal decay, the mean ADCmin was significantly lower in PCNSL than in glioblastoma and metastasis. However, using a bi-exponential fitting, the mean Dmin did not significantly differ among the three groups. The mean fmax was significantly higher in the glioblastomas (reader 1, 0.103; reader 2, 0.109) and the metastases (reader 1, 0.105; reader 2, 0.107) than in the primary CNS lymphomas (reader 1, 0.025; reader 2, 0.023) (P < .001 for each). The correlation between fmax and the corresponding nCBV was highest in the glioblastoma group, and the correlation between Dmin and the corresponding ADC was highest in the primary CNS lymphoma group. Unlike the ADC value derived from a mono-exponential fitting of the diffusion signal, the diffusion-related parametric value derived from a bi-exponential fitting with separation of the perfusion effect does not differ among glioblastoma, metastasis, and PCNSL.
Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L
2014-10-01
Three competing mathematical fitting models (a point-by-point estimation method, a linear fit method, and an isoconversion method) of chemical stability (related substance growth) when using high temperature data to predict room temperature shelf-life were employed in a detailed comparison. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Calculated data with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
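A minimal sketch of the isoconversion-plus-Arrhenius idea (all numbers invented): take the rate at each temperature as the reciprocal of the time to reach the specification limit, fit ln k against 1/T, and extrapolate to room temperature:

```python
import numpy as np

# Invented isoconversion times (days for the degradant to reach the
# specification limit) at three accelerated temperatures; rate ~ 1/t_iso.
T = np.array([50.0, 60.0, 70.0]) + 273.15     # K
t_iso = np.array([120.0, 45.0, 18.0])         # days

R = 8.314                                      # J/(mol K)
slope, intercept = np.polyfit(1.0 / T, np.log(1.0 / t_iso), 1)
Ea = -slope * R                                # apparent activation energy
t_25 = 1.0 / np.exp(intercept + slope / 298.15)
print('Ea = %.0f kJ/mol, predicted time to spec at 25 C = %.0f days'
      % (Ea / 1e3, t_25))
```

Working at a fixed conversion level sidesteps the complex B-then-C kinetics described above, which is why the study finds isoconversion estimates more accurate than point-by-point or linear-fit alternatives.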
NASA Technical Reports Server (NTRS)
Giver, Lawrence P.; Benner, D. C.; Tomasko, M. G.; Fink, U.; Kerola, D.
1990-01-01
Transmission measurements made on near-infrared laboratory methane spectra have previously been fit using a Malkmus band model. The laboratory spectra were obtained in three groups at temperatures averaging 112, 188, and 295 K; band model fitting was done separately for each temperature group. These band model parameters cannot be used directly in scattering atmosphere model computations, so an exponential sum model is being developed which includes pressure and temperature fitting parameters. The goal is to obtain model parameters by least square fits at 10/cm intervals from 3800 to 9100/cm. These results will be useful in the interpretation of current planetary spectra and also NIMS spectra of Jupiter anticipated from the Galileo mission.
The validation of a generalized Hooke's law for coronary arteries.
Wang, Chong; Zhang, Wei; Kassab, Ghassan S
2008-01-01
The exponential form of constitutive model is widely used in biomechanical studies of blood vessels. There are two main issues, however, with this model: 1) the curve fits of experimental data are not always satisfactory, and 2) the material parameters may be oversensitive. A new type of strain measure in a generalized Hooke's law for blood vessels was recently proposed by our group to address these issues. The new model has one nonlinear parameter and six linear parameters. In this study, the stress-strain equation is validated by fitting the model to experimental data of porcine coronary arteries. Material constants of left anterior descending artery and right coronary artery for the Hooke's law were computed with a separable nonlinear least-squares method with an excellent goodness of fit. A parameter sensitivity analysis shows that the stability of material constants is improved compared with the exponential model and a biphasic model. A boundary value problem was solved to demonstrate that the model prediction can match the measured arterial deformation under experimental loading conditions. The validated constitutive relation will serve as a basis for the solution of various boundary value problems of cardiovascular biomechanics.
An interactive program for pharmacokinetic modeling.
Lu, D R; Mao, F
1993-05-01
A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in the C computer language based on the high-level user-interface Macintosh operating system. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method based on the χ² criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs that are currently available for IBM PC-compatible and other types of computers.
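The exponential stripping idea referenced here can be sketched in a few lines (this is the textbook "curve peeling" procedure, not PharmK's actual code; the thresholds are arbitrary):

```python
import numpy as np

def strip_exponentials(t, c, n_terms=2):
    """Exponential stripping ('curve peeling') for initial estimates:
    fit the terminal log-linear phase of the curve, subtract it, and
    repeat on the residual. Returns (amplitudes, rate constants),
    slowest phase first."""
    amps, rates = [], []
    resid = np.asarray(c, dtype=float).copy()
    for _ in range(n_terms):
        keep = resid > 0.02 * resid.max()                # usable part of this phase
        idx = np.flatnonzero(keep)[-max(3, keep.sum() // 2):]  # its tail half
        slope, log_a = np.polyfit(t[idx], np.log(resid[idx]), 1)
        amps.append(np.exp(log_a))
        rates.append(-slope)
        resid = resid - amps[-1] * np.exp(-rates[-1] * t)
    return np.array(amps), np.array(rates)

t = np.linspace(0.1, 24.0, 50)                           # h
conc = 8 * np.exp(-1.2 * t) + 2 * np.exp(-0.15 * t)      # two-compartment curve
print(strip_exponentials(t, conc))   # ~ amplitudes (2, 8), rates (0.15, 1.2)
```

Stripping estimates are only rough, which is why programs of this kind hand them off to a proper nonlinear optimizer (here, Levenberg-Marquardt) as starting values.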
Feasibility of quasi-random band model in evaluating atmospheric radiance
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Mirakhur, N.
1980-01-01
The use of the quasi-random band model in evaluating upwelling atmospheric radiation is investigated. The spectral transmittance and total band absorptance are evaluated for selected molecular bands by using the line-by-line model, the quasi-random band model, the exponential sum fit method, and empirical correlations, and these are compared with the available experimental results. The atmospheric transmittance and upwelling radiance were calculated by using the line-by-line and quasi-random band models and were compared with the results of an existing program called LOWTRAN. The results obtained by the exponential sum fit and empirical relations were not in good agreement with experimental results, and their use cannot be justified for atmospheric studies. The line-by-line model was found to be the best model for atmospheric applications, but it is not practical because of high computational costs. The results of the quasi-random band model compare well with the line-by-line and experimental results. The use of the quasi-random band model is recommended for evaluation of the atmospheric radiation.
NASA Astrophysics Data System (ADS)
Ye, Huping; Li, Junsheng; Zhu, Jianhua; Shen, Qian; Li, Tongji; Zhang, Fangfang; Yue, Huanyin; Zhang, Bing; Liao, Xiaohan
2017-10-01
The absorption coefficient of water is an important bio-optical parameter for water optics and water color remote sensing. However, scattering correction is essential to obtain accurate absorption coefficient values in situ using the nine-wavelength absorption and attenuation meter AC9. The standard correction fails in Case 2 water, where its assumption of zero absorption in the near-infrared (NIR) region does not hold, and it underestimates the absorption coefficient in the red region, which affects processes such as semi-analytical remote sensing inversion. In this study, the scattering contribution was evaluated by an exponential fitting approach using AC9 measurements at seven wavelengths (412, 440, 488, 510, 532, 555, and 715 nm), and the corresponding scattering correction was applied. The correction was applied to representative in situ data of moderately turbid coastal water, highly turbid coastal water, eutrophic inland water, and turbid inland water. The results suggest that the absorption levels in the red and NIR regions are significantly higher than those obtained using standard scattering error correction procedures. Knowledge of the deviation between this method and the commonly used scattering correction methods will facilitate the evaluation of its effect on satellite remote sensing of water constituents and on general optical research using different scattering-correction methods.
Study on peak shape fitting method in radon progeny measurement.
Yang, Jinmin; Zhang, Lei; Abdumomin, Kadir; Tang, Yushi; Guo, Qiuju
2015-11-01
Alpha spectrum measurement is one of the most important methods of measuring radon progeny concentration in the environment. However, the accuracy of this method is affected by peak tailing due to the energy losses of alpha particles. This article presents a peak shape fitting method that can overcome the peak tailing problem in most situations. On a typical measured alpha spectrum, consecutive peaks overlap even when their energies are not close to each other, and it is difficult to calculate the exact count of each peak. The peak shape fitting method uses a combination of Gaussian and exponential functions, which can capture the features of those peaks, to fit the measured curve. It provides the net counts of each peak explicitly, which were used in the Kerr calculation procedure for radon progeny concentration measurement. The results show that the fitted curve agrees well with the measured curve, and the influence of peak tailing is reduced. The method was further validated by the agreement between the radon equilibrium equivalent concentration based on this method and the measured values of some commercial radon monitors, such as EQF3220 and WLx. In addition, this method improves the accuracy of individual radon progeny concentration measurement. In particular, for the ²¹⁸Po peak, after eliminating the peak tailing influence, the calculated ²¹⁸Po concentration was reduced by 21%. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
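A sketch of such a Gaussian-plus-exponential-tail (exponentially modified Gaussian) peak fit on an invented two-peak spectrum; the 6 MeV region is suggestive of the ²¹⁸Po line, but all numbers are illustrative and this is not the paper's exact parameterization:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import exponnorm

def alpha_peak(E, area, mu, sigma, tau):
    """Gaussian peak with a low-energy exponential tail: scipy's
    exponnorm is right-tailed, so mirror it in energy."""
    return area * exponnorm.pdf(-E, tau / sigma, loc=-mu, scale=sigma)

def two_peaks(E, *p):
    return alpha_peak(E, *p[:4]) + alpha_peak(E, *p[4:])

E = np.linspace(5200, 6300, 1100)                       # keV
spec = (alpha_peak(E, 4000, 6000, 12, 80)               # near the 218Po line
        + alpha_peak(E, 3000, 5600, 12, 90)             # invented second peak
        + np.random.default_rng(5).normal(0, 1.5, E.size))

p0 = [3500, 5990, 10, 60, 3500, 5610, 10, 60]
popt, _ = curve_fit(two_peaks, E, spec, p0=p0, maxfev=20000)
print('net areas: %.0f, %.0f' % (popt[0], popt[4]))     # explicit net counts
```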
Prony series spectra of structural relaxation in N-BK7 for finite element modeling.
Koontz, Erick; Blouin, Vincent; Wachtel, Peter; Musgraves, J David; Richardson, Kathleen
2012-12-20
Structural relaxation behavior of N-BK7 glass was characterized at temperatures 20 °C above and below T12 for this glass, using a thermomechanical analyzer (TMA). T12 is a characteristic temperature corresponding to a viscosity of 10^12 Pa·s. The glass was subjected to quick temperature down-jumps preceded and followed by long isothermal holds. The exponential-like decay of the sample height was recorded and fitted using a unique Prony series method. The result of this method was a plot of the fit parameters revealing the presence of four distinct peaks, or distributions of relaxation times. The number of relaxation times decreased as the final test temperature was increased. The relaxation times did not shift significantly with changing temperature; however, the Prony weight terms varied essentially linearly with temperature. It was also found that the structural relaxation behavior of the glass trended toward single exponential behavior at temperatures above the testing range. The result of the analysis was a temperature-dependent Prony series model that can be used in finite element modeling of glass behavior in processes such as precision glass molding (PGM).
Statistics of Optical Coherence Tomography Data From Human Retina
de Juan, Joaquín; Ferrone, Claudia; Giannini, Daniela; Huang, David; Koch, Giorgio; Russo, Valentina; Tan, Ou; Bruni, Carlo
2010-01-01
Optical coherence tomography (OCT) has recently become one of the primary methods for noninvasive probing of the human retina. The pseudoimage formed by OCT (the so-called B-scan) varies probabilistically across pixels due to complexities in the measurement technique. Hence, sensitive automatic procedures of diagnosis using OCT may exploit statistical analysis of the spatial distribution of reflectance. In this paper, we perform a statistical study of retinal OCT data. We find that the stretched exponential probability density function can model well the distribution of intensities in OCT pseudoimages. Moreover, we show a small, but significant correlation between neighbor pixels when measuring OCT intensities with pixels of about 5 µm. We then develop a simple joint probability model for the OCT data consistent with known retinal features. This model fits well the stretched exponential distribution of intensities and their spatial correlation. In normal retinas, fit parameters of this model are relatively constant along retinal layers, but varies across layers. However, in retinas with diabetic retinopathy, large spikes of parameter modulation interrupt the constancy within layers, exactly where pathologies are visible. We argue that these results give hope for improvement in statistical pathology-detection methods even when the disease is in its early stages. PMID:20304733
NASA Technical Reports Server (NTRS)
Atwell, William; Tylka, Allan; Dietrich, William; Badavi, Francis; Rojdev, Kristina
2011-01-01
Several methods for analyzing the particle spectra from extremely large solar proton events, called Ground-Level Enhancements (GLEs), have been developed and utilized by the scientific community to describe the solar proton energy spectra and have been further applied to ascertain the radiation exposures to humans and radio-sensitive systems, namely electronics. In this paper 12 GLEs dating back to 1956 are discussed, and the three methods for describing the solar proton energy spectra are reviewed. The three spectral fitting methodologies are EXP [an exponential in proton rigidity (R)], WEIB [Weibull fit: an exponential in proton energy], and the Band function (BAND) [a double power law in proton rigidity]. The EXP and WEIB methods use low energy (MeV) GLE solar proton data and make extrapolations out to approx.1 GeV. On the other hand, the BAND method utilizes low- and medium-energy satellite solar proton data combined with high-energy solar proton data deduced from high-latitude neutron monitoring stations. Thus, the BAND method completely describes the entire proton energy spectrum based on actual solar proton observations out to 10 GeV. Using the differential spectra produced from each of the 12 selected GLEs for each of the three methods, radiation exposures are presented and discussed in detail. These radiation exposures are then compared with the current 30-day and annual crew exposure limits and the radiation effects to electronics.
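For reference, the three fitting forms named here can be written as follows (one common convention; notation varies between authors, and the Band normalization below should be checked against the primary references):

```latex
% J = event-integrated fluence, R = rigidity, E = energy, gamma_2 > gamma_1
\begin{align*}
\text{EXP:}\quad  & J(>R) = J_0\, e^{-R/R_0}\\[2pt]
\text{WEIB:}\quad & J(>E) = J_0\, e^{-k E^{\alpha}}\\[2pt]
\text{BAND:}\quad & J(>R) =
  \begin{cases}
    J_0\, R^{-\gamma_1} e^{-R/R_0}, & R \le (\gamma_2-\gamma_1) R_0\\
    J_0\, R^{-\gamma_2}\,\bigl[(\gamma_2-\gamma_1) R_0\bigr]^{\gamma_2-\gamma_1}
      e^{\gamma_1-\gamma_2}, & R > (\gamma_2-\gamma_1) R_0
  \end{cases}
\end{align*}
```

The two Band branches and their prefactor are constructed so that the spectrum and its first derivative are continuous at the junction rigidity, which is what lets a single smooth curve span the satellite and neutron-monitor energy ranges.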
A comparison of methods of fitting several models to nutritional response data.
Vedenov, D; Pesti, G M
2008-02-01
A variety of models have been proposed to fit nutritional input-output response data. The models are typically nonlinear; therefore, fitting the models usually requires sophisticated statistical software and training to use it. An alternative tool for fitting nutritional response models was developed by using widely available and easier-to-use Microsoft Excel software. The tool, implemented as an Excel workbook (NRM.xls), allows simultaneous fitting and side-by-side comparisons of several popular models. This study compared the results produced by the tool we developed and PROC NLIN of SAS. The models compared were the broken line (ascending linear and quadratic segments), saturation kinetics, 4-parameter logistics, sigmoidal, and exponential models. The NRM.xls workbook provided results nearly identical to those of PROC NLIN. Furthermore, the workbook successfully fit several models that failed to converge in PROC NLIN. Two data sets were used as examples to compare fits by the different models. The results suggest that no particular nonlinear model is necessarily best for all nutritional response data.
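A sketch of one of the listed models, the ascending-linear broken line, fitted with generic nonlinear least squares rather than the workbook (all data invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line(x, plateau, slope, brk):
    """Ascending-linear broken-line model: the response rises linearly
    with nutrient intake up to a breakpoint, then stays at the plateau."""
    return np.where(x < brk, plateau - slope * (brk - x), plateau)

intake = np.array([0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1])  # % of diet
gain = np.array([310, 370, 425, 480, 520, 528, 531, 529.0])  # response

p, _ = curve_fit(broken_line, intake, gain, p0=[530, 500, 0.85])
print('breakpoint %.2f, plateau %.0f, slope %.0f' % (p[2], p[0], p[1]))
```

The kink at the breakpoint makes the gradient discontinuous there, which is one reason such models can fail to converge in general-purpose optimizers and why side-by-side fitting tools are useful.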
The Lunar Rock Size Frequency Distribution from Diviner Infrared Measurements
NASA Astrophysics Data System (ADS)
Elder, C. M.; Hayne, P. O.; Piqueux, S.; Bandfield, J.; Williams, J. P.; Ghent, R. R.; Paige, D. A.
2016-12-01
Knowledge of the rock size frequency distribution on a planetary body is important for understanding its geologic history and for selecting landing sites. The rock size frequency distribution can be estimated by counting rocks in high-resolution images, but most bodies in the solar system have limited areas with adequate coverage. We propose an alternative method to derive and map rock size frequency distributions using multispectral thermal infrared data acquired at multiple times during the night. We demonstrate this new technique for the Moon using data from the Lunar Reconnaissance Orbiter (LRO) Diviner radiometer in conjunction with three-dimensional thermal modeling, leveraging the differential cooling rates of different rock sizes. We assume an exponential rock size frequency distribution, which has been shown to yield a good fit to rock populations in various locations on the Moon, Mars, and Earth [2, 3], and solve for the best radiance fits as a function of local time and wavelength. This method presents several advantages: 1) unlike other thermally derived rock abundance techniques, it is sensitive to rocks smaller than the diurnal skin depth; 2) it does not result in an apparent decrease in rock abundance at night; and 3) it can be validated using images taken at the lunar surface. This method yields both the fraction of the surface covered in rocks of all sizes and the exponential factor, which defines the rate of drop-off in the exponential function at large rock sizes. We will present maps of both these parameters for the Moon, and provide a geological interpretation. In particular, this method reveals rocks in the lunar highlands that are smaller than previous thermal methods could detect. [1] Bandfield J. L. et al. (2011) JGR, 116, E00H02. [2] Golombek and Rapp (1997) JGR, 102, E2, 4117-4129. [3] Cintala, M.J. and K.M. McBride (1995) NASA Technical Memorandum 104804.
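In this exponential parameterization, the two mapped quantities correspond to a total rock abundance and an exponential drop-off rate. A minimal fitting sketch follows, using the cumulative-fractional-area form attributed to Golombek and Rapp [2]; the diameters and area fractions are invented values, not Diviner-derived data.

```python
import numpy as np
from scipy.optimize import curve_fit

def rock_cfa(D, k, q):
    # Cumulative fraction of surface covered by rocks of diameter >= D (m),
    # exponential in D; k is the total rock abundance, q the drop-off rate
    return k * np.exp(-q * D)

# Hypothetical cumulative fractional areas from rock counts
D = np.array([0.1, 0.2, 0.4, 0.8, 1.5, 3.0])          # m
cfa = np.array([0.045, 0.035, 0.022, 0.009, 0.002, 0.0001])

(k, q), _ = curve_fit(rock_cfa, D, cfa, p0=[0.05, 2.0])
print(f"total rock abundance k = {k:.3f}, drop-off q = {q:.2f} per m")
```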
Exploiting the Adaptation Dynamics to Predict the Distribution of Beneficial Fitness Effects
2016-01-01
Adaptation of asexual populations is driven by beneficial mutations, and therefore the dynamics of this process depends, among other factors, on the distribution of beneficial fitness effects. It is known that on uncorrelated fitness landscapes, this distribution can only be of three types: truncated, exponential and power law. We performed extensive stochastic simulations to study the adaptation dynamics on rugged fitness landscapes, and identified two quantities that can be used to distinguish the underlying distribution of beneficial fitness effects. The first quantity studied here is the fitness difference between successive mutations that spread in the population, which is found to decrease for truncated distributions, remain nearly constant for exponentially decaying distributions, and increase when the fitness distribution decays as a power law. The second quantity of interest, namely the rate of change of fitness with time, also shows quantitatively different behaviour for different beneficial fitness distributions. The patterns displayed by the two aforementioned quantities are found to hold for both low and high mutation rates. We discuss how these patterns can be exploited to determine the distribution of beneficial fitness effects in microbial experiments. PMID:26990188
A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.
1981-06-01
Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on ... these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when ... assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that
Evidence of the Exponential Decay Emission in the Swift Gamma-ray Bursts
NASA Technical Reports Server (NTRS)
Sakamoto, T.; Sato, G.; Hill, J.E.; Krimm, H.A.; Yamazaki, R.; Takami, K.; Swindell, S.; Osborne, J.P.
2007-01-01
We present a systematic study of the steep decay emission of gamma-ray bursts (GRBs) observed by the Swift X-Ray Telescope (XRT). In contrast to the analysis in the recent literature, instead of extrapolating the Burst Alert Telescope (BAT) data down into the XRT energy range, we extrapolated the XRT data up to the BAT energy range, 15-25 keV, to produce the BAT and XRT composite light curve. Based on our composite light curve fitting, we have confirmed the existence of an exponential decay component which smoothly connects the BAT prompt data to the XRT steep decay for several GRBs. We also find that the XRT steep decay of some of the bursts can be well fitted by a combination of a power-law and an exponential decay model. We suggest that this exponential component may be emission from an external shock and a sign of the deceleration of the outflow during the prompt phase.
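A light-curve fit of the kind described, a power law plus an exponential decay component, can be sketched as follows. The time grid, flux values, noise model, and starting parameters are synthetic placeholders, not XRT data.

```python
import numpy as np
from scipy.optimize import curve_fit

def pl_plus_exp(t, A, alpha, B, tau):
    # Power law plus exponential decay, as in the steep-decay fits above
    return A * t ** (-alpha) + B * np.exp(-t / tau)

t = np.logspace(1, 3, 40)                        # seconds since trigger
rng = np.random.default_rng(1)
flux = pl_plus_exp(t, 5.0, 1.2, 200.0, 60.0) * rng.lognormal(0, 0.05, t.size)

popt, _ = curve_fit(pl_plus_exp, t, flux, p0=[1.0, 1.0, 100.0, 50.0])
print("power-law index = %.2f, e-folding time = %.0f s" % (popt[1], popt[3]))
```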
Kawasaki, Yohei; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka
2016-01-01
Background: Previously, we proposed a model for ordinal scale scoring in which the individual thresholds for each item constitute a distribution for that item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods: Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results: The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion: The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern. PMID:27761346
On the Time-Dependent Analysis of Gamow Decay
ERIC Educational Resources Information Center
Durr, Detlef; Grummt, Robert; Kolb, Martin
2011-01-01
Gamow's explanation of the exponential decay law uses complex "eigenvalues" and exponentially growing "eigenfunctions". This raises the question of how Gamow's description fits into the quantum mechanical description of nature, which is based on real eigenvalues and square-integrable wavefunctions. Observing that the time evolution of any…
Vibronic relaxation dynamics of o-dichlorobenzene in its lowest excited singlet state
NASA Astrophysics Data System (ADS)
Liu, Benkang; Zhao, Haiyan; Lin, Xiang; Li, Xinxin; Gao, Mengmeng; Wang, Li; Wang, Wei
2018-01-01
Vibronic dynamics of o-dichlorobenzene in its lowest excited singlet state, S1, is investigated in real time by using the femtosecond pump-probe method combined with time-of-flight mass spectrometry and the photoelectron velocity mapping technique. Relaxation processes for excitation in the range of 276-252 nm can be fitted by a single exponential decay model, while for wavelengths shorter than 252 nm a two-exponential decay model must be adopted to simulate the transient profiles. Lifetime constants of the vibrationally excited S1 states change from 651 ± 10 ps for 276 nm excitation to 61 ± 1 ps for 242 nm excitation. Both the internal conversion from the S1 to the highly vibrationally excited ground state S0 and the intersystem crossing from the S1 to the triplet state are supposed to play important roles in the de-excitation processes. An exponential fit of the de-excitation rates against excitation energy implies that the de-excitation process starts from the highly vibrationally excited S0 state, which is validated by probing the relaxation following photoexcitation at 281 nm, below the S1 origin. Time-dependent photoelectron kinetic energy distributions have been obtained experimentally. As the excitation wavelength changes from 276 nm to 242 nm, different cationic vibronic vibrations can be populated, determined by the Franck-Condon factors between the strongly geometry-distorted excited singlet states and the final cationic states.
Probing Gamma-ray Emission of Geminga and Vela with Non-stationary Models
NASA Astrophysics Data System (ADS)
Chai, Yating; Cheng, Kwong-Sang; Takata, Jumpei
2016-06-01
It is generally believed that the high-energy emissions from isolated pulsars are emitted by relativistic electrons/positrons accelerated in outer magnetospheric accelerators (outer gaps) via the curvature radiation mechanism, which has a simple exponential cut-off spectrum. However, many gamma-ray pulsars detected by the Fermi LAT (Large Area Telescope) cannot be fitted by a simple exponential cut-off spectrum; instead, a sub-exponential cut-off is more appropriate. It is proposed that realistic outer gaps are non-stationary, and that the observed spectrum is a superposition of different stationary states that are controlled by the currents injected from the inner and outer boundaries. The Vela and Geminga pulsars have the largest fluxes among all observed targets, which allows us to carry out very detailed phase-resolved spectral analysis. We have divided the pulse profiles of the Vela and Geminga pulsars into 19 (the off-pulse of Vela was not included) and 33 phase bins, respectively. We find that most phase-resolved spectra still cannot be fitted by a simple exponential spectrum: in fact, a sub-exponential spectrum is necessary. We conclude that non-stationary states exist even down to the very fine phase bins.
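The distinction at issue can be made concrete with the standard power law times (sub-)exponential cut-off form. The function below is a generic parameterization of that shape, not the authors' fitted model.

```python
import numpy as np

def cutoff_spectrum(E, K, gamma, E_c, b):
    """Power law with a (sub-)exponential cut-off.

    b = 1 gives the simple exponential cut-off expected for curvature
    radiation; b < 1 gives the sub-exponential shape that fits many
    observed gamma-ray pulsar spectra better.
    """
    return K * E ** (-gamma) * np.exp(-(E / E_c) ** b)
```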
Reljin, Natasa; Reyes, Bersain A.; Chon, Ki H.
2015-01-01
In this paper, we propose the use of blanket fractal dimension (BFD) to estimate the tidal volume from smartphone-acquired tracheal sounds. We collected tracheal sounds with a Samsung Galaxy S4 smartphone from five (N = 5) healthy volunteers. Each volunteer performed the experiment six times: first to obtain the linear and exponential fitting models, and then to fit new data onto the existing models. Thus, the total number of recordings was 30. The estimated volumes were compared to the true values obtained with a Respitrace system, which was considered the reference. Since Shannon entropy (SE) is frequently used as a feature in tracheal sound analyses, we also estimated the tidal volume from the same sounds using SE. The performance of the BFD and SE methods was quantified by the normalized root-mean-squared error (NRMSE). The results show that the BFD outperformed the SE (the NRMSE was at least twice as small). The smallest NRMSE, 15.877% ± 9.246% (mean ± standard deviation), was obtained with the BFD and the exponential model. In addition, it was shown that the fitting curves calculated during the first day of experiments could be successfully used for at least the five following days. PMID:25923929
Water quality trend analysis for the Karoon River in Iran.
Naddafi, K; Honari, H; Ahmadi, M
2007-11-01
The Karoon River basin, with a basin area of 67,000 km(2), is located in the southern part of Iran. Discharge and water quality variables have been monitored monthly at the Gatvand and Khorramshahr stations of the Karoon River, for the periods 1967-2005 and 1969-2005, respectively. In this paper, the time series of monthly values of the water quality parameters and the discharge were analyzed using statistical methods, trends were tested for, and the best-fitting models were evaluated. The Kolmogorov-Smirnov test was used to select the theoretical distribution which best fitted the data. Simple regression was used to examine the concentration-time relationships. The concentration-time relationships showed better correlation at the Khorramshahr station than at the Gatvand station. The exponential model better describes the concentration-time relationships at the Khorramshahr station, whereas at the Gatvand station the logarithmic model fits better. The correlation coefficients are positive for all of the variables at the Khorramshahr station; at the Gatvand station all of the variables are positive except magnesium (Mg2+), bicarbonates (HCO3-) and temporary hardness, which show a decreasing relationship.
Concave utility, transaction costs, and risk in measuring discounting of delayed rewards.
Kirby, Kris N; Santiesteban, Mariana
2003-01-01
Research has consistently found that the decline in the present values of delayed rewards as delay increases is better fit by hyperbolic than by exponential delay-discounting functions. However, concave utility, transaction costs, and risk each could produce hyperbolic-looking data, even when the underlying discounting function is exponential. In Experiments 1 (N = 45) and 2 (N = 103), participants placed bids indicating their present values of real future monetary rewards in computer-based 2nd-price auctions. Both experiments suggest that utility is not sufficiently concave to account for the superior fit of hyperbolic functions. Experiment 2 provided no evidence that the effects of transaction costs and risk are large enough to account for the superior fit of hyperbolic functions.
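The two function families at issue have simple one-parameter forms for the present value of a unit reward: hyperbolic V = 1/(1 + kD) and exponential V = exp(-kD), where D is the delay. The comparison sketch below fits both to invented bid data; the values and the sum-of-squares comparison are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(D, k):
    return 1.0 / (1.0 + k * D)     # present value per unit amount

def exponential(D, k):
    return np.exp(-k * D)

# Hypothetical bids: delay in days, present value as fraction of amount
delay = np.array([1, 7, 30, 90, 180, 365], float)
value = np.array([0.95, 0.85, 0.70, 0.55, 0.45, 0.35])

for name, f in [("hyperbolic", hyperbolic), ("exponential", exponential)]:
    (k,), _ = curve_fit(f, delay, value, p0=[0.01])
    sse = np.sum((value - f(delay, k)) ** 2)
    print(f"{name}: k = {k:.4f}, SSE = {sse:.4f}")
```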
Design data for radars based on 13.9 GHz Skylab scattering coefficient measurements
NASA Technical Reports Server (NTRS)
Moore, R. K. (Principal Investigator)
1974-01-01
The author has identified the following significant results. Measurements made at 13.9 GHz with the radar scatterometer on Skylab have been combined to produce median curves of the variation of scattering coefficient with angle of incidence out to 45 deg. Because of the large number of observations, and the large area averaged for each measured data point, these curves may be used as a new design base for radars. A reasonably good fit at larger angles is obtained using the theoretical expression based on an exponential height correlation function, and also using Lambert's law. For angles under 10 deg, a different fit based on the exponential correlation function and a fit based on geometric-optics expressions are both reasonably valid.
NASA Technical Reports Server (NTRS)
Van Buren, Dave
1986-01-01
Equivalent width data from Copernicus and IUE appear to have an exponential, rather than a Gaussian distribution of errors. This is probably because there is one dominant source of error: the assignment of the background continuum shape. The maximum likelihood method of parameter estimation is presented for the case of exponential statistics, in enough generality for application to many problems. The method is applied to global fitting of Si II, Fe II, and Mn II oscillator strengths and interstellar gas parameters along many lines of sight. The new values agree in general with previous determinations but are usually much more tightly constrained. Finally, it is shown that care must be taken in deriving acceptable regions of parameter space because the probability contours are not generally ellipses whose axes are parallel to the coordinate axes.
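One practical consequence of a two-sided exponential error distribution is that maximum-likelihood parameter estimation reduces to minimizing the summed absolute residuals (L1 norm) rather than the summed squares appropriate for Gaussian errors. The linear-model sketch below illustrates this under that reading, with synthetic data standing in for equivalent widths; it is not the paper's global fitting code.

```python
import numpy as np
from scipy.optimize import minimize

def l1_loss(params, x, y):
    # Negative log-likelihood under exponential (Laplace) errors is,
    # up to constants, the sum of absolute residuals
    a, b = params
    return np.sum(np.abs(y - (a + b * x)))

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 50)
y = 1.0 + 0.5 * x + rng.laplace(0, 0.3, x.size)   # synthetic measurements

res = minimize(l1_loss, x0=[0.0, 1.0], args=(x, y), method="Nelder-Mead")
print("intercept = %.2f, slope = %.2f" % tuple(res.x))
```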
Lee, Peter N; Fry, John S; Thornton, Alison J
2014-02-01
We attempted to quantify the decline in stroke risk following quitting smoking using the negative exponential model, with methodology previously employed for IHD. We identified 22 blocks of RRs (from 13 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We tried to estimate the half-life (H, the time since quitting at which the excess risk becomes half that of a continuing smoker) for each block. The method failed to converge or produced very variable estimates of H in nine blocks with a current smoker RR <1.40. Rejecting these, and combining blocks by amount smoked in one study where problems arose in model-fitting, the final analyses used 11 blocks. Goodness-of-fit was adequate for each block, with the combined estimate of H being 4.78 (95% CI 2.17-10.50) years. However, considerable heterogeneity existed, unexplained by any factor studied, with the random-effects estimate 3.08 (1.32-7.16). Sensitivity analyses allowing for reverse causation or differing assumed times for the final quitting period gave similar results. The estimates of H are similar for stroke and IHD, and the individual estimates similarly heterogeneous. Fitting the model is harder for stroke, due to its weaker association with smoking. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
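As described, the negative exponential model implies that a former smoker's excess relative risk halves every H years after quitting. A fitting sketch under that reading follows; the relative risks by time-since-quitting category are invented, not the meta-analysis data.

```python
import numpy as np
from scipy.optimize import curve_fit

def quit_rr(t, rr_smoker, H):
    # Excess risk of a former smoker halves every H years after quitting
    return 1.0 + (rr_smoker - 1.0) * np.exp(-np.log(2.0) * t / H)

# Hypothetical RRs at category midpoints of time since quitting (years)
t_quit = np.array([1, 3, 7.5, 15, 25], float)
rr = np.array([1.55, 1.40, 1.22, 1.10, 1.04])

popt, _ = curve_fit(quit_rr, t_quit, rr, p0=[1.6, 5.0])
print("RR(current smoker) = %.2f, half-life H = %.1f years" % tuple(popt))
```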
VizieR Online Data Catalog: Catalog of Kepler flare stars (Van Doorsselaere+, 2017)
NASA Astrophysics Data System (ADS)
van Doorsselaere, T.; Shariati, H.; Debosscher, J.
2017-11-01
With an automated detection method, we have identified stellar flares in the long cadence observations of Kepler during quarter 15. We list each flare time for the respective Kepler objects. Furthermore, we list the flare amplitude and decay time after fitting the flare light curve with an exponential decay. Flare start times in long cadence data of Kepler during quarter 15. (1 data file).
Chowell, Gerardo; Viboud, Cécile
2016-10-01
The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing models that capture the baseline transmission characteristics in order to generate reliable epidemic forecasts. Improved models for epidemic forecasting could be achieved by identifying signature features of epidemic growth, which could inform the design of models of disease spread and reveal important characteristics of the transmission process. In particular, it is often taken for granted that the early growth phase of different growth processes in nature follows early exponential growth dynamics. In the context of infectious disease spread, this assumption is often convenient to describe a transmission process with mass action kinetics using differential equations and to generate analytic expressions and estimates of the reproduction number. In this article, we carry out a simulation study to illustrate the impact of incorrectly assuming an exponential-growth model to characterize the early phase (e.g., 3-5 disease generation intervals) of an infectious disease outbreak that follows near-exponential growth dynamics. Specifically, we assess the impact on: 1) goodness of fit, 2) bias in the growth parameter, and 3) short-term epidemic forecasts. Designing transmission models and statistical approaches that more flexibly capture the profile of epidemic growth could lead to enhanced model fit, improved estimates of key transmission parameters, and more realistic epidemic forecasts.
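One way to reproduce the bias being probed is to generate incidence from a sub-exponential (generalized) growth law, dC/dt = r·C^p with p < 1, and then force an exponential fit onto the early phase. The sketch below does exactly that; the growth law is one common way to produce near-exponential dynamics, and all parameter values are chosen arbitrarily.

```python
import numpy as np
from scipy.optimize import curve_fit

def gg_cases(t, r, p, C0=1.0):
    # Closed-form solution of C'(t) = r * C(t)**p for p < 1
    return (C0 ** (1 - p) + (1 - p) * r * t) ** (1.0 / (1.0 - p))

t = np.arange(0, 21, 1.0)                 # roughly 3-5 generation intervals
cases = gg_cases(t, r=0.6, p=0.8)         # sub-exponential epidemic growth

# Force an exponential model C(t) = exp(r*t) onto the same early phase
(r_exp,), _ = curve_fit(lambda t, r: np.exp(r * t), t, cases, p0=[0.3])
print("true p = 0.8; growth rate from the (wrong) exponential fit = %.3f" % r_exp)
```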
NASA Astrophysics Data System (ADS)
Smith, Clint; Edwards, Jarrod; Fisher, Andmorgan
2010-04-01
Rapid detection of biological material is critical for determining the presence/absence of bacterial endospores within various investigative programs. Even more critical is that if select material tests positive for Bacillus endospores, then tests should provide data at the species level. Optical detection of microbial endospore formers such as Bacillus sp. can be heavy and cumbersome, and may only identify at the genus level. Data provided from this study will aid in the characterization needed by future detection systems for further rapid breakdown analysis, to gain insight into a more positive signature collection of Bacillus sp. The literature has shown that fluorescence spectroscopy can statistically separate endospores from other vegetative genera, but cannot separate endospore species from one another. Results of this study showed that endospore species separation is possible using laser-induced fluorescence with lifetime decay analysis for Bacillus endospores. Lifetime decays of B. subtilis, B. megaterium, B. coagulans, and B. anthracis Sterne strain were investigated. Using the multi-exponential fit method, the data showed three distinct lifetimes for each species, within the following ranges: 0.2-1.3 ns, 2.5-7.0 ns, and 7.5-15.0 ns, when excited at 307 nm. The four endospore species were individually separated using principal component analysis (95% CI).
Optical coherence tomography assessment of vessel wall degradation in aneurysmatic thoracic aortas
NASA Astrophysics Data System (ADS)
Real, Eusebio; Eguizabal, Alma; Pontón, Alejandro; Val-Bernal, J. Fernando; Mayorga, Marta; Revuelta, José M.; López-Higuera, José; Conde, Olga M.
2013-06-01
Optical coherence tomographic images of ascending thoracic human aortas from aneurysms exhibit disorders in the smooth muscle cell structure of the media layer of the aortic vessel, as well as elastin degradation. Ex-vivo measurements of human samples provide results that correlate with the pathologist's diagnosis in aneurysmatic and control aortas. The observed disorders are studied as possible hallmarks for aneurysm diagnosis. To this end, the backscattering profile along the vessel thickness has been evaluated by fitting its decay against two different models, a third-order polynomial fit and an exponential fit. The discontinuities present in the vessel wall of aneurysmatic aortas are slightly better identified with the exponential approach. Aneurysmatic aortic walls present uneven reflectivity decay when compared with healthy vessels. The fitting error has emerged as the most favorable indicator for aneurysm diagnosis, as it provides a measure of how uniform the decay is along the vessel thickness.
NASA Astrophysics Data System (ADS)
Pasari, S.; Kundu, D.; Dikshit, O.
2012-12-01
Earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. The exponential, gamma, Weibull and lognormal distributions are well-established probability models for recurrence interval estimation. However, they have certain shortcomings, so it is worth searching for alternative distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
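The three-parameter exponentiated (generalized) exponential distribution has CDF F(x) = (1 - exp(-λ(x-μ)))^α for x > μ, with density f(x) = αλ·exp(-λ(x-μ))·(1 - exp(-λ(x-μ)))^(α-1). A maximum-likelihood fit can be sketched as below; the interval data are invented stand-ins, not the catalogue's actual recurrence times.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, x):
    # Exponentiated exponential: location mu, rate lam, shape alpha
    mu, log_lam, log_alpha = params
    lam, alpha = np.exp(log_lam), np.exp(log_alpha)
    z = x - mu
    if np.any(z <= 0):
        return np.inf                     # support requires x > mu
    u = -np.expm1(-lam * z)               # 1 - exp(-lam*z), computed stably
    return -np.sum(np.log(alpha * lam) - lam * z + (alpha - 1) * np.log(u))

# Hypothetical recurrence intervals (years) between large events
intervals = np.array([4., 5., 6., 7., 7., 8., 9., 10., 11., 13.,
                      14., 16., 18., 21., 24., 28., 31., 36., 41.])

res = minimize(neg_log_like, x0=[3.0, np.log(0.1), 0.0],
               args=(intervals,), method="Nelder-Mead")
mu, lam, alpha = res.x[0], np.exp(res.x[1]), np.exp(res.x[2])
print(f"mu = {mu:.1f} yr, lambda = {lam:.3f}/yr, alpha = {alpha:.2f}")
```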
Cross-Conjugated Nanoarchitectures
2013-08-23
Compounds were further evaluated by Lippert-Mataga analysis of the fluorescence solvatochromism and measurement of quantum yields and fluorescence lifetimes. (Tabulated photophysical data: A(mP)2A in cyclohexane, D(Th)2D in cyclohexane, A(Th)2A in toluene; parameters calculated from Lippert-Mataga plots; double exponential fits: τ1 = 21.5 ns (73%) and τ2 = 3.7 ns (27%); τ1 = 0.85 ns
Theory, computation, and application of exponential splines
NASA Technical Reports Server (NTRS)
Mccartin, B. J.
1981-01-01
A generalization of the semiclassical cubic spline known in the literature as the exponential spline is discussed. In actuality, the exponential spline represents a continuum of interpolants ranging from the cubic spline to the linear spline. A particular member of this family is uniquely specified by the choice of certain tension parameters. The theoretical underpinnings of the exponential spline are outlined. This development roughly parallels the existing theory for cubic splines. The primary extension lies in the ability of the exponential spline to preserve convexity and monotonicity present in the data. Next, the numerical computation of the exponential spline is discussed. A variety of numerical devices are employed to produce a stable and robust algorithm. An algorithm for the selection of tension parameters that will produce a shape-preserving approximant is developed. Selected curve-fitting examples are presented which clearly demonstrate the advantages of exponential splines over cubic splines.
TH-EF-207A-04: A Dynamic Contrast Enhanced Cone Beam CT Technique for Evaluation of Renal Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z; Shi, J; Yang, Y
Purpose: To develop a simple but robust method for the early detection and evaluation of renal function using a dynamic contrast enhanced cone beam CT technique. Methods: Experiments were performed on an integrated imaging and radiation research platform developed by our lab. Animals (n=3) were anesthetized with a 20uL Ketamine/Xylazine cocktail, and then received a 200uL injection of the iodinated contrast agent Iopamidol via the tail vein. Cone beam CT was acquired following contrast injection once per minute for up to 25 minutes. The cone beam CT was reconstructed with a dimension of 300×300×800 voxels at 130×130×130 µm voxel resolution. The middle kidney slices in the transverse and coronal planes were selected for image analysis. A double exponential function was used to fit the contrast enhanced signal intensity versus the time after contrast injection. Both pixel-based and region of interest (ROI)-based curve fitting were performed. Four parameters obtained from the curve fitting, namely the amplitude and flow constant for both the contrast wash-in and wash-out phases, were investigated for further analysis. Results: Robust curve fitting was demonstrated for both pixel-based (with R²>0.8 for >85% of pixels within the kidney contour) and ROI-based (R²>0.9 for all regions) analysis. Three different functional regions, renal pelvis, medulla and cortex, were clearly differentiated in the functional parameter map in the pixel-based analysis. ROI-based analysis showed that the half-lives T1/2 for the contrast wash-in and wash-out phases were 0.98±0.15 and 17.04±7.16, 0.63±0.07 and 17.88±4.51, and 1.48±0.40 and 10.79±3.88 minutes for the renal pelvis, medulla and cortex, respectively. Conclusion: A robust method based on dynamic contrast enhanced cone beam CT and double exponential curve fitting has been developed to analyze renal function in different functional regions. A future study will investigate the sensitivity of this technique in the detection of radiation-induced kidney dysfunction.
A Simple Mechanical Experiment on Exponential Growth
ERIC Educational Resources Information Center
McGrew, Ralph
2015-01-01
With a rod, cord, pulleys, and slotted masses, students can observe and graph exponential growth in the cord tension over a factor of increase as large as several hundred. This experiment is adaptable for use either in algebra-based or calculus-based physics courses, fitting naturally with the study of sliding friction. Significant parts of the…
Kinetic Analysis of the Main Temperature Stage of Fast Pyrolysis
NASA Astrophysics Data System (ADS)
Yang, Xiaoxiao; Zhao, Yuying; Xu, Lanshu; Li, Rui
2017-10-01
Kinetics of the thermal decomposition of eucalyptus chips was evaluated using a high-rate thermogravimetric analyzer (BL-TGA) designed by our research group. The experiments were carried out under non-isothermal conditions in order to determine the fast pyrolysis behavior of the main temperature stage (350-540ºC) at heating rates of 60, 120, 180, and 360ºC min-1. The Coats-Redfern integral method and four different reaction mechanism models were adopted to calculate the kinetic parameters, including the apparent activation energy and pre-exponential factor, and the Flynn-Wall-Ozawa method was employed to verify the apparent activation energy. The results showed that the estimated values were consistent with those obtained from the linear fitting equations, and the best-fit model for fast pyrolysis was identified.
Ertas, Gokhan; Onaygil, Can; Akin, Yasin; Kaya, Handan; Aribal, Erkin
2016-12-01
To investigate the accuracy of diffusion coefficients and diffusion coefficient ratios of breast lesions and of glandular breast tissue from mono- and stretched-exponential models for quantitative diagnosis in diffusion-weighted magnetic resonance imaging (MRI). We analyzed 170 pathologically confirmed lesions (85 benign and 85 malignant) imaged using a 3.0T MR scanner. Small regions of interest (ROIs) focusing on the highest signal intensity were obtained for lesions and also for the glandular tissue of the contralateral breast. The apparent diffusion coefficient (ADC) and distributed diffusion coefficient (DDC) were estimated by performing nonlinear fittings using mono- and stretched-exponential models, respectively. Coefficient ratios were calculated by dividing the lesion coefficient by the glandular tissue coefficient. The stretched exponential model provides significantly better fits than the mono-exponential model (P < 0.001): 65% of the better fits for glandular tissue and 71% of the better fits for lesions. High correlation was found in diffusion coefficients (0.99-0.81) and coefficient ratios (0.94) between the models. The highest diagnostic accuracy was found for the DDC ratio (area under the curve [AUC] = 0.93) when compared with lesion DDC, ADC ratio, and lesion ADC (AUC = 0.91, 0.90, 0.90), but with no statistically significant difference (P > 0.05). At optimal thresholds, the DDC ratio achieves 93% sensitivity, 80% specificity, and 87% overall diagnostic accuracy, while the ADC ratio yields 89% sensitivity, 78% specificity, and 83% overall diagnostic accuracy. The stretched exponential model fits the signal intensity measurements from both lesion and glandular tissue ROIs better. Although the DDC ratio estimated using the model shows a higher diagnostic accuracy than the ADC ratio, lesion DDC, and ADC, the difference is not statistically significant. J. Magn. Reson. Imaging 2016;44:1633-1641. © 2016 International Society for Magnetic Resonance in Medicine.
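The two signal models being compared have simple closed forms: mono-exponential S(b) = S0·exp(-b·ADC) and stretched-exponential S(b) = S0·exp(-(b·DDC)^α). The sketch below fits both to an invented set of ROI-averaged signals over b-values; the stretched form shown is the standard one, though the paper's exact implementation may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(b, S0, ADC):
    return S0 * np.exp(-b * ADC)

def stretched_exp(b, S0, DDC, alpha):
    # Stretched-exponential DWI model, 0 < alpha <= 1
    return S0 * np.exp(-(b * DDC) ** alpha)

b = np.array([0, 50, 100, 200, 400, 800, 1000], float)    # s/mm^2
S = np.array([1.00, 0.93, 0.88, 0.80, 0.68, 0.52, 0.47])  # hypothetical ROI

p_mono, _ = curve_fit(mono_exp, b, S, p0=[1.0, 1e-3])
p_str, _ = curve_fit(stretched_exp, b, S, p0=[1.0, 1e-3, 0.8],
                     bounds=([0, 0, 0.1], [2, 0.1, 1.0]))
print("ADC = %.2e mm^2/s" % p_mono[1])
print("DDC = %.2e mm^2/s, alpha = %.2f" % (p_str[1], p_str[2]))
```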
Rapid Global Fitting of Large Fluorescence Lifetime Imaging Microscopy Datasets
Warren, Sean C.; Margineanu, Anca; Alibhai, Dominic; Kelly, Douglas J.; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Katan, Matilda
2013-01-01
Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell homo-FRET data. A software package implementing this algorithm, FLIMfit, is available under an open source licence through the Open Microscopy Environment. PMID:23940626
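The core idea of variable projection is that, for fixed trial lifetimes, the best amplitudes of every pixel follow from a single linear least-squares solve, so the nonlinear search runs only over the small set of shared lifetimes. The sketch below shows that idea for a bi-exponential global fit on synthetic data; it omits the repetitive excitation, background, and instrument-response handling described above, and is not the FLIMfit implementation.

```python
import numpy as np
from scipy.optimize import minimize

def varpro_residual(log_taus, t, data):
    # For trial lifetimes, amplitudes of all pixels are solved linearly;
    # only the shared lifetimes remain as nonlinear parameters
    taus = np.exp(log_taus)
    basis = np.exp(-t[:, None] / taus[None, :])          # (n_times, n_comp)
    amps, *_ = np.linalg.lstsq(basis, data, rcond=None)  # (n_comp, n_pixels)
    return np.sum((data - basis @ amps) ** 2)

# Synthetic "image": 500 pixels sharing lifetimes of 0.5 and 3.0 ns
rng = np.random.default_rng(3)
t = np.linspace(0.05, 12.0, 120)                         # ns
clean = np.exp(-t[:, None] / 0.5) * rng.uniform(0.2, 1, 500) \
      + np.exp(-t[:, None] / 3.0) * rng.uniform(0.2, 1, 500)
data = clean + rng.normal(0, 0.01, clean.shape)

res = minimize(varpro_residual, x0=np.log([0.3, 5.0]),
               args=(t, data), method="Nelder-Mead")
print("recovered lifetimes (ns):", np.sort(np.exp(res.x)))
```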
Manikis, Georgios C; Marias, Kostas; Lambregts, Doenja M J; Nikiforaki, Katerina; van Heeswijk, Miriam M; Bakers, Frans C H; Beets-Tan, Regina G H; Papanikolaou, Nikolaos
2017-01-01
The purpose of this study was to compare the performance of four diffusion models, including mono- and bi-exponential, both Gaussian and non-Gaussian, in diffusion weighted imaging of rectal cancer. Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a 7 b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm2) on a 1.5T scanner. Four different diffusion models, including mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied to whole tumor volumes of interest. Two different statistical criteria were used to assess their fitting performance: the adjusted-R2 and the Root Mean Square Error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fitting performance. However, both criteria for model selection indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model in an average area of 53% and 33%, respectively. Non-Gaussian behavior was exhibited in an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and over the overall tumor area. No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior.
Rigby, Robert A; Stasinopoulos, D Mikis
2004-10-15
The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^ν having a shifted and scaled (truncated) standard power exponential distribution with parameter τ. The distribution has four parameters and is denoted BCPE(μ, σ, ν, τ). The parameters μ, σ, ν and τ may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood, with respect to μ, σ, ν and τ, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation provides a generalization of the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution, and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.
Corzett, Christopher H; Goodman, Myron F; Finkel, Steven E
2013-06-01
Escherichia coli DNA polymerases (Pol) II, IV, and V serve dual roles by facilitating efficient translesion DNA synthesis while simultaneously introducing genetic variation that can promote adaptive evolution. Here we show that these alternative polymerases are induced as cells transition from exponential to long-term stationary-phase growth in the absence of induction of the SOS regulon by external agents that damage DNA. By monitoring the relative fitness of isogenic mutant strains expressing only one alternative polymerase over time, spanning hours to weeks, we establish distinct growth phase-dependent hierarchies of polymerase mutant strain competitiveness. Pol II confers a significant physiological advantage by facilitating efficient replication and creating genetic diversity during periods of rapid growth. Pol IV and Pol V make the largest contributions to evolutionary fitness during long-term stationary phase. Consistent with their roles providing both a physiological and an adaptive advantage during stationary phase, the expression patterns of all three SOS polymerases change during the transition from log phase to long-term stationary phase. Compared to the alternative polymerases, Pol III transcription dominates during mid-exponential phase; however, its abundance decreases to <20% during long-term stationary phase. Pol IV transcription dominates as cells transition out of exponential phase into stationary phase and a burst of Pol V transcription is observed as cells transition from death phase to long-term stationary phase. These changes in alternative DNA polymerase transcription occur in the absence of SOS induction by exogenous agents and indicate that cell populations require appropriate expression of all three alternative DNA polymerases during exponential, stationary, and long-term stationary phases to attain optimal fitness and undergo adaptive evolution.
NASA Astrophysics Data System (ADS)
Hashimoto, Chihiro; Panizza, Pascal; Rouch, Jacques; Ushiki, Hideharu
2005-10-01
A new analytical concept is applied to the kinetics of the shrinking process of poly(N-isopropylacrylamide) (PNIPA) gels. When PNIPA gels are put into hot water above the critical temperature, two-step shrinking is observed, and the secondary shrinking of the gels is well fitted by a stretched exponential function. The exponent β characterizing the stretched exponential is always higher than one, although there are few analytical concepts for the stretched exponential function with β>1. As a new interpretation of this function, we propose a superposition of step (Heaviside) functions, and a new distribution function of characteristic times is deduced.
NASA Astrophysics Data System (ADS)
Brown, J. S.; Shaheen, S. E.
2018-04-01
Disorder in organic semiconductors has made it challenging to achieve performance gains; this is a result of the many competing and often nuanced mechanisms affecting charge transport. In this article, we attempt to illuminate one of these mechanisms in the hope of aiding experimentalists in exceeding current performance thresholds. Using a heuristic exponential function, energetic correlation has been added to the Gaussian disorder model (GDM). The new model is grounded in the concept that energetic correlations can arise in materials without strong dipoles or dopants, but may be a result of an incomplete crystal formation process. The proposed correlation has been used to explain the exponential tail states often observed in these materials; it is also better able to capture the carrier mobility field dependence, commonly known as the Poole-Frenkel dependence, when compared to the GDM. Investigation of simulated current transients shows that the exponential tail states do not necessitate Montroll and Scher fits. Montroll and Scher fits occur in the form of two distinct power law curves that share a common constant in their exponent; they are clearly observed as linear lines when the current transient is plotted on a log-log scale. Typically, these fits have been found appropriate for describing amorphous silicon and other disordered materials which display exponential tail states. Furthermore, we observe that the proposed correlation function leads to domains of energetically similar sites separated by boundaries where the site energies exhibit stochastic deviation. These boundary sites are found to be the source of the extended exponential tail states, and are responsible for high charge visitation frequency, which may be associated with the molecular turnover number and ultimately the material stability.
2010-04-01
of radiolabeling fusion proteins without the denaturing effects coincident with oxidative radio-iodination associated with the chloramine T method ... organ PS product = [(%ID/g)/AUC]*1000. Reportable Outcomes: (1) The plasma concentration decay curve for AGT-185 is shown in Figure 1. The % of ... injected dose (ID)/mL decreases rapidly in plasma following IV injection. This plasma decay curve was fit to the bi-exponential equation described above
Numerical Calculation of the Spectrum of the Severe (1%) Lightning Current and Its First Derivative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C G; Ong, M M; Perkins, M P
2010-02-12
Recently, the direct-strike lightning environment for the stockpile-to-target sequence was updated [1]. In [1], the severe (1%) lightning current waveforms for first and subsequent return strokes are defined based on Heidler's waveform. This report presents numerical calculations of the spectra of those 1% lightning current waveforms and their first derivatives. First, the 1% lightning current models are repeated here for convenience. Then, the numerical method for calculating the spectra is presented and tested. The test uses a double-exponential waveform and its first derivative, which we fit to the previous 1% direct-strike lightning environment from [2]. Finally, the resulting spectra are given and are compared with those of the double-exponential waveform and its first derivative.
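The test described, comparing a numerically computed spectrum against the double-exponential waveform's known analytic transform, can be sketched as follows. The waveform i(t) = I0·(e^(-at) - e^(-bt)) has Fourier transform I0·(1/(a+jω) - 1/(b+jω)); the parameter values here are illustrative return-stroke-like numbers, not the report's fitted environment.

```python
import numpy as np

# Double-exponential current waveform (parameters illustrative only)
I0, a, b = 218e3, 11354.0, 647265.0       # A, 1/s, 1/s
dt = 1e-8
t = np.arange(0, 2e-3, dt)
i_t = I0 * (np.exp(-a * t) - np.exp(-b * t))

# Numerical spectrum: FFT scaled by dt approximates the Fourier integral
spec = np.fft.rfft(i_t) * dt
f = np.fft.rfftfreq(t.size, dt)

# Analytic continuous-time spectrum for comparison
w = 2 * np.pi * f
analytic = I0 * (1.0 / (a + 1j * w) - 1.0 / (b + 1j * w))

err = np.max(np.abs(spec - analytic)[f < 1e6]) / np.max(np.abs(analytic))
print("max relative deviation below 1 MHz: %.2e" % err)
```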
Solving optimization problems by the public goods game
NASA Astrophysics Data System (ADS)
Javarone, Marco Alberto
2017-09-01
We introduce a method based on the Public Goods Game for solving optimization tasks. In particular, we focus on the Traveling Salesman Problem, an NP-hard problem whose search space grows exponentially with the number of cities. The proposed method considers a population whose agents are provided with a random solution to the given problem. Agents then interact by playing the Public Goods Game, using the fitness of their solutions as the currency of the game. Notably, agents with better solutions provide higher contributions, while those with worse ones tend to imitate the solutions of richer agents to increase their fitness. Numerical simulations show that the proposed method computes exact solutions, and suboptimal ones, in the considered search spaces. As a result, beyond proposing a new heuristic for combinatorial optimization problems, our work aims to highlight the potential of evolutionary game theory beyond its current horizons.
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-07-01
In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to high interaction rate of the particles with the detector. Pile-up effects can lead to a severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, for decreasing the exposure times in medical imaging applications, it is important to maintain the pulses and extract their true information by pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one-by-one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration which fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results prove the effectiveness of the proposed method as a pile-up processing scheme designed for spectroscopic and medical radiation detection applications.
On the Prony series representation of stretched exponential relaxation
NASA Astrophysics Data System (ADS)
Mauro, John C.; Mauro, Yihong Z.
2018-09-01
Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of the stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first-derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
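A simple way to build such a Prony representation is to fix logarithmically spaced relaxation times and solve a non-negative least-squares problem for the weights. The sketch below is a minimal version of that construction (not the optimized coefficients reported in the paper), using one of the critical stretching exponents, β = 3/7.

```python
import numpy as np
from scipy.optimize import nnls

beta = 3.0 / 7.0                         # a critical stretching exponent
t = np.logspace(-3, 2, 400)
target = np.exp(-t ** beta)              # stretched exponential, tau_K = 1

# Prony terms with fixed, log-spaced relaxation times; non-negative
# least squares keeps the weights physical
taus = np.logspace(-4, 3, 12)
A = np.exp(-t[:, None] / taus[None, :])
w, rnorm = nnls(A, target)

approx = A @ w
print("terms used: %d, max abs error: %.1e"
      % (np.count_nonzero(w), np.max(np.abs(approx - target))))
```

With enough terms the residual is small across the fitted window, consistent with the paper's observation that the "fat tail" is captured while the zero-time derivative divergence is not.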
NASA Astrophysics Data System (ADS)
Guarnieri, R.; Padilha, L.; Guarnieri, F.; Echer, E.; Makita, K.; Pinheiro, D.; Schuch, A.; Boeira, L.; Schuch, N.
Ultraviolet radiation type B (UV-B, 280-315 nm) is well known for its damage to life on Earth, including the possibility of causing skin cancer in humans. However, atmospheric ozone has absorption bands in this spectral region, reducing its incidence on the Earth's surface. Therefore, the ozone amount is one of the parameters, besides clouds, aerosols, solar zenith angle, altitude and albedo, that determine the UV-B radiation intensity reaching the Earth's surface. The total ozone column, in Dobson Units, determined by the TOMS spectrometer on board a NASA satellite, and UV-B radiation measurements obtained by a UV-B radiometer model MS-210W (Eko Instruments) were correlated. The measurements were obtained at the Observatório Espacial do Sul - Instituto Nacional de Pesquisas Espaciais (OES/CRSPE/INPE-MCT), coordinates: Lat. 29.44ºS, Long. 53.82ºW. The correlations were made using UV-B measurements at fixed solar zenith angles, and only days with clear sky were selected, in a period from July 1999 to December 2001. Moreover, the mathematical behavior of the correlation at different angles was observed, and correlation coefficients were determined by linear and first-order exponential fits. In both fits, high correlation coefficient values were obtained, and the difference between the linear and exponential fits can be considered small.
Apparent power-law distributions in animal movements can arise from intraspecific interactions
Breed, Greg A.; Severns, Paul M.; Edwards, Andrew M.
2015-01-01
Lévy flights have gained prominence for analysis of animal movement. In a Lévy flight, step-lengths are drawn from a heavy-tailed distribution such as a power law (PL), and a large number of empirical demonstrations have been published. Others, however, have suggested that animal movement is ill fit by PL distributions or contend a state-switching process better explains apparent Lévy flight movement patterns. We used a mix of direct behavioural observations and GPS tracking to understand step-length patterns in females of two related butterflies. We initially found movement in one species (Euphydryas editha taylori) was best fit by a bounded PL, evidence of a Lévy flight, while the other (Euphydryas phaeton) was best fit by an exponential distribution. Subsequent analyses introduced additional candidate models and used behavioural observations to sort steps based on intraspecific interactions (interactions were rare in E. phaeton but common in E. e. taylori). These analyses showed a mixed-exponential is favoured over the bounded PL for E. e. taylori and that when step-lengths were sorted into states based on the influence of harassing conspecific males, both states were best fit by simple exponential distributions. The direct behavioural observations allowed us to infer the underlying behavioural mechanism is a state-switching process driven by intraspecific interactions rather than a Lévy flight. PMID:25519992
Calvi, Andrea; Ferrari, Alberto; Sbuelz, Luca; Goldoni, Andrea; Modesti, Silvio
2016-05-19
Multi-walled carbon nanotubes (CNTs) have been grown in situ on a SiO2 substrate and used as gas sensors. For this purpose, the voltage response of the CNTs as a function of time has been used to detect H2 and CO2 at various concentrations by supplying a constant current to the system. The analysis of both the adsorption and desorption curves has revealed two different exponential behaviours for each curve. The study of the characteristic times, obtained from fitting the data, has allowed us to identify chemisorption and physisorption processes on the CNTs separately.
Human mobility in space from three modes of public transportation
NASA Astrophysics Data System (ADS)
Jiang, Shixiong; Guan, Wei; Zhang, Wenyi; Chen, Xu; Yang, Liu
2017-10-01
Human mobility patterns have drawn much attention from researchers for decades, given their importance for urban planning and traffic management. In this study, taxi GPS trajectories and smart card transaction data from the subway and bus systems of Beijing are utilized to model human mobility in space. The original datasets are cleaned and processed to obtain the displacement of each trip from its origin and destination locations. Then, the Akaike information criterion is adopted to select the best-fitting distribution for each mode from the candidate ones. The results indicate that the displacements of taxi trips follow an exponential distribution. The exponential distribution also fits the displacements of bus trips well; however, the exponents are significantly different. Displacements of subway trips behave quite differently and are well fitted by a gamma distribution. It is evident that human mobility differs across modes. To explore overall human mobility, the three datasets are combined to form a fused dataset according to the annual ridership proportions. Combining different transportation modes in this way to model urban human mobility is a novel feature of this study. Finally, the fused displacements follow a power-law distribution with an exponential cutoff.
In vivo chlorine and sodium MRI of rat brain at 21.1 T
Elumalai, Malathy; Kitchen, Jason A.; Qian, Chunqi; Gor’kov, Peter L.; Brey, William W.
2017-01-01
Object: MR imaging of low-gamma nuclei at the ultrahigh magnetic field of 21.1 T provides a new opportunity for understanding a variety of biological processes. Among these, chlorine and sodium are attracting attention for their involvement in brain function and cancer development. Materials and methods: MRI of 35Cl and 23Na was performed and relaxation times were measured in vivo in normal rats (n = 3) and in rats with glioma (n = 3) at 21.1 T. The concentrations of both nuclei were evaluated using the center-out back-projection method. Results: The T1 relaxation curve of chlorine in normal rat head was fitted by a bi-exponential function (T1a = 4.8 ms (0.7), T1b = 24.4 ± 7 ms (0.3)) and compared with sodium (T1 = 41.4 ms). Free induction decays (FIDs) of chlorine and sodium in vivo were bi-exponential, with similar rapidly decaying components of T2a* = 0.4 ms and T2a* = 0.53 ms, respectively. Effects of the small acquisition matrix and bi-exponential FIDs were assessed for quantification of chlorine (33.2 mM) and sodium (44.4 mM) in rat brain. Conclusion: The study modeled a dramatic effect of the bi-exponential decay on MRI results. The revealed increase in chlorine concentration in glioma (~1.5 times) relative to normal brain correlates with the hypothesis asserting the importance of chlorine for tumor progression. PMID:23748497
Exponential asymptotics of homoclinic snaking
NASA Astrophysics Data System (ADS)
Dean, A. D.; Matthews, P. C.; Cox, S. M.; King, J. R.
2011-12-01
We study homoclinic snaking in the cubic-quintic Swift-Hohenberg equation (SHE) close to the onset of a subcritical pattern-forming instability. Application of the usual multiple-scales method produces a leading-order stationary front solution, connecting the trivial solution to the patterned state. A localized pattern may therefore be constructed by matching between two distant fronts placed back-to-back. However, the asymptotic expansion of the front is divergent, and hence should be truncated. By truncating optimally, such that the resultant remainder is exponentially small, an exponentially small parameter range is derived within which stationary fronts exist. This is shown to be a direct result of the 'locking' between the phase of the underlying pattern and its slowly varying envelope. The locking mechanism remains unobservable at any algebraic order, and can only be derived by explicitly considering beyond-all-orders effects in the tail of the asymptotic expansion, following the method of Kozyreff and Chapman as applied to the quadratic-cubic SHE (Chapman and Kozyreff 2009 Physica D 238 319-54, Kozyreff and Chapman 2006 Phys. Rev. Lett. 97 44502). Exponentially small, but exponentially growing, contributions appear in the tail of the expansion, which must be included when constructing localized patterns in order to reproduce the full snaking diagram. Implicit within the bifurcation equations is an analytical formula for the width of the snaking region. Due to the linear nature of the beyond-all-orders calculation, the bifurcation equations contain an analytically indeterminable constant, estimated in the previous work by Chapman and Kozyreff using a best fit approximation. A more accurate estimate of the equivalent constant in the cubic-quintic case is calculated from the iteration of a recurrence relation, and the subsequent analytical bifurcation diagram compared with numerical simulations, with good agreement.
K-S Test for Goodness of Fit and Waiting Times for Fatal Plane Accidents
ERIC Educational Resources Information Center
Gwanyama, Philip Wagala
2005-01-01
The Kolmogorov–Smirnov (K-S) test for goodness of fit was developed by Kolmogorov in 1933 [1] and Smirnov in 1939 [2]. Its procedures are suitable for testing the goodness of fit of a data set for most probability distributions regardless of sample size [3-5]. These procedures, modified for the exponential distribution by Lilliefors [5] and…
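As a worked illustration (not from the article), testing waiting times against an exponential distribution with SciPy; note that when the rate is estimated from the same data, Lilliefors-corrected critical values apply rather than the standard K-S ones:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
waits = rng.exponential(scale=30.0, size=100)  # stand-in waiting times (days)

# K-S test against a fully specified exponential (known mean)
D, p = stats.kstest(waits, "expon", args=(0.0, 30.0))
print(f"known mean:     D = {D:.3f}, p = {p:.3f}")

# If the mean is estimated from the same data, the standard p-value is too
# conservative; Lilliefors-corrected critical values should be used instead.
D_est, _ = stats.kstest(waits, "expon", args=(0.0, waits.mean()))
print(f"estimated mean: D = {D_est:.3f} (compare to Lilliefors critical values)")
```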
The Dynamics of Power laws: Fitness and Aging in Preferential Attachment Trees
NASA Astrophysics Data System (ADS)
Garavaglia, Alessandro; van der Hofstad, Remco; Woeginger, Gerhard
2017-09-01
Continuous-time branching processes describe the evolution of a population whose individuals generate a random number of children according to a birth process. Such branching processes can be used to understand preferential attachment models in which the birth rates are linear functions. We are motivated by citation networks, where power-law citation counts are observed, as well as aging in the citation patterns. To model this, we introduce fitness and age-dependence in these birth processes. The multiplicative fitness moderates the rate at which children are born, while the aging is integrable, so that individuals receive a finite number of children in their lifetime. We show the existence of a limiting degree distribution for such processes. In the preferential attachment case, where fitness and aging are absent, this limiting degree distribution is known to have power-law tails. We show that the limiting degree distribution has exponential tails for bounded fitnesses in the presence of integrable aging, while the power-law tail is restored when integrable aging is combined with fitness with unbounded support and at most exponential tails. In the absence of integrable aging, such processes are explosive.
NASA Astrophysics Data System (ADS)
M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.
2014-06-01
The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah, under Malaysian meteorological conditions. Drying of seaweed samples in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. In general, the drying-rate curves require more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. Cubic spline (CS) regression was found to be effective for the moisture-time curves. The idea of this method is to approximate the data by a CS regression with continuous first and second derivatives; analytical differentiation of the spline then yields the instantaneous drying rate directly from the experimental data. Minimization of the functional of average risk was used successfully to solve the fitting problem. The drying kinetics were fitted with six published exponential thin-layer drying models, evaluated using the coefficient of determination (R2) and root mean square error (RMSE). The Two-Term model was found to describe the drying behaviour best. In addition, drying rates smoothed by the CS proved to be good estimators for the moisture-time curves, as well as for missing moisture content data of the seaweed Kappaphycus striatum variety Durian dried in the solar dryer under the conditions tested.
Exponential model for option prices: Application to the Brazilian market
NASA Astrophysics Data System (ADS)
Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.
2016-03-01
In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data more closely.
Sanchez-Salas, Rafael; Olivier, Fabien; Prapotnich, Dominique; Dancausa, José; Fhima, Mehdi; David, Stéphane; Secin, Fernando P; Ingels, Alexandre; Barret, Eric; Galiano, Marc; Rozet, François; Cathelineau, Xavier
2016-01-01
Prostate-specific antigen (PSA) doubling time relies on an exponential kinetic pattern. This pattern has never been validated in the setting of intermittent androgen deprivation (IAD). The objective is to analyze the prognostic significance for PCa of recurrent patterns in PSA kinetics in patients undergoing IAD. A retrospective study was conducted on 377 patients treated with IAD. The on-treatment period (ONTP) consisted of gonadotropin-releasing hormone agonist injections combined with an oral androgen receptor antagonist. The off-treatment period (OFTP) began when PSA was lower than 4 ng/ml, and the ONTP resumed when PSA exceeded 20 ng/ml. PSA values of each OFTP were fitted with three basic patterns: exponential (PSA(t) = λ·e^(αt)), linear (PSA(t) = a·t), and power law (PSA(t) = a·t^c). Univariate and multivariate Cox regression models were used to analyze predictive factors for oncologic outcomes. Only 45% of the analyzed OFTPs were exponential; linear and power-law PSA kinetics represented 7.5% and 7.7%, respectively, and the remaining 40% exhibited complex kinetics. Exponential PSA kinetics during the first OFTP was significantly associated with worse oncologic outcome: the estimated 10-year cancer-specific survival (CSS) was 46% for exponential versus 80% for nonexponential PSA kinetics, and the corresponding 10-year probabilities of castration-resistant prostate cancer (CRPC) were 69% and 31%, respectively. Limitations include the retrospective design and mixed indications for IAD. In summary, PSA kinetics fitted an exponential pattern in approximately half of the OFTPs, and an exponential PSA kinetic in the first OFTP was associated with a shorter time to CRPC and worse CSS. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Baidillah, Marlin R.; Takei, Masahiro
2017-06-01
A nonlinear normalization model, called the exponential model, has been developed for electrical capacitance tomography (ECT) with external electrodes under gap-permittivity conditions. The exponential normalization is proposed on the basis of the inherently nonlinear relationship between the mixture permittivity and the measured capacitance caused by the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential curve fitted to simulation data, and a scaling function is added to adjust for the experimental system conditions. The exponential normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in both simulation and experimental studies, and compared with other normalization models, i.e. the parallel, series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions.
A statistical study of decaying kink oscillations detected using SDO/AIA
NASA Astrophysics Data System (ADS)
Goddard, C. R.; Nisticò, G.; Nakariakov, V. M.; Zimovets, I. V.
2016-01-01
Context. Despite intensive studies of kink oscillations of coronal loops in the last decade, a large-scale statistically significant investigation of the oscillation parameters has not been made using data from the Solar Dynamics Observatory (SDO). Aims: We carry out a statistical study of kink oscillations using extreme ultraviolet imaging data from a previously compiled catalogue. Methods: We analysed 58 kink oscillation events observed by the Atmospheric Imaging Assembly (AIA) on board SDO during its first four years of operation (2010-2014). Parameters of the oscillations, including the initial apparent amplitude, period, length of the oscillating loop, and damping are studied for 120 individual loop oscillations. Results: Analysis of the initial loop displacement and oscillation amplitude leads to the conclusion that the initial loop displacement prescribes the initial amplitude of oscillation in general. The period is found to scale with the loop length, and a linear fit of the data cloud gives a kink speed of Ck = (1330 ± 50) km s⁻¹. The main body of the data corresponds to kink speeds in the range Ck = (800-3300) km s⁻¹. Measurements of 52 exponential damping times were made, and it was noted that at least 21 of the damping profiles may be better approximated by a combination of non-exponential and exponential profiles rather than a purely exponential damping envelope. There are nine additional cases where the profile appears to be purely non-exponential and no damping time was measured. A scaling of the exponential damping time with the period is found, following the previously established linear scaling between these two parameters.
NASA Astrophysics Data System (ADS)
Sazuka, Naoya
2007-03-01
We analyze waiting times for price changes in a foreign currency exchange rate. Recent empirical studies of high-frequency financial data support that trades in financial markets do not follow a Poisson process and that the waiting times between trades are not exponentially distributed. Here we show that our data are well approximated by a Weibull distribution rather than an exponential distribution in the non-asymptotic regime. Moreover, we quantitatively evaluate how far the empirical data are from an exponential distribution using a Weibull fit. Finally, we discuss a transition between a Weibull law and a power law in the long-time asymptotic regime.
Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S
2003-10-01
Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm(2) in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
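The stretched-exponential signal model takes the form S(b) = S0·exp(−(b·DDC)^α). A minimal fitting sketch, with synthetic values standing in for measured signal (all numbers are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(b, s0, ddc, alpha):
    """Stretched-exponential DWI model: S(b) = S0 * exp(-(b * DDC)**alpha)."""
    return s0 * np.exp(-(b * ddc) ** alpha)

b = np.array([0, 500, 1500, 2500, 3500, 4500, 5500, 6500], float)  # s/mm^2
rng = np.random.default_rng(3)
sig = stretched_exp(b, 1.0, 0.8e-3, 0.7) * (1 + rng.normal(0, 0.01, b.size))

popt, _ = curve_fit(stretched_exp, b, sig, p0=[1.0, 1e-3, 0.9],
                    bounds=([0.0, 1e-5, 0.1], [2.0, 1e-2, 1.0]))
s0, ddc, alpha = popt
print(f"DDC = {ddc:.2e} mm^2/s, heterogeneity index alpha = {alpha:.2f}")
```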
Statistical modeling of storm-level Kp occurrences
Remick, K.J.; Love, J.J.
2006-01-01
We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events occur statistically with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding these Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
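The remove-same-storm-then-fit procedure can be sketched as follows; the 3-day merging window and the synthetic exceedance times are assumptions for illustration, not the authors' criterion:

```python
import numpy as np

def decluster(times, window=3.0):
    """Keep only the first exceedance of each storm: drop events that follow
    the previously kept event by less than `window` days."""
    kept = [times[0]]
    for t in times[1:]:
        if t - kept[-1] >= window:
            kept.append(t)
    return np.array(kept)

# stand-in exceedance times (days); the real input would be the times at
# which Kp >= 5, where each storm contributes several close-together values
rng = np.random.default_rng(4)
storm_starts = np.cumsum(rng.exponential(scale=7.12, size=200))
times = np.sort(np.concatenate(
    [s + rng.uniform(0, 1.5, size=3) for s in storm_starts]))

waits_raw = np.diff(times)
waits_dec = np.diff(decluster(times))
print(f"mean wait raw: {waits_raw.mean():.2f} d, "
      f"declustered: {waits_dec.mean():.2f} d")
# for a Poisson process, the declustered waits should be approximately
# exponential with rate 1/mean
```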
Alternative analytical forms to model diatomic systems based on the deformed exponential function.
da Fonsêca, José Erinaldo; de Oliveira, Heibbe Cristhian B; da Cunha, Wiliam Ferreira; Gargano, Ricardo
2014-07-01
Using a deformed exponential function and the molecular-orbital theory for the simplest molecular ion, two new analytical functions are proposed to represent the potential energy of ground-state diatomic systems. The quality of these new forms was tested by fitting the ab initio electronic energies of the systems LiH, LiNa, NaH, RbH, KH, H2, Li2, K2, H2+, BeH+ and Li2+. From these fits, it was verified that these new proposals are able to adequately describe homonuclear, heteronuclear and cationic diatomic systems with good accuracy. Vibrational spectroscopic constant results obtained from these two proposals are in good agreement with experimental data.
Deadline rush: a time management phenomenon and its mathematical description.
König, Cornelius J; Kleinmann, Martin
2005-01-01
A typical time management phenomenon is the rush before a deadline. Behavioral decision-making research can be used to predict how behavior changes before a deadline. People are likely not to work on a project with a deadline in the far future because they generally discount future outcomes; only when the deadline is close are people likely to work. On the basis of recent intertemporal choice experiments, the authors argue that a hyperbolic function should provide a more accurate description of the deadline rush than the exponential function predicted by an economic model of discounted utility. To show this, the fits of the hyperbolic and exponential functions were compared on data sets that describe when students study for exams. As predicted, the hyperbolic function fit the data significantly better than the exponential function. The implication for time management decisions is that they are most likely to be inconsistent over time (i.e., people make a plan for how to use their time but do not follow it).
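The comparison can be reproduced in outline by fitting both forms to effort-versus-time-remaining data; the data and the exact parameterizations below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential(d, a, k):
    """Exponential discounting of effort with days-to-deadline d."""
    return a * np.exp(-k * d)

def hyperbolic(d, a, k):
    """Hyperbolic discounting of effort with days-to-deadline d."""
    return a / (1.0 + k * d)

# hypothetical data: hours studied per day versus days remaining to an exam
d = np.array([14, 12, 10, 8, 6, 4, 2, 1, 0], float)
hours = np.array([0.3, 0.4, 0.5, 0.7, 1.0, 1.6, 2.9, 4.2, 5.8])

for name, f in (("exponential", exponential), ("hyperbolic", hyperbolic)):
    popt, _ = curve_fit(f, d, hours, p0=[5.0, 0.3])
    sse = np.sum((hours - f(d, *popt)) ** 2)
    print(f"{name:12s} k = {popt[1]:.3f}, SSE = {sse:.3f}")
```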
An approximation method for improving dynamic network model fitting.
Carnegie, Nicole Bohme; Krivitsky, Pavel N; Hunter, David R; Goodreau, Steven M
There has been a great deal of interest recently in the modeling and simulation of dynamic networks, i.e., networks that change over time. One promising model is the separable temporal exponential-family random graph model (ERGM) of Krivitsky and Handcock, which treats the formation and dissolution of ties in parallel at each time step as independent ERGMs. However, the computational cost of fitting these models can be substantial, particularly for large, sparse networks. Fitting cross-sectional models for observations of a network at a single point in time, while still a non-negligible computational burden, is much easier. This paper examines model fitting when the available data consist of independent measures of cross-sectional network structure and the duration of relationships under the assumption of stationarity. We introduce a simple approximation to the dynamic parameters for sparse networks with relationships of moderate or long duration and show that the approximation method works best in precisely those cases where parameter estimation is most likely to fail-networks with very little change at each time step. We consider a variety of cases: Bernoulli formation and dissolution of ties, independent-tie formation and Bernoulli dissolution, independent-tie formation and dissolution, and dependent-tie formation models.
Extracting the exponential behaviors in the market data
NASA Astrophysics Data System (ADS)
Watanabe, Kota; Takayasu, Hideki; Takayasu, Misako
2007-08-01
We introduce a mathematical criterion defining bubbles and crashes in financial market price fluctuations by considering an exponential fit of the given data. By applying this criterion we can automatically extract the periods in which bubbles and crashes are identified. From stock market data of the so-called Internet bubble, we find that the characteristic length of a bubble period is about 100 days.
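A sliding-window exponential fit of the kind described might look like the following sketch; the window length and goodness-of-fit threshold are assumed values, not the authors':

```python
import numpy as np

def exponential_windows(prices, window=100, r2_min=0.95):
    """Flag windows where log-price is well fitted by a straight line, i.e.
    the price follows an exponential trend (bubble if the slope is positive,
    crash if negative)."""
    logp = np.log(prices)
    t = np.arange(window)
    flagged = []
    for i in range(len(logp) - window):
        y = logp[i:i + window]
        slope, intercept = np.polyfit(t, y, 1)
        r2 = 1.0 - np.var(y - (slope * t + intercept)) / np.var(y)
        if r2 > r2_min:
            flagged.append((i, slope))
    return flagged

# stand-in price series: random walk with an exponential run inserted
rng = np.random.default_rng(5)
p = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.002, 600)))
p[300:400] *= np.exp(0.01 * np.arange(100))

for start, slope in exponential_windows(p)[:3]:
    print(f"window starting day {start}: daily growth rate {slope:.4f}")
```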
Simulation and study of small numbers of random events
NASA Technical Reports Server (NTRS)
Shelton, R. D.
1986-01-01
Random events were simulated by computer and subjected to various statistical methods to extract important parameters. Various forms of curve fitting were explored, such as least squares, least distance from a line, and maximum likelihood. Problems considered were dead time, exponential decay, and spectrum extraction from cosmic ray data, using both binned data and data from individual events. Computer programs, mostly of an iterative nature, were developed to perform these simulations and extractions and are partially listed as appendices. The mathematical basis for the computer programs is given.
Feng, Zhaoyan; Min, Xiangde; Margolis, Daniel J. A.; Duan, Caohui; Chen, Yuping; Sah, Vivek Kumar; Chaudhary, Nabin; Li, Basen; Ke, Zan; Zhang, Peipei; Wang, Liang
2017-01-01
Objectives To evaluate the diagnostic performance of different mathematical models and different b-value ranges of diffusion-weighted imaging (DWI) in peripheral zone prostate cancer (PZ PCa) detection. Methods Fifty-six patients with histologically proven PZ PCa who underwent DWI-magnetic resonance imaging (MRI) using 21 b-values (0–4500 s/mm2) were included. The mean signal intensities of the regions of interest (ROIs) placed in benign PZs and cancerous tissues on DWI images were fitted using mono-exponential, bi-exponential, stretched-exponential, and kurtosis models. The b-values were divided into four ranges: 0–1000, 0–2000, 0–3200, and 0–4500 s/mm2, grouped as A, B, C, and D, respectively. ADC,
Quantifying Uncertainties in N2O Emission Due to N Fertilizer Application in Cultivated Areas
Philibert, Aurore; Loyce, Chantal; Makowski, David
2012-01-01
Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty in this estimate by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable “applied N”, (ii) the function relating N2O emission to applied N (exponential or linear), (iii) fixed or random background (i.e., in the absence of N application) N2O emission and (iv) fixed or random applied-N effect. We calculated ranges of uncertainty on N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha−1. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced. PMID:23226430
When growth models are not universal: evidence from marine invertebrates
Hirst, Andrew G.; Forster, Jack
2013-01-01
The accumulation of body mass, as growth, is fundamental to all organisms. Being able to understand which model(s) best describe this growth trajectory, both empirically and ultimately mechanistically, is an important challenge. A variety of equations have been proposed to describe growth during ontogeny. Recently, the West Brown Enquist (WBE) equation, formulated as part of the metabolic theory of ecology, has been proposed as a universal model of growth. This equation has the advantage of having a biological basis, but its ability to describe invertebrate growth patterns has not been well tested against other, more simple models. In this study, we collected data for 58 species of marine invertebrate from 15 different taxa. The data were fitted to three growth models (power, exponential and WBE), and their abilities were examined using an information theoretic approach. Using Akaike information criteria, we found changes in mass through time to fit an exponential equation form best (in approx. 73% of cases). The WBE model predominantly overestimates body size in early ontogeny and underestimates it in later ontogeny; it was the best fit in approximately 14% of cases. The exponential model described growth well in nine taxa, whereas the WBE described growth well in one of the 15 taxa, the Amphipoda. Although the WBE has the advantage of being developed with an underlying proximate mechanism, it provides a poor fit to the majority of marine invertebrates examined here, including species with determinate and indeterminate growth types. In the original formulation of the WBE model, it was tested almost exclusively against vertebrates, to which it fitted well; the model does not however appear to be universal given its poor ability to describe growth in benthic or pelagic marine invertebrates. PMID:23945691
Large and small-scale structures and the dust energy balance problem in spiral galaxies
NASA Astrophysics Data System (ADS)
Saftly, W.; Baes, M.; De Geyter, G.; Camps, P.; Renaud, F.; Guedes, J.; De Looze, I.
2015-04-01
The interstellar dust content in galaxies can be traced in extinction at optical wavelengths, or in emission in the far-infrared. Several studies have found that radiative transfer models that successfully explain the optical extinction in edge-on spiral galaxies generally underestimate the observed FIR/submm fluxes by a factor of about three. In order to investigate this so-called dust energy balance problem, we use two Milky Way-like galaxies produced by high-resolution hydrodynamical simulations. We create mock optical edge-on views of these simulated galaxies (using the radiative transfer code SKIRT), and we then fit the parameters of a basic spiral galaxy model to these images (using the fitting code FitSKIRT). The basic model includes smooth axisymmetric distributions: a Sérsic bulge and an exponential disc for the stars, and a second exponential disc for the dust. We find that the dust mass recovered by the fitted models is about three times smaller than the known dust mass of the hydrodynamical input models. This factor is in agreement with previous energy balance studies of real edge-on spiral galaxies. On the other hand, fitting the same basic model to less complex input models (e.g. a smooth exponential disc with a spiral perturbation or with random clumps) does recover the dust mass of the input model almost perfectly. Thus it seems that the complex asymmetries and the inhomogeneous structure of real and hydrodynamically simulated galaxies are much more efficient at hiding dust than the rather contrived geometries in typical quasi-analytical models. This effect may help explain the discrepancy between the dust emission predicted by radiative transfer models and the observed emission in energy balance studies of edge-on spiral galaxies.
Klein, F.W.; Wright, Tim
2008-01-01
The remarkable catalog of Hawaiian earthquakes going back to the 1820s is based on missionary diaries, newspaper accounts, and instrumental records, and spans the great M7.9 Kau earthquake of April 1868 and its aftershock sequence. The earthquake record since 1868, complete to M5.2 once five short volcanic swarms are removed, defines a smooth curve of declining rate into the 21st century. A single aftershock curve fits the earthquake record, even with numerous M6 and M7 main shocks and eruptions. The timing of some moderate earthquakes may be controlled by magmatic stresses, but their overall long-term rate reflects that of aftershocks of the Kau earthquake. The 1868 earthquake is, therefore, the largest and most controlling stress event in the 19th and 20th centuries. We fit both the modified Omori (power-law) and stretched exponential (SE) functions to the earthquakes. We found that the modified Omori law fits the M ≥ 5.2 earthquake rate well for the first 10 years or so, and the more rapidly declining SE function fits better thereafter, as supported by three statistical tests. The switch to exponential decay suggests a possible change in aftershock physics, from rate- and state-dependent fault friction to viscoelastic stress relaxation, with no change in the stress rate. The 61-year exponential decay constant is at the upper end of the range of geodetic relaxation times seen after other global earthquakes. Modeling deformation in Hawaii is beyond the scope of this paper, but a simple interpretation of the decay suggests an effective viscosity of 10^19 to 10^20 Pa s for the volcanic spreading of Hawaii's flanks. The rapid decline in earthquake rate poses questions for seismic hazard estimates in an area cited as one of the most hazardous in the United States.
Nasserie, Tahmina; Tuite, Ashleigh R; Whitmore, Lindsay; Hatchette, Todd; Drews, Steven J; Peci, Adriana; Kwong, Jeffrey C; Friedman, Dara; Garber, Gary; Gubbay, Jonathan
2017-01-01
Background: Seasonal influenza epidemics occur frequently. Rapid characterization of seasonal dynamics and forecasting of epidemic peaks and final sizes could help support real-time decision-making related to vaccination and other control measures, but real-time forecasting remains challenging. Methods: We used the previously described “incidence decay with exponential adjustment” (IDEA) model, a 2-parameter phenomenological model, to evaluate the characteristics of the 2015–2016 influenza season in 4 Canadian jurisdictions: the Provinces of Alberta, Nova Scotia and Ontario, and the City of Ottawa. Model fits were updated weekly with receipt of incident virologically confirmed case counts, and best-fit models were used to project seasonal influenza peaks and epidemic final sizes. Results: The 2015–2016 influenza season was mild and late-peaking. Parameter estimates generated through fitting were consistent in the 2 largest jurisdictions (Ontario and Alberta) and with pooled data including Nova Scotia counts (R0 approximately 1.4 for all fits); lower R0 estimates were generated in Nova Scotia and Ottawa. Final size projections that made use of the complete time series were accurate to within 6% of true final sizes, but final size was not reliably projected using pre-peak data. Projections of epidemic peaks stabilized before the true epidemic peak, but these were persistently early (~2 weeks) relative to the true peak. Conclusions: A simple, 2-parameter influenza model provided reasonably accurate real-time projections of influenza seasonal dynamics in an atypically late, mild influenza season. Challenges are similar to those seen with more complex forecasting methodologies. Future work includes identification of seasonal characteristics associated with variability in model performance. PMID:29497629
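The IDEA model expresses incidence at epidemic generation t as I(t) = (R0/(1+d)^t)^t. A minimal fitting sketch under that assumption (synthetic counts, illustrative values):

```python
import numpy as np
from scipy.optimize import curve_fit

def idea(t, r0, d):
    """IDEA model: incidence at generation t, I(t) = (R0 / (1 + d)**t)**t."""
    return (r0 / (1.0 + d) ** t) ** t

# hypothetical case counts per epidemic generation (serial interval units)
t = np.arange(1, 16, dtype=float)
rng = np.random.default_rng(6)
cases = idea(t, 1.4, 0.03) * (1 + rng.normal(0, 0.05, t.size))

popt, _ = curve_fit(idea, t, cases, p0=[1.5, 0.02])
r0, d = popt
peak = t[np.argmax(idea(t, r0, d))]
final = idea(t, r0, d).sum()   # crude final size: cumulative fitted incidence
print(f"R0 = {r0:.2f}, d = {d:.3f}, peak generation ~ {peak:.0f}, final ~ {final:.1f}")
```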
The dynamics of adapting, unregulated populations and a modified fundamental theorem.
O'Dwyer, James P
2013-01-06
A population in a novel environment will accumulate adaptive mutations over time, and the dynamics of this process depend on the underlying fitness landscape: the fitness of and mutational distance between possible genotypes in the population. Despite its fundamental importance for understanding the evolution of a population, inferring this landscape from empirical data has been problematic. We develop a theoretical framework to describe the adaptation of a stochastic, asexual, unregulated, polymorphic population undergoing beneficial, neutral and deleterious mutations on a correlated fitness landscape. We generate quantitative predictions for the change in the mean fitness and within-population variance in fitness over time, and find a simple, analytical relationship between the distribution of fitness effects arising from a single mutation, and the change in mean population fitness over time: a variant of Fisher's 'fundamental theorem' which explicitly depends on the form of the landscape. Our framework can therefore be thought of in three ways: (i) as a set of theoretical predictions for adaptation in an exponentially growing phase, with applications in pathogen populations, tumours or other unregulated populations; (ii) as an analytically tractable problem to potentially guide theoretical analysis of regulated populations; and (iii) as a basis for developing empirical methods to infer general features of a fitness landscape.
Obstructive sleep apnea alters sleep stage transition dynamics.
Bianchi, Matt T; Cash, Sydney S; Mietus, Joseph; Peng, Chung-Kang; Thomas, Robert
2010-06-28
Enhanced characterization of sleep architecture, compared with routine polysomnographic metrics such as stage percentages and sleep efficiency, may improve the predictive phenotyping of fragmented sleep. One approach involves using stage transition analysis to characterize sleep continuity. We analyzed hypnograms from Sleep Heart Health Study (SHHS) participants using the following stage designations: wake after sleep onset (WASO), non-rapid eye movement (NREM) sleep, and REM sleep. We show that individual patient hypnograms contain an insufficient number of bouts to adequately describe the transition kinetics, necessitating pooling of data. We compared a control group of individuals free of medications, obstructive sleep apnea (OSA), medical co-morbidities, or sleepiness (n = 374) with mild (n = 496) or severe OSA (n = 338). WASO, REM sleep, and NREM sleep bout durations exhibited multi-exponential temporal dynamics. The presence of OSA accelerated the "decay" rate of NREM and REM sleep bouts, resulting in instability manifesting as shorter bouts and an increased number of stage transitions. For WASO bouts, previously attributed to a power law process, a multi-exponential decay described the data well. Simulations demonstrated that a multi-exponential process can mimic a power law distribution. OSA alters sleep architecture dynamics by decreasing the temporal stability of NREM and REM sleep bouts. Multi-exponential fitting is superior to routine mono-exponential fitting, and may thus provide improved predictive metrics of sleep continuity. However, because a single night of sleep contains insufficient transitions to characterize these dynamics, extended monitoring of sleep, probably at home, would be necessary for individualized clinical application.
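The point that a multi-exponential process can mimic a power law is easy to reproduce with a small simulation (illustrative, not the SHHS analysis):

```python
import numpy as np

rng = np.random.default_rng(7)

# mixture of exponentials with log-spaced time constants, weighted toward
# the fast components
taus = np.logspace(0, 3, 8)
weights = 1.0 / taus
weights /= weights.sum()
comp = rng.choice(len(taus), size=200_000, p=weights)
bouts = rng.exponential(taus[comp])

# the survival function is close to a straight line on log-log axes over a
# wide range, i.e. power-law-like, although every component is exponential
for e in np.logspace(-1, 3, 5):
    print(f"t = {e:8.2f}   P(bout > t) = {(bouts > e).mean():.4f}")
```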
Jbabdi, Saad; Sotiropoulos, Stamatios N; Savio, Alexander M; Graña, Manuel; Behrens, Timothy EJ
2012-01-01
In this article, we highlight an issue that arises when using multiple b-values in a model-based analysis of diffusion MR data for tractography. The non-mono-exponential decay, commonly observed in experimental data, is shown to induce over-fitting in the distribution of fibre orientations when not considered in the model. Extra fibre orientations perpendicular to the main orientation arise to compensate for the slower apparent signal decay at higher b-values. We propose a simple extension to the ball and stick model based on a continuous Gamma distribution of diffusivities, which significantly improves the fitting and reduces the over-fitting. Using in-vivo experimental data, we show that this model outperforms a simpler, noise floor model, especially at the interfaces between brain tissues, suggesting that partial volume effects are a major cause of the observed non-mono-exponential decay. This model may be helpful for future data acquisition strategies that may attempt to combine multiple shells to improve estimates of fibre orientations in white matter and near the cortex. PMID:22334356
Model of flare lightcurve profile observed in soft X-rays
NASA Astrophysics Data System (ADS)
Gryciuk, Magdalena; Siarkowski, Marek; Gburek, Szymon; Podgorski, Piotr; Sylwester, Janusz; Kepa, Anna; Mrozek, Tomasz
We propose a new model for describing the lightcurve profiles of solar flares observed in soft X-rays. The method assumes that single-peaked 'regular' flares seen in lightcurves can be fitted with an elementary time profile that is a convolution of Gaussian and exponential functions; more complex, multi-peaked flares can be decomposed as a sum of elementary profiles. A linear background is also determined during the fitting process, allowing the background over the event to change linearly with time. The presented approach was originally developed for the small soft X-ray flares recorded by the Polish spectrophotometer SphinX during the very deep solar activity minimum between solar cycles 23 and 24. However, the method can and will be used to interpret lightcurves obtained by other soft X-ray broad-band spectrometers at times of both low and higher solar activity. In the paper we introduce the model and present example fits to SphinX and GOES 1-8 Å channel observations.
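The convolution of a Gaussian with an exponential has a closed form (the exponentially modified Gaussian), so the elementary profile can be fitted directly. A sketch with generic parameter names, not those of the SphinX pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(t, amp, mu, sigma, lam, b0, b1):
    """Exponentially modified Gaussian (Gaussian convolved with exponential)
    elementary flare profile on top of a linear background b0 + b1*t."""
    pulse = (lam / 2.0) * np.exp((lam / 2.0) * (2.0 * mu + lam * sigma**2 - 2.0 * t)) \
        * erfc((mu + lam * sigma**2 - t) / (np.sqrt(2.0) * sigma))
    return amp * pulse + b0 + b1 * t

t = np.linspace(0.0, 100.0, 500)     # time, arbitrary units
rng = np.random.default_rng(8)
flux = emg(t, 50.0, 20.0, 3.0, 0.1, 1.0, 0.002) + rng.normal(0.0, 0.05, t.size)

p0 = [40.0, 18.0, 2.5, 0.12, 0.8, 0.001]
popt, _ = curve_fit(emg, t, flux, p0=p0)
print(f"rise width sigma = {popt[2]:.2f}, decay time 1/lambda = {1.0 / popt[3]:.1f}")
```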
[Experimental study and correction of the absorption and enhancement effect between Ti, V and Fe].
Tuo, Xian-Guo; Mu, Ke-Liang; Li, Zhe; Wang, Hong-Hui; Luo, Hui; Yang, Jian-Bo
2009-11-01
The absorption and enhancement effects in X-ray fluorescence analysis of the elements Ti, V and Fe were studied. Three binary systems of Ti-V, Ti-Fe and V-Fe samples were prepared and measured by X-ray fluorescence analysis using an HPGe semiconductor detector, and curves relating the normalized count rate R(K) of each element to its content W(K) were obtained. Analysis of the degree of absorption and enhancement between each pair of elements showed that the effect between Ti and V is relatively pronounced, whereas it is much weaker in the Ti-Fe and V-Fe systems. An exponential-fitting correction method was then used to fit the R(K)-W(K) curves and obtain a functional relation between X-ray fluorescence count rate and content. Three groups of Ti-V binary samples were used to test the fitting method, and the relative errors for Ti and V were less than 0.2% compared with the actual contents.
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S P; Bhatia, Kunwar S; Wang, Yi-Xiang J; Ahuja, Anil T; King, Ann D
2014-01-01
To investigate non-Gaussian diffusion in head and neck diffusion-weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and a statistical model, in patients with nasopharyngeal carcinoma (NPC). After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3 T employing an extended b-value range from 0 to 1500 s/mm(2). DWI signals from the primary tumor, metastatic node, spinal cord and muscle were fitted to the mono-exponential and non-Gaussian diffusion models. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Diffusion in NPC exhibited non-Gaussian behavior over the extended b-value range, and the non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC both in magnitude and in histogram distribution. Non-Gaussian diffusivity in head and neck tissues and NPC lesions can thus be assessed using non-Gaussian diffusion models; non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential as a complementary tool for NPC characterization.
NASA Astrophysics Data System (ADS)
Mainhagu, J.; Brusseau, M. L.
2016-09-01
The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
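For the exponential function the initial-mass estimate follows by integrating the fitted discharge to infinity: if CMD(t) = C0·e^(−kt), then M0 = C0/k. A sketch with hypothetical discharge data:

```python
import numpy as np
from scipy.optimize import curve_fit

def cmd_exp(t, c0, k):
    """Exponential mass-depletion function for contaminant mass discharge."""
    return c0 * np.exp(-k * t)

# hypothetical SVE record: discharge (kg/yr) at yearly intervals
t = np.arange(0.0, 8.0)
cmd = np.array([120.0, 70.0, 45.0, 26.0, 17.0, 10.0, 6.5, 4.0])

popt, _ = curve_fit(cmd_exp, t, cmd, p0=[100.0, 0.5])
c0, k = popt
m0 = c0 / k   # integral of c0*exp(-k*t) over [0, inf): the initial mass
removed = np.sum(0.5 * (cmd[1:] + cmd[:-1]) * np.diff(t))  # trapezoid rule
print(f"estimated initial mass: {m0:.0f} kg, removed over record: {removed:.0f} kg")
```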
Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.
2011-01-01
We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095
NASA Astrophysics Data System (ADS)
D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice
2018-05-01
In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics and aimed to be numerically computed. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.
A Relevance Vector Machine-Based Approach with Application to Oil Sand Pump Prognostics
Hu, Jinfei; Tse, Peter W.
2013-01-01
Oil sand pumps are widely used in the mining industry for the delivery of mixtures of abrasive solids and liquids. Because they operate under highly adverse conditions, these pumps usually experience significant wear. Consequently, equipment owners are quite often forced to invest substantially in system maintenance to avoid unscheduled downtime. In this study, an approach combining relevance vector machines (RVMs) with a sum of two exponential functions was developed to predict the remaining useful life (RUL) of field pump impellers. To handle field vibration data, a novel feature extracting process was proposed to arrive at a feature varying with the development of damage in the pump impellers. A case study involving two field datasets demonstrated the effectiveness of the developed method. Compared with standalone exponential fitting, the proposed RVM-based model was much better able to predict the remaining useful life of pump impellers. PMID:24051527
Edge Extraction by an Exponential Function Considering X-ray Transmission Characteristics
NASA Astrophysics Data System (ADS)
Kim, Jong Hyeong; Youp Synn, Sang; Cho, Sung Man; Jong Joo, Won
2011-04-01
3-D radiographic methodology has come into the spotlight for quality inspection of mass-produced products and in-service inspection of aging products. To locate a target object in 3-D space, characteristic contours such as edge lengths, edge angles, and vertices are very important. Even for a product of simple geometry, it is very difficult to obtain clear shape contours from a single radiographic image: the image contains scattering noise at the edges and ambiguity arising from X-ray absorption within the body. This article suggests a concise method to extract all edges from a single X-ray image. At an edge of the object, the X-ray intensity decays exponentially as the X-ray penetrates the object. Considering this decay property, edges are extracted by least-squares fitting controlled by the coefficient of determination.
Universality Classes of Interaction Structures for NK Fitness Landscapes
NASA Astrophysics Data System (ADS)
Hwang, Sungmin; Schmiegelt, Benjamin; Ferretti, Luca; Krug, Joachim
2018-07-01
Kauffman's NK-model is a paradigmatic example of a class of stochastic models of genotypic fitness landscapes that aim to capture generic features of epistatic interactions in multilocus systems. Genotypes are represented as sequences of L binary loci. The fitness assigned to a genotype is a sum of contributions, each of which is a random function defined on a subset of k ≤ L loci. These subsets or neighborhoods determine the genetic interactions of the model. Whereas earlier work on the NK model suggested that most of its properties are robust with regard to the choice of neighborhoods, recent work has revealed an important and sometimes counter-intuitive influence of the interaction structure on the properties of NK fitness landscapes. Here we review these developments and present new results concerning the number of local fitness maxima and the statistics of selectively accessible (that is, fitness-monotonic) mutational pathways. In particular, we develop a unified framework for computing the exponential growth rate of the expected number of local fitness maxima as a function of L, and identify two different universality classes of interaction structures that display different asymptotics of this quantity for large k. Moreover, we show that the probability that the fitness landscape can be traversed along an accessible path decreases exponentially in L for a large class of interaction structures that we characterize as locally bounded. Finally, we discuss the impact of the NK interaction structures on the dynamics of evolution using adaptive walk models.
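For small L the number of local fitness maxima can be counted by brute force. A sketch of the NK model with adjacent (cyclic) neighborhoods, one common choice of interaction structure:

```python
import numpy as np
from itertools import product

def nk_fitness(L, k, rng):
    """NK landscape with adjacent neighborhoods: locus i interacts with the
    next k-1 loci (cyclically). Returns a genotype -> fitness function."""
    tables = [rng.random(2 ** k) for _ in range(L)]
    def fitness(g):
        total = 0.0
        for i in range(L):
            idx = 0
            for j in range(k):
                idx = (idx << 1) | g[(i + j) % L]
            total += tables[i][idx]
        return total / L
    return fitness

def count_local_maxima(L, k, rng):
    f = nk_fitness(L, k, rng)
    n_max = 0
    for g in product((0, 1), repeat=L):
        fg = f(g)
        neighbors = (g[:i] + (1 - g[i],) + g[i + 1:] for i in range(L))
        if all(fg > f(h) for h in neighbors):
            n_max += 1
    return n_max

rng = np.random.default_rng(9)
for k in (1, 2, 4):
    counts = [count_local_maxima(10, k, rng) for _ in range(5)]
    print(f"L = 10, k = {k}: mean number of local maxima = {np.mean(counts):.1f}")
```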
Stoch, G; Ylinen, E E; Birczynski, A; Lalowicz, Z T; Góra-Marek, K; Punkkinen, M
2013-02-01
A new method is introduced for analyzing deuteron spin-lattice relaxation in molecular systems with a broad distribution of activation energies and correlation times. In such samples the magnetization recovery is strongly non-exponential but can be fitted quite accurately by three exponentials. The considered system may consist of molecular groups with different mobility. For each group a Gaussian distribution of the activation energy is introduced. By assuming for every subsystem three parameters: the mean activation energy E(0), the distribution width σ and the pre-exponential factor τ(0) for the Arrhenius equation defining the correlation time, the relaxation rate is calculated for every part of the distribution. Experiment-based limiting values allow the grouping of the rates into three classes. For each class the relaxation rate and weight are calculated and compared with experiment. The parameters E(0), σ and τ(0) are determined iteratively by repeating the whole cycle many times. The temperature dependence of the deuteron relaxation was observed in three samples containing CD(3)OH (200% and 100% loading) and CD(3)OD (200%) in NaX zeolite and analyzed by the described method between 20 K and 170 K. The obtained parameters, equal for all three samples, characterize the methyl and hydroxyl mobilities of the methanol molecules at two different locations.
Population models of burrowing mayfly recolonization in Western Lake Erie
Madenjian, C.P.; Schloesser, D.W.; Krieger, K.A.
1998-01-01
Burrowing mayflies, Hexagenia spp. (H. limbata and H. rigida), began recolonizing western Lake Erie during the 1990s. Survey data for mayfly nymph densities indicated that the population experienced exponential growth between 1991 and 1997. To predict the time to full recovery of the mayfly population, we fitted logistic models, ranging in carrying capacity from 600 to 2000 nymphs/m², to these survey data. Based on the fitted logistic curves, we forecast that the mayfly population in western Lake Erie would achieve full recovery between years 1998 and 2000, depending on the carrying capacity of the western basin. Additionally, we estimated the mortality rate of nymphs in western Lake Erie during 1994 and then applied an age-based matrix model to the mayfly population. The results of the matrix population modeling corroborated the exponential growth model application in that both methods yielded an estimate of the population growth rate, r, in excess of 0.8 yr⁻¹. This was the first evidence that mayfly populations are capable of recolonizing large aquatic ecosystems at rates comparable with those observed in much smaller lentic ecosystems. Our model predictions should prove valuable to managers of power plant facilities along the western basin in planning for mayfly emergences and to managers of the yellow perch (Perca flavescens) fishery in western Lake Erie.
NASA Astrophysics Data System (ADS)
Allen, Linda J. S.
2016-09-01
Dr. Chowell and colleagues emphasize the importance of considering a variety of modeling approaches to characterize the growth of an epidemic during the early stages [1]. A fit of data from the 2009 H1N1 influenza pandemic and the 2014-2015 Ebola outbreak to models indicates sub-exponential growth, in contrast to the classic, homogeneous-mixing SIR model with exponential growth. With incidence rate βSI / N and S approximately equal to the total population size N, the number of new infections in an SIR epidemic model grows exponentially as in the differential equation,
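The abstract breaks off before the equation it cites. For reference, the standard early-epidemic linearization it refers to, under the stated assumption S ≈ N, is (our notation, not quoted from the source):

$$\frac{dI}{dt} = \beta\frac{SI}{N} - \gamma I \approx (\beta - \gamma)\,I, \qquad I(t) \approx I(0)\,e^{(\beta-\gamma)t},$$

where γ is the recovery rate; sub-exponential growth arises when this linear approximation fails, e.g. through inhomogeneous mixing.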
Sorption isotherm characteristics of aonla flakes.
Alam, Md Shafiq; Singh, Amarjit
2011-06-01
The equilibrium moisture content was determined for un-osmosed and osmosed (salt osmosed and sugar osmosed) aonla flakes using the static method at temperatures of 25, 40, 50, 60 and 70 °C over a range of relative humidities from 20 to 90%. The sorption capacity of aonla decreased with an increase in temperature at constant water activity. The sorption isotherms exhibited hysteresis, in which the equilibrium moisture content at a particular equilibrium relative humidity was higher for the desorption curve than for adsorption. The hysteresis effect was more pronounced for un-osmosed and salt osmosed samples in comparison to sugar osmosed samples. Five models, namely the modified Chung-Pfost, modified Halsey, modified Henderson, modified Exponential and Guggenheim-Anderson-de Boer (GAB) models, were evaluated to determine the best fit for the experimental data. For both adsorption and desorption processes of aonla fruit, the equilibrium moisture content of un-osmosed and osmosed aonla samples can be predicted well by the GAB model as well as the modified Exponential model. Moreover, the modified Exponential model was found to be the best for describing the sorption behaviour of un-osmosed and salt osmosed samples, while the GAB model was best for sugar osmosed aonla samples.
On the Time Scale of Nocturnal Boundary Layer Cooling in Valleys and Basins and over Plains
NASA Astrophysics Data System (ADS)
de Wekker, Stephan F. J.; Whiteman, C. David
2006-06-01
Sequences of vertical temperature soundings over flat plains and in a variety of valleys and basins of different sizes and shapes were used to determine cooling-time-scale characteristics in the nocturnal stable boundary layer under clear, undisturbed weather conditions. An exponential function predicts the cumulative boundary layer cooling well. The fitting parameter or time constant in the exponential function characterizes the cooling of the valley atmosphere and is equal to the time required for the cumulative cooling to attain 63.2% of its total nighttime value. The exponential fit finds time constants varying between 3 and 8 h. Calculated time constants are smallest in basins, are largest over plains, and are intermediate in valleys. Time constants were also calculated from air temperature measurements made at various heights on the sidewalls of a small basin. The variation with height of the time constant exhibited a characteristic parabolic shape in which the smallest time constants occurred near the basin floor and on the upper sidewalls of the basin where cooling was governed by cold-air drainage and radiative heat loss, respectively.
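The 63.2% figure follows directly from the exponential fit form. As a minimal statement of the model implied by the abstract (notation ours):

$$\Delta T(t) = \Delta T_{\mathrm{night}}\left(1 - e^{-t/\tau}\right),$$

so at t = τ the cumulative cooling reaches 1 − e⁻¹ ≈ 0.632 of its total nighttime value.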
Flash spectroscopy of purple membrane.
Xie, A H; Nagle, J F; Lozier, R H
1987-01-01
Flash spectroscopy data were obtained for purple membrane fragments at pH 5, 7, and 9 for seven temperatures from 5 degrees to 35 degrees C, at the magic angle for actinic versus measuring beam polarizations, at fifteen wavelengths from 380 to 700 nm, and for about five decades of time from 1 microsecond to completion of the photocycle. Signal-to-noise ratios are as high as 500. Systematic errors involving beam geometries, light scattering, absorption flattening, photoselection, temperature fluctuations, partial dark adaptation of the sample, unwanted actinic effects, and cooperativity were eliminated, compensated for, or are shown to be irrelevant for the conclusions. Using nonlinear least squares techniques, all data at one temperature and one pH were fitted to sums of exponential decays, which is the form required if the system obeys conventional first-order kinetics. The rate constants obtained have well behaved Arrhenius plots. Analysis of the residual errors of the fitting shows that seven exponentials are required to fit the data to the accuracy of the noise level. PMID:3580488
The integral line-beam method for gamma skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.; Bassett, M.S.
1991-03-01
This paper presents a refinement of a simplified method, based on line-beam response functions, for performing skyshine calculations for shielded and collimated gamma-ray sources. New coefficients for an empirical fit to the line-beam response function are provided and a prescription for making the response function continuous in energy and emission direction is introduced. For a shielded source, exponential attenuation and a buildup factor correction for scattered photons in the shield are used. Results of the new integral line-beam method of calculation are compared to a variety of benchmark experimental data and calculations and are found to give generally excellent agreement at a small fraction of the computational expense required by other skyshine methods.
Airspace Dimension Assessment with nanoparticles reflects lung density as quantified by MRI
Jakobsson, Jonas K; Löndahl, Jakob; Olsson, Lars E; Diaz, Sandra; Zackrisson, Sophia; Wollmer, Per
2018-01-01
Background Airspace Dimension Assessment with inhaled nanoparticles is a novel method to determine distal airway morphology. This is the first empirical study using Airspace Dimension Assessment with nanoparticles (AiDA) to estimate distal airspace radius. The technology is relatively simple and potentially accessible in clinical outpatient settings. Method Nineteen never-smoking volunteers performed nanoparticle inhalation tests at multiple breath-hold times, and the difference in nanoparticle concentration between inhaled and exhaled gas was measured. An exponential decay curve was fitted to the concentration of recovered nanoparticles, and airspace dimensions were assessed from the half-life of the decay. Pulmonary tissue density was measured using magnetic resonance imaging (MRI). Results The distal airspace radius measured by AiDA correlated with lung tissue density as measured by MRI (ρ = −0.584; p = 0.0086). The linear intercept of the logarithm of the exponential decay curve correlated with forced expiratory volume in one second (FEV1) (ρ = 0.549; p = 0.0149). Conclusion The AiDA method shows potential to be developed into a tool to assess conditions involving changes in distal airways, e.g., emphysema. The intercept may reflect airway properties; this finding should be further investigated.
Wu, Yao; Dai, Xiaodong; Huang, Niu; Zhao, Lifeng
2013-06-05
In force field parameter development using ab initio potential energy surfaces (PES) as target data, an important but often neglected matter is the lack of a weighting scheme with optimal discrimination power to fit the target data. Here, we developed a novel partition function-based weighting scheme, which not only fits the target potential energies exponentially like the general Boltzmann weighting method, but also reduces the effect of fitting errors leading to overfitting. The van der Waals (vdW) parameters of benzene and propane were reparameterized by using the new weighting scheme to fit the high-level ab initio PESs probed by a water molecule in global configurational space. The molecular simulation results indicate that the newly derived parameters are capable of reproducing experimental properties in a broader range of temperatures, which supports the partition function-based weighting scheme. Our simulation results also suggest that structural properties are more sensitive to vdW parameters than partial atomic charge parameters in these systems although the electrostatic interactions are still important in energetic properties. As no prerequisite conditions are required, the partition function-based weighting method may be applied in developing any types of force field parameters.
Wyllie, David J A; Béhé, Philippe; Colquhoun, David
1998-01-01
We have expressed recombinant NR1a/NR2A and NR1a/NR2D N-methyl-D-aspartate (NMDA) receptor channels in Xenopus oocytes and made recordings of single-channel and macroscopic currents in outside-out membrane patches. For each receptor type we measured (a) the individual single-channel activations evoked by low glutamate concentrations in steady-state recordings, and (b) the macroscopic responses elicited by brief concentration jumps with high agonist concentrations, and we explore the relationship between these two sorts of observation. Low concentration (5–100 nM) steady-state recordings of NR1a/NR2A and NR1a/NR2D single-channel activity generated shut-time distributions that were best fitted with a mixture of five and six exponential components, respectively. Individual activations of either receptor type were resolved as bursts of openings, which we refer to as ‘super-clusters’. During a single activation, NR1a/NR2A receptors were open for 36 % of the time, but NR1a/NR2D receptors were open for only 4 % of the time. For both, distributions of super-cluster durations were best fitted with a mixture of six exponential components. Their overall mean durations were 35.8 and 1602 ms, respectively. Steady-state super-clusters were aligned on their first openings and averaged. The average was well fitted by a sum of exponentials with time constants taken from fits to super-cluster length distributions. It is shown that this is what would be expected for a channel that shows simple Markovian behaviour. The current through NR1a/NR2A channels following a concentration jump from zero to 1 mM glutamate for 1 ms was well fitted by three exponential components with time constants of 13 ms (rising phase), 70 ms and 350 ms (decaying phase). Similar concentration jumps on NR1a/NR2D channels were well fitted by two exponentials with means of 45 ms (rising phase) and 4408 ms (decaying phase) components. During prolonged exposure to glutamate, NR1a/NR2A channels desensitized with a time constant of 649 ms, while NR1a/NR2D channels exhibited no apparent desensitization. We show that under certain conditions, the time constants for the macroscopic jump response should be the same as those for the distribution of super-cluster lengths, though the resolution of the latter is so much greater that it cannot be expected that all the components will be resolvable in a macroscopic current. Good agreement was found for jumps on NR1a/NR2D receptors, and for some jump experiments on NR1a/NR2A. However, the latter were rather variable and some were slower than predicted. Slow decays were associated with patches that had large currents. PMID:9625862
NASA Astrophysics Data System (ADS)
Kuai, Zi-Xiang; Liu, Wan-Yu; Zhu, Yue-Min
2017-11-01
The aim of this work was to investigate the effect of multiple perfusion components on the pseudo-diffusion coefficient D* in the bi-exponential intravoxel incoherent motion (IVIM) model. Simulations were first performed to examine how the presence of multiple perfusion components influences D*. Real data from the livers (n = 31), spleens (n = 31) and kidneys (n = 31) of 31 volunteers were then acquired using DWI for the in vivo study, and the number of perfusion components in these tissues was determined together with their perfusion fraction and D*, using an adaptive multi-exponential IVIM model. Finally, the bi-exponential model was applied to the real data, and the mean, variance and coefficient of variation of D* as well as the fitting residual were calculated over the 31 volunteers for each of the three tissues and compared between them. The results of both the simulations and the in vivo study showed that, for the bi-exponential IVIM model, both the variance of D* and the fitting residual tended to increase when the number of perfusion components was increased or when the difference between perfusion components became large. In addition, it was found that the kidney presented the fewest perfusion components among the three tissues. The present study demonstrated that multi-component perfusion is a main factor causing high variance of D*, and that the bi-exponential model should be used only when the tissues under investigation have few perfusion components, for example the kidney.
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.
2002-01-01
A previously developed life-prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant stress rate and preload testing at ambient and elevated temperatures. The data fit to the relation of strength versus the log of the stress rate was very reasonable for most of the materials. Also, the preloading technique was determined to be equally applicable to the case of slow-crack-growth (SCG) parameter n greater than 30 for both the power-law and exponential formulations. The major limitation of the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important SCG parameter n, a significant drawback compared with the conventional power-law crack-velocity formulation.
Abe, Sumiyoshi
2002-10-01
The q-exponential distributions, which are generalizations of the Zipf-Mandelbrot power-law distribution, are frequently encountered in complex systems at their stationary states. From the viewpoint of the principle of maximum entropy, they can apparently be derived from three different generalized entropies: the Rényi entropy, the Tsallis entropy, and the normalized Tsallis entropy. Accordingly, mere fittings of observed data by the q-exponential distributions do not lead to identification of the correct physical entropy. Here, stabilities of these entropies, i.e., their behaviors under arbitrary small deformation of a distribution, are examined. It is shown that, among the three, the Tsallis entropy is stable and can provide an entropic basis for the q-exponential distributions, whereas the others are unstable and cannot represent any experimentally observable quantities.
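For reference, the q-exponential family discussed here is standardly written (notation ours, not quoted from the source):

$$p(x) \propto \left[1 - (1-q)\,\beta x\right]^{1/(1-q)},$$

which reduces to the ordinary exponential e^{−βx} as q → 1 and exhibits Zipf-Mandelbrot power-law tails for q > 1.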
NASA Astrophysics Data System (ADS)
Dhariwal, Rohit; Bragg, Andrew D.
2018-03-01
In this paper, we consider how the statistical moments of the separation between two fluid particles grow with time when their separation lies in the dissipation range of turbulence. In this range, the fluid velocity field varies smoothly and the relative velocity of two fluid particles depends linearly upon their separation. While this may suggest that the rate at which fluid particles separate is exponential in time, this is not guaranteed because the strain rate governing their separation is a strongly fluctuating quantity in turbulence. Indeed, Afik and Steinberg [Nat. Commun. 8, 468 (2017), 10.1038/s41467-017-00389-8] argue that there is no convincing evidence that the moments of the separation between fluid particles grow exponentially with time in the dissipation range of turbulence. Motivated by this, we use direct numerical simulations (DNS) to compute the moments of particle separation over very long periods of time in a statistically stationary, isotropic turbulent flow to see if we ever observe evidence for exponential separation. Our results show that if the initial separation between the particles is infinitesimal, the moments of the particle separation first grow as power laws in time, but we then observe convincing evidence that at sufficiently long times the moments do grow exponentially. However, this exponential growth is only observed after extremely long times ≳200 τη , where τη is the Kolmogorov time scale. This is due to fluctuations in the strain rate about its mean value measured along the particle trajectories, the effect of which on the moments of the particle separation persists for very long times. We also consider the backward-in-time (BIT) moments of the particle separation, and observe that they too grow exponentially in the long-time regime. However, a dramatic consequence of the exponential separation is that at long times the difference between the rate of the particle separation forward in time (FIT) and BIT grows exponentially in time, leading to incredibly strong irreversibility in the dispersion. This is in striking contrast to the irreversibility of their relative dispersion in the inertial range, where the difference between FIT and BIT is constant in time according to Richardson's phenomenology.
Biological growth functions describe published site index curves for Lake States timber species.
Allen L. Lundgren; William A. Dolid
1970-01-01
Two biological growth functions, an exponential-monomolecular function and a simple monomolecular function, have been fit to published site index curves for 11 Lake States tree species: red, jack, and white pine, balsam fir, white and black spruce, tamarack, white-cedar, aspen, red oak, and paper birch. Both functions closely fit all published curves except those for...
Li, Huailiang; Yang, Yigang; Wang, Qibiao; Tuo, Xianguo; Julian Henderson, Mark; Courtois, Jérémie
2017-12-01
The fluence rate of cosmic-ray-induced neutrons (CRINs) varies with many environmental factors. While many current simulation and experimental studies have focused mainly on the altitude variation, the specific rule by which CRINs vary with geomagnetic cutoff rigidity (which is related to latitude and longitude) has not been well considered. In this article, a double-exponential fitting function, F = (A1·e^(−A2·CR) + A3)·e^(B1·Al), where CR is the geomagnetic cutoff rigidity and Al the altitude, is proposed to evaluate the CRINs' fluence rate as it varies with geomagnetic cutoff rigidity and altitude. The fit yields R² up to 0.9954, and, moreover, the CRINs' fluence rate at an arbitrary location (latitude, longitude and altitude) can be easily evaluated with the proposed function. Field measurements of the CRINs' fluence rate and H*(10) rate on Mt. Emei and Mt. Bowa were carried out using FHT-762 and LB 6411 neutron probes, respectively, and the evaluation results show that the fitting function agrees well with the measurements.
NASA Technical Reports Server (NTRS)
Welker, Jean Edward
1991-01-01
Since the invention of maximum and minimum thermometers in the 18th century, diurnal temperature extrema have been recorded for air worldwide. At some stations, these extrema were also collected at various soil depths, and the behavior of these temperatures at a 10-cm depth at the Tifton Experimental Station in Georgia is presented. After a precipitation cooling event, the diurnal temperature maxima drop to a minimum value and then start a recovery to higher values (similar to thermal inertia). This recovery represents a measure of response to heating as a function of soil moisture and soil properties. Eight different curves were fitted to a wide variety of data sets for different stations and years, and both power and exponential curve fits were consistently found to be statistically accurate least-squares representations of the raw recovery data. The predictive procedures used here were multivariate regression analyses, which are applicable to soils at a variety of depths besides the 10-cm depth presented.
PREdator: a python based GUI for data analysis, evaluation and fitting
2014-01-01
The analysis of a series of experimental data is an essential procedure in virtually every field of research. The information contained in the data is extracted by fitting the experimental data to a mathematical model. The type of the mathematical model (linear, exponential, logarithmic, etc.) reflects the physical laws that underlie the experimental data. Here, we aim to provide a readily accessible, user-friendly python script for data analysis, evaluation and fitting. PREdator is presented using the example of NMR paramagnetic relaxation enhancement analysis.
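PREdator itself is a GUI, but the core fitting step it describes can be sketched in a few lines of SciPy. This is a hedged illustration, not PREdator's code; the mono-exponential model, parameter names, and synthetic data are invented for the example:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical mono-exponential model, e.g. for a relaxation-type decay:
# I(t) = A * exp(-R * t)
def mono_exp(t, A, R):
    return A * np.exp(-R * t)

# Synthetic noisy data standing in for an experimental series
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 40)
y = mono_exp(t, 1.0, 2.5) + rng.normal(0.0, 0.02, t.size)

# Least-squares fit; p0 supplies rough starting parameters
popt, pcov = curve_fit(mono_exp, t, y, p0=(1.0, 1.0))
perr = np.sqrt(np.diag(pcov))  # 1-sigma parameter uncertainties
print(f"A = {popt[0]:.3f} ± {perr[0]:.3f}, R = {popt[1]:.3f} ± {perr[1]:.3f}")
```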
NASA Astrophysics Data System (ADS)
Iyer, Kartheik; Gawiser, Eric
2017-06-01
The Dense Basis SED fitting method reveals previously inaccessible information about the number and duration of star formation episodes and the timing of stellar mass assembly as well as uncertainties in these quantities, in addition to accurately recovering traditional SED parameters including M*, SFR and dust attenuation. This is done using basis Star Formation Histories (SFHs) chosen by comparing the goodness-of-fit of mock galaxy SEDs to the goodness-of-reconstruction of their SFHs, trained and validated using three independent datasets of mock galaxies at z=1 from SAMs, Hydrodynamic simulations and stochastic realizations. Of the six parametrizations of SFHs considered, we reject the traditional parametrizations of constant and exponential SFHs and suggest four novel improvements, quantifying the bias and scatter of each parametrization. We then apply the method to a sample of 1100 CANDELS GOODS-S galaxies at 1
Bentzley, Brandon S.; Fender, Kimberly M.; Aston-Jones, Gary
2012-01-01
Rationale Behavioral-economic demand curve analysis offers several useful measures of drug self-administration. Although generation of demand curves previously required multiple days, recent within-session procedures allow curve construction from a single 110-min cocaine self-administration session, making behavioral-economic analyses available to a broad range of self-administration experiments. However, a mathematical approach of curve fitting has not been reported for the within-session threshold procedure. Objectives We review demand curve analysis in drug self-administration experiments and provide a quantitative method for fitting curves to single-session data that incorporates relative stability of brain drug concentration. Methods Sprague-Dawley rats were trained to self-administer cocaine, and then tested with the threshold procedure in which the cocaine dose was sequentially decreased on a fixed ratio-1 schedule. Price points (responses/mg cocaine) outside of relatively stable brain cocaine concentrations were removed before curves were fit. Curve-fit accuracy was determined by the degree of correlation between graphical and calculated parameters for cocaine consumption at low price (Q0) and the price at which maximal responding occurred (Pmax). Results Removing price points that occurred at relatively unstable brain cocaine concentrations generated precise estimates of Q0 and resulted in Pmax values with significantly closer agreement with graphical Pmax than conventional methods. Conclusion The exponential demand equation can be fit to single-session data using the threshold procedure for cocaine self-administration. Removing data points that occur during relatively unstable brain cocaine concentrations resulted in more accurate estimates of demand curve slope than graphical methods, permitting a more comprehensive analysis of drug self-administration via a behavioral-economic framework. PMID:23086021
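The "exponential demand equation" referred to here is not written out in the abstract; for reference, the widely used Hursh-Silberberg form is (notation ours):

$$\log Q = \log Q_0 + k\left(e^{-\alpha Q_0 C} - 1\right),$$

where Q is consumption at price C, Q0 is consumption at zero price, k sets the range of consumption in log units, and α governs the rate of decline; Pmax follows as the price at which the elasticity of the curve equals −1.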
Fragment size distribution statistics in dynamic fragmentation of laser shock-loaded tin
NASA Astrophysics Data System (ADS)
He, Weihua; Xin, Jianting; Zhao, Yongqiang; Chu, Genbai; Xi, Tao; Shui, Min; Lu, Feng; Gu, Yuqiu
2017-06-01
This work investigates a geometric statistics method to characterize the size distribution of tin fragments produced in the laser shock-loaded dynamic fragmentation process. In the shock experiments, the ejecta from a tin sample with a V-shaped groove etched into its free surface are collected by a soft-recovery technique. Subsequently, the produced fragments are automatically detected with fine post-shot analysis techniques, including X-ray micro-tomography and an improved watershed method. To characterize the size distributions of the fragments, a theoretical random geometric statistics model based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem, which yields a linear combination of exponential distributions. The experimental fragment size distributions of the laser shock-loaded tin sample are examined with the proposed theoretical model, and its fitting performance is compared with that of other state-of-the-art fragment size distribution models. The comparison proves that the proposed model provides a far more reasonable fit for the laser shock-loaded tin.
Comparing alkaline and thermal disintegration characteristics for mechanically dewatered sludge.
Tunçal, Tolga
2011-10-01
Thermal drying is one of the advanced technologies ultimately providing an alternative method of sludge disposal. In this study, the drying kinetics of mechanically dewatered sludge (MDS) after alkaline and thermal disintegration were studied. In addition, the effect of total organic carbon (TOC) on specific resistance to filtration and sludge bound water content was also investigated on freshly collected sludge samples. The combined effect of pH and TOC on the thermal sludge drying rate for MDS was modelled using the two-factorial experimental design method. Statistical assessment of the results indicated that sludge drying potential increased exponentially with both temperature and lime dosage. Curve fitting also showed that the drying profiles for raw and alkaline-disintegrated sludge were well fitted by the Henderson and Pabis model. The activation energy of MDS decreased from 28.716 to 11.390 kJ mol⁻¹ after disintegration. Consequently, the unit power requirement for thermal drying decreased remarkably, from 706 to 281 W g⁻¹ H2O.
dPotFit: A computer program to fit diatomic molecule spectral data to potential energy functions
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.
2017-01-01
This paper describes program dPotFit, which performs least-squares fits of diatomic molecule spectroscopic data consisting of any combination of microwave, infrared or electronic vibrational bands, fluorescence series, and tunneling predissociation level widths, involving one or more electronic states and one or more isotopologs, and for appropriate systems, second virial coefficient data, to determine analytic potential energy functions defining the observed levels and other properties of each state. Four families of analytical potential functions are available for fitting in the current version of dPotFit: the Expanded Morse Oscillator (EMO) function, the Morse/Long-Range (MLR) function, the Double-Exponential/Long-Range (DELR) function, and the 'Generalized Potential Energy Function' (GPEF) of Šurkus, which incorporates a variety of polynomial functional forms. In addition, dPotFit allows sets of experimental data to be tested against predictions generated from three other families of analytic functions, namely, the 'Hannover Polynomial' (or "X-expansion") function, and the 'Tang-Toennies' and Scoles-Aziz 'HFD', exponential-plus-van der Waals functions, and from interpolation-smoothed pointwise potential energies, such as those obtained from ab initio or RKR calculations. dPotFit also allows the fits to determine atomic-mass-dependent Born-Oppenheimer breakdown functions, and singlet-state Λ-doubling, or 2Σ splitting radial strength functions for one or more electronic states. dPotFit always reports both the 95% confidence limit uncertainty and the "sensitivity" of each fitted parameter; the latter indicates the number of significant digits that must be retained when rounding fitted parameters, in order to ensure that predictions remain in full agreement with experiment. It will also, if requested, apply a "sequential rounding and refitting" procedure to yield a final parameter set defined by a minimum number of significant digits, while ensuring no significant loss of accuracy in the predictions yielded by those parameters.
Basis convergence of range-separated density-functional theory.
Franck, Odile; Mussard, Bastien; Luppi, Eleonora; Toulouse, Julien
2015-02-21
Range-separated density-functional theory (DFT) is an alternative approach to Kohn-Sham density-functional theory. The strategy of range-separated density-functional theory consists in separating the Coulomb electron-electron interaction into long-range and short-range components and treating the long-range part by an explicit many-body wave-function method and the short-range part by a density-functional approximation. Among the advantages of using many-body methods for the long-range part of the electron-electron interaction is that they are much less sensitive to the one-electron atomic basis compared to the case of the standard Coulomb interaction. Here, we provide a detailed study of the basis convergence of range-separated density-functional theory. We study the convergence of the partial-wave expansion of the long-range wave function near the electron-electron coalescence. We show that the rate of convergence is exponential with respect to the maximal angular momentum L for the long-range wave function, whereas it is polynomial for the case of the Coulomb interaction. We also study the convergence of the long-range second-order Møller-Plesset correlation energy of four systems (He, Ne, N2, and H2O) with cardinal number X of the Dunning basis sets cc-p(C)VXZ and find that the error in the correlation energy is best fitted by an exponential in X. This leads us to propose a three-point complete-basis-set extrapolation scheme for range-separated density-functional theory based on an exponential formula.
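For reference, the exponential form behind such a three-point scheme is a standard construction (the notation here is ours, not the paper's): fitting $E_X = E_\infty + A e^{-BX}$ to three consecutive cardinal numbers $X-1$, $X$, $X+1$ gives

$$E_\infty = \frac{E_{X-1}E_{X+1} - E_X^2}{E_{X-1} + E_{X+1} - 2E_X},$$

which follows from $(E_X - E_\infty)^2 = (E_{X-1} - E_\infty)(E_{X+1} - E_\infty)$ for a geometric decay.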
ERIC Educational Resources Information Center
Wolf, Walter A., Ed.
1976-01-01
Presents three activities: (1) the investigation of the purity and stability of nicotinamide and flavin coenzymes; (2) desk-computer fitting of a two-exponential function; and (3) an interesting and inexpensive solubility product experiment for introductory chemistry. (RH)
Treatment of late time instabilities in finite-difference EMP scattering codes
NASA Astrophysics Data System (ADS)
Simpson, L. T.; Holland, R.; Arman, S.
1982-12-01
Constraints applicable to a finite difference mesh for solution of Maxwell's equations are defined. The equations are applied in the time domain for computing electromagnetic coupling to complex structures, e.g., rectangular, cylindrical, or spherical. In a spatially varying grid, the amplitude growth of high frequency waves becomes exponential through multiple reflections from the outer boundary in cases of late-time solution. The exponential growth of the numerical noise exceeds the value of the real signal. The correction technique employs an absorbing surface and a radiating boundary, along with tailored selection of the grid mesh size. High frequency noise is removed through use of a low-pass digital filter, a linear least squares fit is made to the low frequency filtered response, and the original, filtered, and fitted data are merged to preserve the high frequency early-time response.
NASA Astrophysics Data System (ADS)
Ozawa, T.; Miyagi, Y.
2017-12-01
Shinmoe-dake, located in SW Japan, erupted in January 2011, and lava accumulated in the crater (e.g., Ozawa and Kozono, EPS, 2013). The last Vulcanian eruption occurred in September 2011, and no eruption has occurred since. Miyagi et al. (GRL, 2014) analyzed TerraSAR-X and Radarsat-2 SAR data acquired after the last eruption and found continuous inflation in the crater. The inflation decayed with time but had not terminated by May 2013. Since the time series of the inflation volume-change rate fitted well to an exponential function with a constant term, we suggested that lava extrusion continued over the long term owing to deflation of a shallow magma source and to magma supply from a deeper source. To investigate the subsequent deformation, we applied InSAR to Sentinel-1 and ALOS-2 SAR data. Inflation decayed further and almost terminated by the end of 2016. This deformation thus continued for more than five years after the last eruption. We have found that the time series of the inflation volume-change rate fits a double-exponential function better than a single-exponential function with a constant term. The exponential component with the short time constant settled within about one year of the last eruption. Although the InSAR result from TerraSAR-X data of November 2011 and May 2013 indicated deflation of a shallow source under the crater, no such deformation has been obtained from recent SAR data. This suggests that this component was due to deflation of a shallow magma source with excess pressure. In this study, we found that the long-term component may also have decayed exponentially; this factor may be deflation of a deep source or delayed vesiculation.
Gutiérrez-Juárez, G; Vargas-Luna, M; Córdova, T; Varela, J B; Bernal-Alvarado, J J; Sosa, M
2002-08-01
A photoacoustic technique is used for studying topically applied substance absorption in human skin. The proposed method utilizes a double-chamber PA cell. The absorption determination was obtained through the measurement of the thermal effusivity of the binary substance-skin system. The theoretical model assumes that the effective thermal effusivity of the binary system corresponds to that of a two-phase system. Experimental applications of the method employed different substances of topical application on different parts of the body of a volunteer. The method is demonstrated to be an easily used, non-invasive technique for dermatology research. The relative concentrations as a function of time of substances such as ketoconazole and sunscreen were determined by fitting a sigmoidal function to the data, while an exponential function gave the best fit for the data for nitrofurazone, vaseline and VapoRub. The time constants associated with the rates of absorption were found to vary in the range between 10 and 58 min, depending on the substance and the part of the body.
Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A.; Ono, Yutaka
2016-01-01
Background Several studies have shown that total depressive symptom scores in the general population approximate an exponential pattern, except for the lower end of the distribution. The Center for Epidemiologic Studies Depression Scale (CES-D) consists of 20 items, each of which may take on four scores: “rarely,” “some,” “occasionally,” and “most of the time.” Recently, we reported that the item responses for 16 negative affect items commonly exhibit exponential patterns, except for the level of “rarely,” leading us to hypothesize that the item responses at the level of “rarely” may be related to the non-exponential pattern typical of the lower end of the distribution. To verify this hypothesis, we investigated how the item responses contribute to the distribution of the sum of the item scores. Methods Data collected from 21,040 subjects who had completed the CES-D questionnaire as part of a Japanese national survey were analyzed. To assess the item responses of negative affect items, we used a parameter r, which denotes the ratio of “rarely” to “some” in each item response. The distributions of the sum of negative affect items in various combinations were analyzed using log-normal scales and curve fitting. Results The sum of the item scores approximated an exponential pattern regardless of the combination of items, whereas, at the lower end of the distributions, there was a clear divergence between the actual data and the predicted exponential pattern. At the lower end of the distributions, the sum of the item scores with high values of r exhibited higher scores compared to those predicted from the exponential pattern, whereas the sum of the item scores with low values of r exhibited lower scores compared to those predicted. Conclusions The distributional pattern of the sum of the item scores could be predicted from the item responses of such items. PMID:27806132
Atmospheric monitoring in MAGIC and data corrections
NASA Astrophysics Data System (ADS)
Fruck, Christian; Gaug, Markus
2015-03-01
A method for analyzing returns of a custom-made "micro"-LIDAR system, operated alongside the two MAGIC telescopes, is presented. This method allows the transmission through the atmospheric boundary layer, as well as through thin cloud layers, to be calculated. This is achieved by applying exponential fits to regions of the back-scattering signal that are dominated by Rayleigh scattering. Making this real-time transmission information available in the MAGIC data stream allows atmospheric corrections to be applied later in the analysis. Such corrections make it possible to extend the effective observation time of MAGIC by including data taken under adverse atmospheric conditions. In the future they will help reduce the systematic uncertainties of energy and flux.
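As a hedged sketch of the idea (not the MAGIC analysis code; the function and variable names are invented): in a Rayleigh-dominated region the range-corrected log signal falls off linearly with range, so a fit there yields the extinction coefficient and hence the layer transmission:

```python
import numpy as np

def layer_transmission(r, signal, mask):
    """Estimate the two-way transmission of a layer from LIDAR returns.

    r      : range bins (m), all > 0
    signal : raw back-scatter signal
    mask   : boolean array selecting a Rayleigh-dominated region

    In such a region ln(S * r^2) ~ a - 2*sigma*r, so a linear fit of the
    range-corrected log signal gives the extinction coefficient sigma.
    """
    log_rc = np.log(signal[mask] * r[mask] ** 2)
    slope, _intercept = np.polyfit(r[mask], log_rc, 1)
    sigma = -slope / 2.0                   # extinction per metre
    dr = r[mask].max() - r[mask].min()     # span of the fitted region
    return np.exp(-2.0 * sigma * dr)       # two-way transmission across it
```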
Nordez, Antoine; Cornu, Christophe; McNair, Peter
2006-08-01
The aim of this study was to assess the effects of static stretching on hamstring passive stiffness calculated using different data reduction methods. Subjects performed a maximal range of motion test, five cyclic stretching repetitions and a static stretching intervention that involved five 30-s static stretches. A computerised dynamometer allowed the measurement of torque and range of motion during passive knee extension. Stiffness was then calculated as the slope of the torque-angle relationship fitted using a second-order polynomial, a fourth-order polynomial, and an exponential model. The second-order polynomial and exponential models allowed the calculation of stiffness indices normalized to knee angle and passive torque, respectively. Prior to static stretching, stiffness levels were significantly different across the models. After stretching, while knee maximal joint range of motion increased, stiffness was shown to decrease. Stiffness decreased more at the extended knee joint angle, and the magnitude of change depended upon the model used. After stretching, the stiffness indices also varied according to the model used to fit data. Thus, the stiffness index normalized to knee angle was found to decrease whereas the stiffness index normalized to passive torque increased after static stretching. Stretching has significant effects on stiffness, but the findings highlight the need to carefully assess the effect of different models when analyzing such data.
Statistical mechanics of money and income
NASA Astrophysics Data System (ADS)
Dragulescu, Adrian; Yakovenko, Victor
2001-03-01
Money: In a closed economic system, money is conserved. Thus, by analogy with energy, the equilibrium probability distribution of money will assume the exponential Boltzmann-Gibbs form characterized by an effective temperature. We demonstrate how the Boltzmann-Gibbs distribution emerges in computer simulations of economic models. We discuss thermal machines, the role of debt, and models with broken time-reversal symmetry for which the Boltzmann-Gibbs law does not hold. Reference: A. Dragulescu and V. M. Yakovenko, "Statistical mechanics of money", Eur. Phys. J. B 17, 723-729 (2000), [cond-mat/0001432]. Income: Using tax and census data, we demonstrate that the distribution of individual income in the United States is exponential. Our calculated Lorenz curve without fitting parameters and Gini coefficient 1/2 agree well with the data. We derive the distribution function of income for families with two earners and show that it also agrees well with the data. The family data for the period 1947-1994 fit the Lorenz curve and Gini coefficient 3/8=0.375 calculated for two-earners families. Reference: A. Dragulescu and V. M. Yakovenko, "Evidence for the exponential distribution of income in the USA", cond-mat/0008305.
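A minimal simulation of the conserved-money exchange model described above; this is our own sketch under the stated assumptions, not the authors' code. Random pairwise transfers that conserve total money drive the distribution toward the exponential Boltzmann-Gibbs form:

```python
import numpy as np

rng = np.random.default_rng(42)
n_agents, steps = 10_000, 500_000
money = np.full(n_agents, 100.0)      # everyone starts equal; total is conserved

for _ in range(steps):
    i, j = rng.integers(n_agents, size=2)
    if i == j:
        continue
    dm = rng.uniform(0.0, money[i])   # agent i pays j a random affordable amount
    money[i] -= dm
    money[j] += dm

# Equilibrium should approach P(m) ~ exp(-m/T) with effective temperature
# T equal to the mean money per agent (100 in this setup).
hist, edges = np.histogram(money, bins=50, density=True)
print("effective temperature ~", money.mean())
```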
Damping profile of standing kink oscillations observed by SDO/AIA
NASA Astrophysics Data System (ADS)
Pascoe, D. J.; Goddard, C. R.; Nisticò, G.; Anfinogentov, S.; Nakariakov, V. M.
2016-01-01
Aims: Strongly damped standing and propagating kink oscillations are observed in the solar corona. This can be understood in terms of mode coupling, which causes the wave energy to be converted from the bulk transverse oscillation to localised, unresolved azimuthal motions. The damping rate can provide information about the loop structure, and theory predicts two possible damping profiles. Methods: We used the recently compiled catalogue of decaying standing kink oscillations of coronal loops to search for examples with high spatial and temporal resolution and sufficient signal quality to allow the damping profile to be examined. The location of the loop axis was tracked, detrended, and fitted with sinusoidal oscillations with Gaussian and exponential damping profiles. Results: Using the highest quality data currently available, we find that for the majority of our cases a Gaussian profile describes the damping behaviour at least as well as an exponential profile, which is consistent with the recently developed theory for the damping profile due to mode coupling.
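A hedged sketch of the model comparison described here (invented function names; not the authors' pipeline), fitting a detrended loop-axis displacement with both damping envelopes and comparing residuals:

```python
import numpy as np
from scipy.optimize import curve_fit

# Two candidate damping envelopes for a standing kink oscillation
def exp_damped(t, A, tau, P, phi):
    return A * np.exp(-t / tau) * np.cos(2 * np.pi * t / P + phi)

def gauss_damped(t, A, tau, P, phi):
    return A * np.exp(-(t / tau) ** 2) * np.cos(2 * np.pi * t / P + phi)

def better_profile(t, x):
    """Fit both profiles; return the name with lower residual sum of squares."""
    p0 = (x.max(), t.max() / 2, t.max() / 5, 0.0)  # crude starting guesses
    rss = {}
    for name, f in (("exponential", exp_damped), ("Gaussian", gauss_damped)):
        popt, _ = curve_fit(f, t, x, p0=p0, maxfev=10_000)
        rss[name] = np.sum((x - f(t, *popt)) ** 2)
    return min(rss, key=rss.get), rss
```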
Calculation of Rate Spectra from Noisy Time Series Data
Voelz, Vincent A.; Pande, Vijay S.
2011-01-01
As the resolution of experiments to measure folding kinetics continues to improve, it has become imperative to avoid bias that may come with fitting data to a predetermined mechanistic model. Towards this end, we present a rate spectrum approach to analyze timescales present in kinetic data. Computing rate spectra of noisy time series data via numerical discrete inverse Laplace transform is an ill-conditioned inverse problem, so a regularization procedure must be used to perform the calculation. Here, we show the results of different regularization procedures applied to noisy multi-exponential and stretched exponential time series, as well as data from time-resolved folding kinetics experiments. In each case, the rate spectrum method recapitulates the relevant distribution of timescales present in the data, with different priors on the rate amplitudes naturally corresponding to common biases toward simple phenomenological models. These results suggest an attractive alternative to the “Occam’s razor” philosophy of simply choosing models with the fewest number of relaxation rates. PMID:22095854
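A minimal sketch of a regularized rate-spectrum calculation of the kind described (our own construction; the function name and the simple ridge prior are illustrative choices, not the authors' exact regularizer):

```python
import numpy as np
from scipy.optimize import nnls

def rate_spectrum(t, y, n_rates=100, lam=1e-2):
    """Nonnegative, Tikhonov-regularized numerical inverse Laplace transform.

    Model: y(t) = sum_j a_j * exp(-k_j * t) over log-spaced rates k_j.
    The ridge term lam * I tames the ill-conditioning; t must be > 0.
    """
    k = np.logspace(np.log10(1 / t.max()), np.log10(1 / t.min()), n_rates)
    K = np.exp(-np.outer(t, k))                  # kernel matrix
    # Augment the system so nnls solves min ||K a - y||^2 + lam^2 ||a||^2
    A = np.vstack([K, lam * np.eye(n_rates)])
    b = np.concatenate([y, np.zeros(n_rates)])
    a, _ = nnls(A, b)
    return k, a                                  # rates and their amplitudes
```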
NASA Astrophysics Data System (ADS)
Hollett, Joshua W.; Pegoretti, Nicholas
2018-04-01
Separate, one-parameter, on-top density functionals are derived for the short-range dynamic correlation between opposite and parallel-spin electrons, in which the electron-electron cusp is represented by an exponential function. The combination of both functionals is referred to as the Opposite-spin exponential-cusp and Fermi-hole correction (OF) functional. The two parameters of the OF functional are set by fitting the ionization energies and electron affinities, of the atoms He to Ar, predicted by ROHF in combination with the OF functional to the experimental values. For ionization energies, the overall performance of ROHF-OF is better than completely renormalized coupled-cluster [CR-CC(2,3)] and better than, or as good as, conventional density functional methods. For electron affinities, the overall performance of ROHF-OF is less impressive. However, for both ionization energies and electron affinities of third row atoms, the mean absolute error of ROHF-OF is only 3 kJ mol-1.
Zhang, Ze-Wei; Wang, Hui; Qin, Qing-Hua
2015-01-01
A meshless numerical scheme combining the operator splitting method (OSM), the radial basis function (RBF) interpolation, and the method of fundamental solutions (MFS) is developed for solving transient nonlinear bioheat problems in two-dimensional (2D) skin tissues. In the numerical scheme, the nonlinearity caused by linear and exponential relationships of temperature-dependent blood perfusion rate (TDBPR) is taken into consideration. In the analysis, the OSM is used first to separate the Laplacian operator and the nonlinear source term, and then the second-order time-stepping schemes are employed for approximating two splitting operators to convert the original governing equation into a linear nonhomogeneous Helmholtz-type governing equation (NHGE) at each time step. Subsequently, the RBF interpolation and the MFS involving the fundamental solution of the Laplace equation are respectively employed to obtain approximated particular and homogeneous solutions of the nonhomogeneous Helmholtz-type governing equation. Finally, the full fields consisting of the particular and homogeneous solutions are enforced to fit the NHGE at interpolation points and the boundary conditions at boundary collocations for determining unknowns at each time step. The proposed method is verified by comparison of other methods. Furthermore, the sensitivity of the coefficients in the cases of a linear and an exponential relationship of TDBPR is investigated to reveal their bioheat effect on the skin tissue. PMID:25603180
Spectral Modeling of the EGRET 3EG Gamma Ray Sources Near the Galactic Plane
NASA Technical Reports Server (NTRS)
Bertsch, D. L.; Hartman, R. C.; Hunter, S. D.; Thompson, D. J.; Lin, Y. C.; Kniffen, D. A.; Kanbach, G.; Mayer-Hasselwander, H. A.; Reimer, O.; Sreekumar, P.
1999-01-01
The third EGRET catalog lists 84 sources within 10 deg of the Galactic Plane. Five of these are well-known spin-powered pulsars, 2 and possibly 3 others are blazars, and the remaining 74 are classified as unidentified, although 6 of these are likely to be artifacts of nearby strong sources. Several of the remaining 68 unidentified sources have been noted as having positional agreement with supernova remnants and OB associations. Others may be radio-quiet pulsars like Geminga, and still others may belong to a totally new class of sources. The energy spectral distributions of these sources are an important clue to their identification. In this paper, the spectra of the sources within 10 deg of the Galactic Plane are fit with three different functional forms: a single power law, two power laws, and a power law with an exponential cutoff. Where possible, the best fit is selected with statistical tests. Twelve, and possibly an additional 5 sources, are found to have spectra that are fit by a breaking power law or by the power law with exponential cutoff function.
Imfit: A Fast, Flexible Program for Astronomical Image Fitting
NASA Astrophysics Data System (ADS)
Erwin, Peter
2014-08-01
Imfit is an open-source astronomical image-fitting program specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. Its object-oriented design allows new types of image components (2D surface-brightness functions) to be easily written and added to the program. Image functions provided with Imfit include Sersic, exponential, and Gaussian profiles along with Core-Sersic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through 3D luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard chi^2 statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or the Cash statistic; the latter is particularly appropriate for cases of Poisson data in the low-count regime. The C++ source code for Imfit is available under the GNU Public License.
Discounting of reward sequences: a test of competing formal models of hyperbolic discounting
Zarr, Noah; Alexander, William H.; Brown, Joshua W.
2014-01-01
Humans are known to discount future rewards hyperbolically in time. Nevertheless, a formal recursive model of hyperbolic discounting has been elusive until recently, with the introduction of the hyperbolically discounted temporal difference (HDTD) model. Prior to that, models of learning (especially reinforcement learning) have relied on exponential discounting, which generally provides poorer fits to behavioral data. Recently, it has been shown that hyperbolic discounting can also be approximated by a summed distribution of exponentially discounted values, instantiated in the μAgents model. The HDTD model and the μAgents model differ in one key respect, namely how they treat sequences of rewards. The μAgents model is a particular implementation of a Parallel discounting model, which values sequences based on the summed value of the individual rewards whereas the HDTD model contains a non-linear interaction. To discriminate among these models, we observed how subjects discounted a sequence of three rewards, and then we tested how well each candidate model fit the subject data. The results show that the Parallel model generally provides a better fit to the human data. PMID:24639662
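For reference (standard forms, not spelled out in the abstract): exponential and hyperbolic discounting of a reward A at delay D, and the sum-of-exponentials approximation underlying μAgents-style models, are

$$V_{\exp}(D) = A\,e^{-kD}, \qquad V_{\mathrm{hyp}}(D) = \frac{A}{1 + kD}, \qquad \frac{1}{1+kD} \approx \sum_i w_i\, e^{-k_i D},\quad \sum_i w_i = 1,$$

where the approximation improves as more exponentially discounting "agents" with rates k_i are mixed.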
NASA Astrophysics Data System (ADS)
Féry, C.; Racine, B.; Vaufrey, D.; Doyeux, H.; Cinà, S.
2005-11-01
The main process responsible for the luminance degradation in organic light-emitting diodes (OLEDs) driven under constant current has not yet been identified. In this paper, we propose an approach to describe the intrinsic mechanisms involved in the OLED aging. We first show that a stretched exponential decay can be used to fit almost all the luminance versus time curves obtained under different driving conditions. In this way, we are able to prove that they can all be described by employing a single free parameter model. By using an approach based on local relaxation events, we will demonstrate that a single mechanism is responsible for the dominant aging process. Furthermore, we will demonstrate that the main relaxation event is the annihilation of one emissive center. We then use our model to fit all the experimental data measured under different driving condition, and show that by carefully fitting the accelerated luminance lifetime-curves, we can extrapolate the low-luminance lifetime needed for real display applications, with a high degree of accuracy.
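The stretched exponential decay used for these fits has the standard Kohlrausch form (notation ours):

$$L(t) = L_0 \exp\!\left[-\left(t/\tau\right)^{\beta}\right], \qquad 0 < \beta \le 1,$$

where β is the stretching exponent and τ the characteristic lifetime.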
Difference in Dwarf Galaxy Surface Brightness Profiles as a Function of Environment
NASA Astrophysics Data System (ADS)
Lee, Youngdae; Park, Hong Soo; Kim, Sang Chul; Moon, Dae-Sik; Lee, Jae-Joon; Kim, Dong-Jin; Cha, Sang-Mok
2018-05-01
We investigate surface brightness profiles (SBPs) of dwarf galaxies in field, group, and cluster environments. With deep BV I images from the Korea Microlensing Telescope Network Supernova Program, SBPs of 38 dwarfs in the NGC 2784 group are fitted by a single-exponential or double-exponential model. We find that 53% of the dwarfs are fitted with single-exponential profiles (“Type I”), while 47% of the dwarfs show double-exponential profiles; 37% of all dwarfs have smaller sizes for the outer part than the inner part (“Type II”), while 10% have a larger outer than inner part (“Type III”). We compare these results with those in the field and in the Virgo cluster, where the SBP types of 102 field dwarfs are compiled from a previous study and the SBP types of 375 cluster dwarfs are measured using SDSS r-band images. As a result, the distributions of SBP types are different in the three environments. Common SBP types for the field, the NGC 2784 group, and the Virgo cluster are Type II, Type I and II, and Type I and III profiles, respectively. After comparing the sizes of dwarfs in different environments, we suggest that since the sizes of some dwarfs are changed due to environmental effects, SBP types are capable of being transformed and the distributions of SBP types in the three environments are different. We discuss possible environmental mechanisms for the transformation of SBP types. Based on data collected at KMTNet Telescopes and SDSS.
Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto
2018-03-01
High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 °C. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and Bigelow-type and empirical models for the b'(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered two- and one-step nonlinear regressions for predicting the survival fraction; both regression types provided an adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to the Akaike information criterion, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential decay secondary models as a function of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, taking d = 5 (t_5) as the criterion of a 5 log10 reduction (5D); the desired reductions in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
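For orientation, a Weibull (Mafart-type) survival model of the kind named above can be fitted to log survival fractions in a few lines; the data points and starting values below are invented for the sketch, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, b, n):
    """Mafart-type Weibull model: log10(N/N0) = -b * t**n."""
    return -b * t ** n

# Illustrative survival data with tailing (upward concavity), not the paper's values.
t = np.array([0.5, 1, 2, 3, 4, 5, 6, 8], dtype=float)       # minutes
log_s = np.array([-0.8, -1.4, -2.2, -2.7, -3.1, -3.4, -3.6, -3.9])

(b, n), _ = curve_fit(weibull_log_survival, t, log_s, p0=[1.0, 0.5])
print(f"b = {b:.3f}, n = {n:.3f}")                           # n < 1 indicates tailing

# Time to a 5-log10 reduction (t_5), from -b * t**n = -5:
t5 = (5.0 / b) ** (1.0 / n)
print(f"t_5 ≈ {t5:.2f} min")
```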
LANDSAT-D investigations in snow hydrology
NASA Technical Reports Server (NTRS)
Dozier, J.
1983-01-01
The atmospheric radiative transfer calculation program (ATRAD) and its supporting programs (setting up atmospheric profiles, making Mie tables and an exponential-sum-fitting table) were completed. More sophisticated treatment of aerosol scattering (including the angular phase function or asymmetry factor) and multichannel analysis of results from ATRAD are being developed. Some progress was made on a Monte Carlo program for examining two-dimensional effects, specifically a surface boundary condition that varies across a scene. The MONTE program combines ATRAD and the Monte Carlo method to produce an atmospheric point spread function. Currently the procedure passes monochromatic tests and the results are reasonable.
Direct measurement of cyclotron coherence times of high-mobility two-dimensional electron gases.
Wang, X; Hilton, D J; Reno, J L; Mittleman, D M; Kono, J
2010-06-07
We have observed long-lived (approximately 30 ps) coherent oscillations of charge carriers due to cyclotron resonance (CR) in high-mobility two-dimensional electrons in GaAs in perpendicular magnetic fields using time-domain terahertz spectroscopy. The observed coherent oscillations were fitted well by sinusoids with exponentially decaying amplitudes, through which we were able to provide direct and precise measures of the decay times and oscillation frequencies simultaneously. This method thus overcomes the CR saturation effect, which is known to prevent determination of true CR linewidths in high-mobility electron systems using Fourier-transform infrared spectroscopy.
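The fitting approach described, a sinusoid with exponentially decaying amplitude, can be sketched as follows on synthetic data; the trace parameters (0.6 THz, 30 ps) are stand-ins, not the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_cosine(t, A, tau, f, phi):
    """Sinusoid with exponentially decaying amplitude."""
    return A * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi)

# Synthetic stand-in for a time-domain cyclotron-resonance trace.
rng = np.random.default_rng(1)
t = np.linspace(0, 100, 2000)                                # ps
y = damped_cosine(t, 1.0, 30.0, 0.6, 0.0) + rng.normal(0, 0.02, t.size)

popt, _ = curve_fit(damped_cosine, t, y, p0=[1.0, 20.0, 0.55, 0.0])
A, tau, f, phi = popt
print(f"decay time = {tau:.1f} ps, frequency = {f:.3f} THz")
```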
Aging Effects on Microstructure and Creep in Sn-3.8Ag-0.7Cu Solder
2007-09-01
demonstrated that the primary creep data for ball joints can be fitted well to an exponential law. Fit parameters for the tests accomplished at 250C... (Thesis by Orlando Cornejo, September 2007; advisor: Indranath Dutta.)
Amplitude, Latency, and Peak Velocity in Accommodation and Disaccommodation Dynamics
Papadatou, Eleni; Ferrer-Blasco, Teresa; Montés-Micó, Robert
2017-01-01
The aim of this work was to ascertain whether there are differences in the amplitude, latency, and peak velocity of accommodation and disaccommodation responses when different analysis strategies are used to compute them, such as fitting different functions to the responses or smoothing them prior to computing the parameters. Accommodation and disaccommodation responses from four subjects to pulse changes in demand were recorded by means of aberrometry. Three different strategies were followed to analyze these responses: fitting an exponential function to the experimental data; fitting a Boltzmann sigmoid function to the data; and smoothing the data. Amplitude, latency, and peak velocity of the responses were extracted. Significant differences were found between the peak velocity in accommodation computed by fitting an exponential function and by smoothing the experimental data (mean difference 2.36 D/s). Regarding disaccommodation, significant differences were found for latency and peak velocity calculated with the same two strategies (mean differences of 0.15 s and −3.56 D/s, respectively). The strategy used to analyze accommodation and disaccommodation responses seems to affect the parameters that describe their dynamics. These results highlight the importance of choosing the most adequate analysis strategy for each individual to obtain the parameters that characterize accommodation and disaccommodation dynamics. PMID:29226128
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, since it requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to non-linear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response, and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event counting histograms may also be explained by the use of the ubiquitous, fast Levenberg-Marquardt (L-M) fitting procedure for fitting non-linear models using least squares fitting (simple searches obtain ~10,000 references; this does not include those who use it but do not know they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter.
This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence; the downward gradient methods, on the other hand, have a much wider domain of convergence but converge extremely slowly near the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on. Only those who are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson deviates. However, since there is a simple, analytical formula for the appropriate MLE measure for Poisson deviates, it is inexcusable that least squares estimators are used almost exclusively when fitting event counting histograms. Ways have been found to use successive non-linear least squares fits to obtain similarly unbiased results, but this procedure is justified by simulation, must be re-tested when conditions change significantly, and requires two successive fits. There is a great need for a fitting routine for the MLE for Poisson deviates that has convergence domains and rates comparable to non-linear least squares L-M fitting. We show in this report that a simple way to achieve that goal is to use the L-M fitting procedure to minimize not the least squares measure, but the MLE for Poisson deviates.
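One way to realize this idea, offered as a sketch rather than the authors' implementation, is to hand a least-squares engine (here SciPy's trust-region variant standing in for L-M) residuals whose sum of squares equals the Poisson deviance, so that minimizing them minimizes the Poisson MLE criterion; the model and simulated histogram below are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, t):
    """Single-exponential decay plus constant background."""
    a, tau, bg = params
    return a * np.exp(-t / tau) + bg

def poisson_deviance_residuals(params, t, counts):
    """Residuals whose sum of squares is the Poisson deviance, so a
    least-squares minimizer optimizes the Poisson MLE criterion
    instead of ordinary chi-square."""
    m = np.clip(model(params, t), 1e-12, None)      # keep the log well defined
    dev = m - counts
    nz = counts > 0
    dev[nz] += counts[nz] * np.log(counts[nz] / m[nz])
    return np.sqrt(2.0 * np.clip(dev, 0.0, None))

# Simulated low-count histogram (e.g., a fluorescence-lifetime decay).
rng = np.random.default_rng(2)
t = np.arange(256) * 0.05                           # ns
counts = rng.poisson(model([50.0, 2.5, 1.0], t))

fit = least_squares(poisson_deviance_residuals, x0=[30.0, 1.0, 0.5],
                    args=(t, counts), method="trf")
print("fitted a, tau, bg:", fit.x)
```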
Photoacoustic signal attenuation analysis for the assessment of thin layers thickness in paintings
NASA Astrophysics Data System (ADS)
Tserevelakis, George J.; Dal Fovo, Alice; Melessanaki, Krystalia; Fontana, Raffaella; Zacharakis, Giannis
2018-03-01
This study introduces a novel method for the thickness estimation of thin paint layers in works of art, based on photoacoustic signal attenuation analysis (PAcSAA). Ad hoc designed samples with acrylic paint layers (Primary Red Magenta, Cadmium Yellow, Ultramarine Blue) of various thicknesses on glass substrates were realized for the specific application. After characterization by Optical Coherence Tomography imaging, samples were irradiated at the back side using low energy nanosecond laser pulses of 532 nm wavelength. Photoacoustic waves undergo a frequency-dependent exponential attenuation through the paint layer, before being detected by a broadband ultrasonic transducer. Frequency analysis of the recorded time-domain signals allows for the estimation of the average transmitted frequency function, which shows an exponential decay with the layer thickness. Ultrasonic attenuation models were obtained for each pigment and used to fit the data acquired on an inhomogeneous painted mock-up simulating a real canvas painting. Thickness evaluation through PAcSAA resulted in excellent agreement with cross-section analysis with a conventional brightfield microscope. The results of the current study demonstrate the potential of the proposed PAcSAA method for the non-destructive stratigraphic analysis of painted artworks.
SU-E-T-86: A Systematic Method for GammaKnife SRS Fetal Dose Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geneser, S; Paulsson, A; Sneed, P
Purpose: Estimating fetal dose is critical to the decision-making process when radiation treatment is indicated during pregnancy. Fetal doses less than 5cGy confer no measurable non-cancer developmental risks but can produce a threefold increase in developing childhood cancer. In this study, we estimate fetal dose for a patient receiving Gamma Knife stereotactic radiosurgery (GKSRS) treatment and develop a method to estimate dose directly from plan details. Methods: A patient underwent GKSRS on a Perfexion unit for eight brain metastases (two infratentorial and one brainstem). Dose measurements were performed using a CC13, head phantom, and solid water. Superficial doses to the thyroid, sternum, and pelvis were measured using MOSFETs during treatment. Because the fetal dose was too low to measure accurately, we obtained measurements proximal to the isocenter, fitted them to an exponential function, and extrapolated the dose to the fundus of the uterus, uterine midpoint, and pubic symphysis for both the preliminary and delivered plans. Results: The R-squared fit for the delivered doses was 0.995. The estimated fetal doses for the 72 minute preliminary and 138 minute delivered plans range from 0.0014 to 0.028cGy and 0.07 to 0.38cGy, respectively. MOSFET readings during treatment were just above background for the thyroid and negligible for all inferior positions. The method for estimating fetal dose from plan shot information was within 0.2cGy of the measured values at 14cm cranial to the fetal location. Conclusion: Estimated fetal doses for both the preliminary and delivered plans were well below the 5cGy recommended limit. Due to Perfexion shielding, internal dose is primarily governed by attenuation and drops off exponentially. This is the first work that reports fetal dose for a GK Perfexion unit. Although multiple lesions were treated and the duration of treatment was long, the estimated fetal dose remained very low.
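Because an exponential falloff is linear in log space, the extrapolation step described can be sketched with an ordinary linear fit; the distances and doses below are invented for illustration, not the measured values.

```python
import numpy as np

# Illustrative point doses (cGy) measured at increasing distance (cm)
# from the isocenter; values are made up for the sketch.
distance = np.array([10, 15, 20, 25, 30], dtype=float)
dose = np.array([5.0, 1.8, 0.65, 0.24, 0.09])

# Exponential falloff D(x) = D0 * exp(mu * x) is linear in log space,
# so a degree-1 polyfit of ln(dose) vs distance gives mu and D0.
mu, ln_d0 = np.polyfit(distance, np.log(dose), 1)
d0 = np.exp(ln_d0)
print(f"D(x) ≈ {d0:.2f} * exp({mu:.3f} * x) cGy")

# Extrapolate to a fetal reference point, e.g. 50 cm from the isocenter:
x_fetus = 50.0
print(f"extrapolated fetal dose ≈ {d0 * np.exp(mu * x_fetus):.4f} cGy")
```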
Comparative Analyses of Creep Models of a Solid Propellant
NASA Astrophysics Data System (ADS)
Zhang, J. B.; Lu, B. J.; Gong, S. F.; Zhao, S. P.
2018-05-01
Creep experiments on solid propellant samples under five different stresses were carried out at 293.15 K and 323.15 K. To express the creep properties of this solid propellant, viscoelastic models, i.e., the three-parameter solid, three-parameter fluid, four-parameter solid, and four-parameter fluid, as well as an exponential model, are considered. On the basis of least-squares fitting of all model parameters at the different stresses, a nonlinear fitting procedure is used to analyze the creep properties. The study shows that the four-parameter solid model best expresses the creep behavior of the propellant samples. However, the three-parameter solid and exponential models do not reproduce the initial value of the creep process well, while the modified four-parameter models agree well with the acceleration characteristics of the creep process.
Durtschi, Jacob D; Stevenson, Jeffery; Hymas, Weston; Voelkerding, Karl V
2007-02-01
Real-time PCR data analysis for quantification has been the subject of many studies aimed at the identification of new and improved quantification methods. Several analysis methods have been proposed as superior alternatives to the common variations of the threshold crossing method. Notably, sigmoidal and exponential curve fit methods have been proposed. However, these studies have primarily analyzed real-time PCR with intercalating dyes such as SYBR Green. Clinical real-time PCR assays, in contrast, often employ fluorescent probes whose real-time amplification fluorescence curves differ from those of intercalating dyes. In the current study, we compared four analysis methods related to recent literature: two versions of the threshold crossing method, a second derivative maximum method, and a sigmoidal curve fit method. These methods were applied to a clinically relevant real-time human herpes virus type 6 (HHV6) PCR assay that used a minor groove binding (MGB) Eclipse hybridization probe as well as an Epstein-Barr virus (EBV) PCR assay that used an MGB Pleiades hybridization probe. We found that the crossing threshold method yielded more precise results when analyzing the HHV6 assay, which was characterized by lower signal/noise and less developed amplification curve plateaus. In contrast, the EBV assay, characterized by greater signal/noise and amplification curves with plateau regions similar to those observed with intercalating dyes, gave results with statistically similar precision by all four analysis methods.
Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv
2012-12-11
Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would better model the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to erroneous estimates. We propose a more flexible modeling based on a gamma distributed signal and normally distributed background noise and develop the associated background correction, implemented in the R-package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling paves the way for future investigations, in particular to examine the characteristics of pre-processing strategies.
The probability distribution model of air pollution index and its dominants in Kuala Lumpur
NASA Astrophysics Data System (ADS)
AL-Dhurafi, Nasr Ahmed; Razali, Ahmad Mahir; Masseran, Nurulkamal; Zamzuri, Zamira Hasanah
2016-11-01
This paper focuses on the statistical modeling of the distributions of the air pollution index (API) and its sub-index data observed at Kuala Lumpur in Malaysia. Five pollutants or sub-indexes are measured, including carbon monoxide (CO), sulphur dioxide (SO2), nitrogen dioxide (NO2), and particulate matter (PM10). Four probability distributions are considered, namely the log-normal, exponential, Gamma and Weibull distributions, in search of the best-fit distribution for the Malaysian air pollutant data. To determine the best distribution for describing the air pollutant data, five goodness-of-fit criteria are applied. This helps minimize the uncertainty in pollution resource estimates and improve the assessment phase of planning. The conflict among criterion results in selecting the best distribution was overcome by using the weight-of-ranks method. We found that the Gamma distribution is the best distribution for the majority of air pollutant data in Kuala Lumpur.
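A sketch of this kind of distribution selection using scipy.stats; the sample is simulated, and AIC plus a Kolmogorov-Smirnov statistic stand in for the paper's five goodness-of-fit criteria.

```python
import numpy as np
from scipy import stats

# Illustrative pollutant concentration sample (the paper uses API sub-index data).
rng = np.random.default_rng(3)
data = rng.gamma(shape=2.0, scale=15.0, size=500)

candidates = {
    "lognormal":   stats.lognorm,
    "exponential": stats.expon,
    "gamma":       stats.gamma,
    "weibull":     stats.weibull_min,
}

for name, dist in candidates.items():
    params = dist.fit(data)                         # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(data, *params))
    aic = 2 * len(params) - 2 * loglik              # Akaike information criterion
    ks = stats.kstest(data, dist.cdf, args=params).statistic
    print(f"{name:11s} AIC = {aic:8.1f}  KS = {ks:.4f}")
```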
Hyperopic photorefractive keratectomy and central islands
NASA Astrophysics Data System (ADS)
Gobbi, Pier Giorgio; Carones, Francesco; Morico, Alessandro; Vigo, Luca; Brancato, Rosario
1998-06-01
We have evaluated the refractive evolution in patients treated with hyperopic PRK to assess the extent of the initial overcorrection and the time constant of regression. To this end, the time history of the refractive error (i.e., the difference between achieved and intended refractive correction) has been fitted by means of an exponential statistical model, giving information characterizing the surgical procedure with a direct clinical meaning. Both hyperopic and myopic PRK procedures have been analyzed by this method. The analysis of the fitting model parameters shows that hyperopic PRK patients exhibit a definitely higher initial overcorrection than myopic ones, and a much longer regression time constant. A common mechanism is proposed to be responsible for the refractive outcomes in hyperopic treatments and in myopic patients exhibiting significant central islands. The interpretation is in terms of superhydration of the central cornea, and is based on a simple physical model evaluating the amount of centripetal compression in the apical cornea.
Not all nonnormal distributions are created equal: Improved theoretical and measurement precision.
Joo, Harry; Aguinis, Herman; Bradley, Kyle J
2017-07-01
We offer a four-category taxonomy of individual output distributions (i.e., distributions of cumulative results): (1) pure power law; (2) lognormal; (3) exponential tail (including exponential and power law with an exponential cutoff); and (4) symmetric or potentially symmetric (including normal, Poisson, and Weibull). The four categories are uniquely associated with mutually exclusive generative mechanisms: self-organized criticality, proportionate differentiation, incremental differentiation, and homogenization. We then introduce distribution pitting, a falsification-based method for comparing distributions to assess how well each one fits a given data set. In doing so, we also introduce decision rules to determine the likely dominant shape and generative mechanism among many that may operate concurrently. Next, we implement distribution pitting using 229 samples of individual output for several occupations (e.g., movie directors, writers, musicians, athletes, bank tellers, call center employees, grocery checkers, electrical fixture assemblers, and wirers). Results suggest that for 75% of our samples, exponential tail distributions and their generative mechanism (i.e., incremental differentiation) likely constitute the dominant distribution shape and explanation of nonnormally distributed individual output. This finding challenges past conclusions indicating the pervasiveness of other types of distributions and their generative mechanisms. Our results further contribute to theory by offering premises about the link between past and future individual output. For future research, our taxonomy and methodology can be used to pit distributions of other variables (e.g., organizational citizenship behaviors). Finally, we offer practical insights on how to increase overall individual output and produce more top performers. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.
2016-01-01
Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
Nocturnal Dynamics of Sleep-Wake Transitions in Patients With Narcolepsy.
Zhang, Xiaozhe; Kantelhardt, Jan W; Dong, Xiao Song; Krefting, Dagmar; Li, Jing; Yan, Han; Pillmann, Frank; Fietze, Ingo; Penzel, Thomas; Zhao, Long; Han, Fang
2017-02-01
We investigate how characteristics of sleep-wake dynamics in humans are modified by narcolepsy, a clinical condition that is supposed to destabilize sleep-wake regulation. Subjects with and without cataplexy are considered separately. Differences in sleep scoring habits as a possible confounder have been examined. Four groups of subjects are considered: narcolepsy patients from China with (n = 88) and without (n = 15) cataplexy, healthy controls from China (n = 110) and from Europe (n = 187, 2 nights each). After sleep-stage scoring and calculation of sleep characteristic parameters, the distributions of wake-episode durations and sleep-episode durations are determined for each group and fitted by power laws (exponent α) and by exponentials (decay time τ). We find that wake duration distributions are consistent with power laws for healthy subjects (China: α = 0.88, Europe: α = 1.02). Wake durations in all groups of narcolepsy patients, however, follow the exponential law (τ = 6.2-8.1 min). All sleep duration distributions are best fitted by exponentials on long time scales (τ = 34-82 min). We conclude that narcolepsy mainly alters the control of wake-episode durations but not sleep-episode durations, irrespective of cataplexy. Observed distributions of shortest wake and sleep durations suggest that differences in scoring habits regarding the scoring of short-term sleep stages may notably influence the fitting parameters but do not affect the main conclusion. © Sleep Research Society 2016. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
Improving the Performance of the Prony Method Using a Wavelet Domain Filter for MRI Denoising
Lentini, Marianela; Paluszny, Marco
2014-01-01
The Prony methods are used for exponential fitting. We use a variant of the Prony method for abnormal brain tissue detection in sequences of T2-weighted magnetic resonance images. Here, MR images are considered to be affected only by Rician noise, and a new wavelet domain bilateral filtering process is implemented to reduce the noise in the images. This filter is a modification of Kazubek's algorithm and we use synthetic images to show the ability of the new procedure to suppress noise and compare its performance with respect to the original filter, using quantitative and qualitative criteria. The tissue classification process is illustrated using a real sequence of T2 MR images, and the filter is applied to each image before using the variant of the Prony method. PMID:24834108
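For readers unfamiliar with the technique, a bare-bones classical Prony fit (not the paper's variant) looks like this: a linear-prediction solve, a polynomial root-finding step for the exponents, and a Vandermonde least-squares solve for the amplitudes.

```python
import numpy as np

def prony(y, p, dt):
    """Classic Prony fit of y[n] ≈ sum_k a_k * exp(s_k * n * dt)
    with p exponential terms on uniformly sampled data."""
    N = len(y)
    # 1) Linear prediction: y[n] = -sum_{j=1..p} c_j * y[n-j].
    A = np.column_stack([y[p - j:N - j] for j in range(1, p + 1)])
    c = np.linalg.lstsq(A, -y[p:N], rcond=None)[0]
    # 2) Roots of the characteristic polynomial give the exponents.
    z = np.roots(np.concatenate(([1.0], c)))
    s = np.log(z.astype(complex)) / dt
    # 3) Amplitudes from a Vandermonde least-squares problem.
    V = z[None, :] ** np.arange(N)[:, None]
    a = np.linalg.lstsq(V, y.astype(complex), rcond=None)[0]
    return a, s

# Two-exponential test signal.
dt = 0.1
n = np.arange(100)
y = 3.0 * np.exp(-0.5 * n * dt) + 1.0 * np.exp(-2.0 * n * dt)
a, s = prony(y, p=2, dt=dt)
print("amplitudes:", np.round(a.real, 3))
print("rates:     ", np.round(s.real, 3))
```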
NASA Astrophysics Data System (ADS)
Hu, Li; Zhao, Nanjing; Liu, Wenqing; Meng, Deshuo; Fang, Li; Wang, Yin; Yu, Yang; Ma, Mingjun
2015-08-01
Heavy metals in water can be deposited on graphite flakes, which can serve as an enrichment method for laser-induced breakdown spectroscopy (LIBS); this approach is studied in this paper. The graphite samples were prepared with an automatic device composed of a loading and unloading module, a quantitative solution-adding module, a rapid heating and drying module, and a precise rotating module. The experimental results showed that the sample preparation method had no significant effect on sample distribution, and the LIBS signal accumulated over 20 pulses was stable and repeatable. With an increasing amount of sample solution on the graphite flake, the peak intensity at Cu I 324.75 nm followed an exponential function with a correlation coefficient of 0.9963, while the background intensity remained unchanged. The limit of detection (LOD) was calculated through linear fitting of the peak intensity versus the concentration. The LOD decreased rapidly with an increasing amount of sample solution until the amount exceeded 20 mL; the correlation coefficient of the exponential fit was 0.991. The LODs of Pb, Ni, Cd, Cr and Zn after evaporating different amounts of sample solution on the graphite flakes were measured, and their variation with the sample solution amount was similar to that for Cu. The experimental data and conclusions could provide a reference for automatic sample preparation and in situ detection of heavy metals. Supported by National Natural Science Foundation of China (No. 60908018), National High Technology Research and Development Program of China (No. 2013AA065502) and Anhui Province Outstanding Youth Science Fund of China (No. 1108085J19)
Photoluminescence study of MBE grown InGaN with intentional indium segregation
NASA Astrophysics Data System (ADS)
Cheung, Maurice C.; Namkoong, Gon; Chen, Fei; Furis, Madalina; Pudavar, Haridas E.; Cartwright, Alexander N.; Doolittle, W. Alan
2005-05-01
Proper control of MBE growth conditions has yielded an In0.13Ga0.87N thin film sample with emission consistent with In segregation. The photoluminescence (PL) from this epilayer showed multiple emission components. Moreover, temperature and power dependent studies of the PL demonstrated that two of the components were excitonic in nature and consistent with indium phase separation. At 15 K, time-resolved PL showed a non-exponential PL decay that was well fitted with the stretched exponential solution expected for disordered systems. Consistent with the assumed carrier hopping mechanism of this model, the effective lifetime, τ, and the stretched exponential parameter, β, decrease with increasing emission energy. Finally, room temperature micro-PL using a confocal microscope showed spatial clustering of low energy emission.
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Gyekenyesi, John P.
2002-01-01
The life prediction analysis based on an exponential crack velocity formulation was examined using a variety of experimental data on glass and advanced structural ceramics in constant stress-rate ("dynamic fatigue") and preload testing at ambient and elevated temperatures. The data fit to the strength versus ln(stress rate) relation was found to be very reasonable for most of the materials. It was also found that the preloading technique was equally applicable for the case of slow crack growth (SCG) parameter n > 30. The major limitation of the exponential crack velocity formulation, however, was that an inert strength of the material must be known a priori to evaluate the important SCG parameter n, a significant drawback as compared to the conventional power-law crack velocity formulation.
IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erwin, Peter; Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München
2015-02-01
I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
Basis convergence of range-separated density-functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franck, Odile, E-mail: odile.franck@etu.upmc.fr; Mussard, Bastien, E-mail: bastien.mussard@upmc.fr; CNRS, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris
2015-02-21
Range-separated density-functional theory (DFT) is an alternative approach to Kohn-Sham density-functional theory. The strategy of range-separated density-functional theory consists in separating the Coulomb electron-electron interaction into long-range and short-range components and treating the long-range part by an explicit many-body wave-function method and the short-range part by a density-functional approximation. Among the advantages of using many-body methods for the long-range part of the electron-electron interaction is that they are much less sensitive to the one-electron atomic basis compared to the case of the standard Coulomb interaction. Here, we provide a detailed study of the basis convergence of range-separated density-functional theory. We study the convergence of the partial-wave expansion of the long-range wave function near the electron-electron coalescence. We show that the rate of convergence is exponential with respect to the maximal angular momentum L for the long-range wave function, whereas it is polynomial for the case of the Coulomb interaction. We also study the convergence of the long-range second-order Møller-Plesset correlation energy of four systems (He, Ne, N2, and H2O) with the cardinal number X of the Dunning basis sets cc-p(C)VXZ and find that the error in the correlation energy is best fitted by an exponential in X. This leads us to propose a three-point complete-basis-set extrapolation scheme for range-separated density-functional theory based on an exponential formula.
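Assuming the exponential form E(X) = E_CBS + A·exp(-B·X) with three consecutive cardinal numbers, the extrapolation has a closed-form solution; the energies below are invented placeholders, not values from the paper.

```python
import numpy as np

def cbs_extrapolate(X, E):
    """Three-point complete-basis-set extrapolation assuming
    E(X) = E_CBS + A*exp(-B*X) with consecutive cardinal numbers X."""
    X = np.asarray(X, float)
    E = np.asarray(E, float)
    assert len(X) == 3 and np.allclose(np.diff(X), 1.0)
    d1, d2 = E[0] - E[1], E[1] - E[2]
    r = d1 / d2                                     # equals exp(B)
    B = np.log(r)
    E_cbs = E[2] - d2 / (r - 1.0)
    A = (E[2] - E_cbs) * np.exp(B * X[2])
    return E_cbs, A, B

# Illustrative correlation energies (hartree) for X = 2, 3, 4;
# the numbers are made up for the sketch.
E_cbs, A, B = cbs_extrapolate([2, 3, 4], [-0.0300, -0.0330, -0.0339])
print(f"E_CBS = {E_cbs:.5f} Eh, A = {A:.4f}, B = {B:.3f}")
```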
NASA Technical Reports Server (NTRS)
Choi, Sung R.; Nemeth, Noel N.; Gyekenyesi, John P.
2002-01-01
The previously determined life prediction analysis based on an exponential crack-velocity formulation was examined using a variety of experimental data on advanced structural ceramics tested under constant stress and cyclic stress loading at ambient and elevated temperatures. The data fit to the relation between the time to failure and applied stress (or maximum applied stress in cyclic loading) was very reasonable for most of the materials studied. It was also found that life prediction for cyclic stress loading from data of constant stress loading in the exponential formulation was in good agreement with the experimental data, resulting in a similar degree of accuracy as compared with the power-law formulation. The major limitation in the exponential crack-velocity formulation, however, was that the inert strength of a material must be known a priori to evaluate the important slow-crack-growth (SCG) parameter n, a significant drawback as compared with the conventional power-law crack-velocity formulation.
Spatial analysis of soil organic carbon in Zhifanggou catchment of the Loess Plateau.
Li, Mingming; Zhang, Xingchang; Zhen, Qing; Han, Fengpeng
2013-01-01
Soil organic carbon (SOC) reflects soil quality and plays a critical role in soil protection, food safety, and global climate change. This study involved grid sampling at different depths (6 layers) between 0 and 100 cm in a catchment. A total of 1282 soil samples were collected from 215 plots over 8.27 km². A combination of conventional analytical methods and geostatistical methods was used to analyze the data for spatial variability and soil carbon content patterns. The mean SOC content in the 1282 samples from the study field was 3.08 g·kg⁻¹. The SOC content of each layer decreased with increasing soil depth following a power function. The SOC content of each layer was moderately variable and followed a lognormal distribution. The semi-variograms of the SOC contents of the six layers were fit with the exponential, spherical, exponential, Gaussian, exponential, and exponential models, respectively. A moderate spatial dependence was observed in the 0-10 and 10-20 cm layers, resulting from both stochastic and structural factors. The spatial distribution of SOC content in the four layers between 20 and 100 cm was mainly controlled by structural factors. Correlations within each layer were observed between 234 and 562 m. A classical Kriging interpolation was used to directly visualize the spatial distribution of SOC in the catchment. The variability in spatial distribution was related to topography, land use type, and human activity. Finally, SOC content decreased with depth. Our results suggest that ordinary Kriging interpolation can directly reveal the spatial distribution of SOC and that the sampling distance in this study is sufficient for interpolation and plotting. More research is needed, however, to clarify the spatial variability at larger scales and to better understand the factors controlling the spatial variability of soil carbon in the Loess Plateau region.
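As a sketch of the geostatistical step, an empirical semivariogram can be computed from scattered samples and fitted with the exponential model; the synthetic coordinates and values below are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_semivariogram(h, nugget, sill, a):
    """Exponential semivariogram model (effective range ≈ 3a)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / a))

# Synthetic 2D sample locations and SOC-like values (illustrative only).
rng = np.random.default_rng(4)
xy = rng.uniform(0, 2000, size=(200, 2))                      # metres
z = 3.0 + 0.5 * np.sin(xy[:, 0] / 400.0) + rng.normal(0, 0.2, 200)

# Empirical semivariogram: gamma(h) = 0.5 * mean((z_i - z_j)^2) per lag bin.
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
g = 0.5 * (z[:, None] - z[None, :]) ** 2
iu = np.triu_indices(len(z), k=1)
bins = np.linspace(0, 1000, 11)
lag = 0.5 * (bins[:-1] + bins[1:])
gamma = np.array([g[iu][(d[iu] >= lo) & (d[iu] < hi)].mean()
                  for lo, hi in zip(bins[:-1], bins[1:])])

(nug, sill, a), _ = curve_fit(exp_semivariogram, lag, gamma,
                              p0=[0.01, gamma.max(), 300.0])
print(f"nugget={nug:.3f}, sill={sill:.3f}, range parameter a={a:.0f} m")
```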
Deng, Jie; Fishbein, Mark H; Rigsby, Cynthia K; Zhang, Gang; Schoeneman, Samantha E; Donaldson, James S
2014-11-01
Non-alcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease in children. The gold standard for diagnosis is liver biopsy. MRI is a non-invasive imaging method that provides quantitative measurement of hepatic fat content. The methodology is particularly appealing for the pediatric population because of its rapidity and radiation-free imaging techniques. Our aim was to develop a multi-point Dixon MRI method with multi-interference models (multi-fat-peak modeling and bi-exponential T2* correction) for accurate hepatic fat fraction (FF) and T2* measurements in pediatric patients with NAFLD. A phantom study was first performed to validate the accuracy of the MRI fat fraction measurement by comparing it with the chemical fat composition of an ex-vivo pork liver-fat homogenate. The most accurate model determined from the phantom study was used for fat fraction and T2* measurements in 52 children and young adults referred from the pediatric hepatology clinic with suspected or identified NAFLD. Separate T2* values of the water (T2*W) and fat (T2*F) components derived from the bi-exponential fitting were evaluated and plotted as a function of fat fraction. In ten patients undergoing liver biopsy, we compared histological analysis of liver fat fraction with MRI fat fraction. In the phantom study the 6-point Dixon with 5-fat-peak, bi-exponential T2* modeling demonstrated the best precision and accuracy in fat fraction measurements compared with the other methods. This model was further calibrated with the chemical fat fraction and applied in patients, where, as in the phantom study, the conventional 2-point and 3-point Dixon methods underestimated fat fraction compared to the calibrated 6-point 5-fat-peak bi-exponential model (P < 0.0001). With increasing fat fraction, T2*W (27.9 ± 3.5 ms) decreased whereas T2*F (20.3 ± 5.5 ms) increased, and T2*W and T2*F became increasingly similar when fat fraction was higher than 15-20%. Histological fat fraction measurements in ten patients were highly correlated with calibrated MRI fat fraction measurements (Pearson correlation coefficient r = 0.90, P = 0.0004). Liver MRI using multi-point Dixon with multi-fat-peak and bi-exponential T2* modeling provided accurate fat quantification in children and young adults with non-alcoholic fatty liver disease and may be used to screen at-risk or affected individuals and to monitor disease progress noninvasively.
[Comparison among three translucency parameters].
Fang, Xiong; Hui, Xia
2017-06-01
This study aims to compare the three commonly used translucency parameters in prosthodontics: transmittance (T), contrast ratio (CR), and translucency parameter (TP). Six platelet specimens were composed of Vita enamel and dental porcelain. The initial thickness was 1.2 mm. The specimens were gradually ground to 1.0, 0.8, 0.6, 0.4, and 0.2 mm. T, color parameters, and reflection were measured by a spectrocolorimeter for each corresponding thickness. T, CR and TP were calculated and compared. TP increased, whereas CR decreased, with decreasing thickness. Moreover, T increased with decreasing thickness, and exponential relationships were found. Two-way ANOVA showed statistical significance between T and thickness, except between T and the 1.2 mm and 1.0 mm enamel porcelain groups. No difference was found among the coefficient variations (CV) of T, CR and TP. Curve fitting indicated the existence of exponential relationships between T and CR and between T and TP. The values for goodness of fit with statistical significance were 0.951 and 0.939, respectively (P<0.05). Under the experimental conditions, T, TP and CR achieved the same CV. T and TP, as well as T and CR, were found with exponential relationships. The value of CR and TP could not represent the translucency precisely, especially when comparing the changing ratios.
Long-term radio and X-ray evolution of the tidal disruption event ASASSN-14li
NASA Astrophysics Data System (ADS)
Bright, J. S.; Fender, R. P.; Motta, S. E.; Mooley, K.; Perrott, Y. C.; van Velzen, S.; Carey, S.; Hickish, J.; Razavi-Ghods, N.; Titterington, D.; Scott, P.; Grainge, K.; Scaife, A.; Cantwell, T.; Rumsey, C.
2018-04-01
We report on late-time radio and X-ray observations of the tidal disruption event candidate ASASSN-14li, covering the first 1000 d of the decay phase. For the first ~200 d the radio and X-ray emission fade in concert. This phase is better fitted by an exponential decay at X-ray wavelengths, while the radio emission is well described by either an exponential or the canonical t^(-5/3) decay assumed for tidal disruption events. The correlation between radio and X-ray emission during this period can be fitted as L_R ∝ L_X^(1.9 ± 0.2). After 400 d the radio emission at 15.5 GHz has reached a plateau level of 244 ± 8 μJy which it maintains for at least the next 600 d, while the X-ray emission continues to fade exponentially. This steady level of radio emission is likely due to relic radio lobes from the weak AGN-like activity implied by historical radio observations. We note that while most existing models are based upon the evolution of ejecta which are decoupled from the central black hole, the radio-X-ray correlation during the declining phase is also consistent with core-jet emission coupled to a radiatively efficient accretion flow.
From Experiment to Theory: What Can We Learn from Growth Curves?
Kareva, Irina; Karev, Georgy
2018-01-01
Finding an appropriate functional form to describe population growth based on key properties of a described system allows making justified predictions about future population development. This information can be of vital importance in all areas of research, ranging from cell growth to global demography. Here, we use this connection between theory and observation to pose the following question: what can we infer about intrinsic properties of a population (i.e., degree of heterogeneity, or dependence on external resources) based on which growth function best fits its growth dynamics? We investigate several nonstandard classes of multi-phase growth curves that capture different stages of population growth; these models include hyperbolic-exponential, exponential-linear, and exponential-linear-saturation growth patterns. The constructed models account explicitly for the process of natural selection within inhomogeneous populations. Based on the underlying hypotheses for each of the models, we identify whether the population that is best fit by a particular curve is more likely to be homogeneous or heterogeneous, grow in a density-dependent or frequency-dependent manner, and whether it depends on external resources during any or all stages of its development. We apply these predictions to cancer cell growth and demographic data obtained from the literature. Our theory, if confirmed, can provide an additional biomarker and a predictive tool to complement experimental research.
Zhang, Ling
2017-01-01
The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method for semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation converges to the analytic solution of the SLSDDE with strong order [Formula: see text]. On the one hand, the classical stability theorem for SLSDDEs is given by Lyapunov functions; in this paper, however, we study the exponential stability in mean square of the exact solution to SLSDDEs by using the definition of the logarithmic norm. On the other hand, the implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size; in this article we show that the explicit exponential Euler method for SLSDDEs shares the same stability for any step size, by the property of the logarithmic norm.
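A minimal sketch of one common form of the exponential Euler scheme for a scalar SLSDDE, in which the linear part is integrated exactly over each step; the test equation, coefficients, and history function below are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Scalar semi-linear stochastic delay differential equation (SLSDDE):
#   dX(t) = [a*X(t) + f(X(t - tau))] dt + g(X(t - tau)) dW(t)
a, tau = -2.0, 1.0
f = lambda u: 0.5 * np.sin(u)        # delayed drift term (illustrative)
g = lambda u: 0.1 * u                # delayed diffusion term (illustrative)

dt = 0.01
m = int(round(tau / dt))             # delay measured in steps
n_steps = 2000
rng = np.random.default_rng(5)

X = np.empty(n_steps + 1)
X[0] = 1.0
history = lambda t: 1.0              # initial function on [-tau, 0]

for n in range(n_steps):
    x_delayed = history(n * dt - tau) if n < m else X[n - m]
    dW = rng.normal(0.0, np.sqrt(dt))
    # Exponential Euler: the linear part exp(a*dt) is applied exactly.
    X[n + 1] = np.exp(a * dt) * (X[n] + f(x_delayed) * dt + g(x_delayed) * dW)

print("X(T) =", X[-1])
```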
Zhao, Ming; Li, Yu; Peng, Leilei
2014-01-01
We report a fast non-iterative lifetime data analysis method for the Fourier multiplexed frequency-sweeping confocal FLIM (Fm-FLIM) system [Opt. Express 22, 10221 (2014)]. The new method, named the R-method, allows fast multi-channel lifetime image analysis in the system's FPGA data processing board. Experimental tests proved that the performance of the R-method is equivalent to that of single-exponential iterative fitting, and its sensitivity is well suited for time-lapse FLIM-FRET imaging of live cells, for example cyclic adenosine monophosphate (cAMP) level imaging with GFP-Epac-mCherry sensors. With the R-method and its FPGA implementation, multi-channel lifetime images can now be generated in real time on the multi-channel frequency-sweeping FLIM system, and live readout of FRET sensors can be performed during time-lapse imaging. PMID:25321778
Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka
2016-01-01
Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea
2015-08-01
Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.
Trajectory prediction of saccadic eye movements using a compressed exponential model
Han, Peng; Saunders, Daniel R.; Woods, Russell L.; Luo, Gang
2013-01-01
Gaze-contingent display paradigms play an important role in vision research. The time delay due to data transmission from eye tracker to monitor may lead to a misalignment between the gaze direction and image manipulation during eye movements, and therefore compromise the contingency. We present a method to reduce this misalignment by using a compressed exponential function to model the trajectories of saccadic eye movements. Our algorithm was evaluated using experimental data from 1,212 saccades ranging from 3° to 30°, which were collected with an EyeLink 1000 and a Dual-Purkinje Image (DPI) eye tracker. The model fits eye displacement with a high agreement (R2 > 0.96). When assuming a 10-millisecond time delay, prediction of 2D saccade trajectories using our model could reduce the misalignment by 30% to 60% with the EyeLink tracker and 20% to 40% with the DPI tracker for saccades larger than 8°. Because a certain number of samples are required for model fitting, the prediction did not offer improvement for most small saccades and the early stages of large saccades. Evaluation was also performed for a simulated 100-Hz gaze-contingent display using the prerecorded saccade data. With prediction, the percentage of misalignment larger than 2° dropped from 45% to 20% for EyeLink and 42% to 26% for DPI data. These results suggest that the saccade-prediction algorithm may help create more accurate gaze-contingent displays. PMID:23902753
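A sketch of the idea: fit a compressed exponential displacement model to the early samples of a saccade and extrapolate a few milliseconds ahead; the saccade parameters and latency figure below are invented for the illustration, not the paper's values.

```python
import numpy as np
from scipy.optimize import curve_fit

def compressed_exp(t, A, tau, beta):
    """Compressed exponential displacement model (beta > 1)."""
    return A * (1.0 - np.exp(-(t / tau) ** beta))

# Synthetic 20-degree saccade sampled at 1 kHz (illustrative parameters).
t = np.arange(0, 0.08, 0.001)                       # seconds
rng = np.random.default_rng(6)
meas = compressed_exp(t, 20.0, 0.03, 2.0) + rng.normal(0, 0.1, t.size)

# Fit only the first 40 samples, as if the saccade were still in flight...
n_fit = 40
popt, _ = curve_fit(compressed_exp, t[:n_fit], meas[:n_fit],
                    p0=[15.0, 0.02, 1.5],
                    bounds=([1.0, 0.005, 1.0], [60.0, 0.2, 5.0]))

# ...then predict the eye position 10 ms ahead to compensate display latency.
t_pred = t[n_fit - 1] + 0.010
print(f"predicted at {t_pred*1000:.0f} ms: {compressed_exp(t_pred, *popt):.2f} deg")
print(f"noise-free truth:        {compressed_exp(t_pred, 20.0, 0.03, 2.0):.2f} deg")
```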
Rochon, Justine; Kieser, Meinhard
2011-11-01
Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2) ) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
NASA Astrophysics Data System (ADS)
Medich, David Christopher
1997-09-01
The biokinetics of Iodophenylpentadecanoic acid (123I-IPPA) during a chronic period of myocardial infarction were determined and compared to 201Tl. IPPA was assessed as a perfusion and metabolic tracer in the scintigraphic diagnosis of coronary artery disease. The myocardial clearance kinetics were measured by placing a series of thermoluminescent dosimeters (TLDs) on normal and infarcted tissue to measure the local myocardial activity content over time. The arterial blood pool activity was fit to a bi-exponential function for 201Tl and a tri-exponential function for 123I-IPPA to estimate the left ventricle contribution to TLD response. At equilibrium, the blood pool contribution was estimated experimentally to be less than 5% of the total TLD response. The method was unable to resolve the initial uptake of the imaging agent due in part to the 2 minute TLD response integration time and in part to the 30 second lag time for the first TLD placement. A noticeable disparity was observed between the tracer concentrations of IPPA in normal and ischemic tissue of approximately 2:1. The fitting parameters (representing the biokinetic eigenvalue rate constants) were related to the fundamental rate constants of a recycling biokinetic model. The myocardial IPPA content within normal tissue was elevated after approximately 130 minutes post injection. This phenomenon was observed in all but one (950215) of the IPPA TLD kinetics curves.
Plasma Heating in Solar Microflares: Statistics and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirichenko, A. S.; Bogachev, S. A.
2017-05-01
In this paper we present the results of an analysis of 481 weak solar flares, from A0.01 class flares to the B GOES class, that were observed during the period of extremely low solar activity from 2009 April to July. For all flares we measured the temperature of the plasma in the isothermal and two-temperature approximations and tried to fit its relationship with the X-ray class using exponential and power-law functions. We found that the whole temperature distribution in the range from A0.01 to X-class cannot be fit by one exponential function. The fitting for weak flares below A1.0 is significantly steeper than that for medium and large flares. The power-law approximation seems to be more reliable: the corresponding functions were found to be in good agreement with experimental data both for microflares and for normal flares. Our study predicts that evidence of plasma heating can be found in flares starting from the A0.0002 X-ray class. Weaker events presumably cannot heat the surrounding plasma. We also estimated emission measures for all flares studied and the thermal energy for 113 events.
Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric
2010-01-01
It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters, θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
The generalized truncated exponential distribution as a model for earthquake magnitudes
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-04-01
The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for modelling magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except the upper bound magnitude, are mixed, the resulting distribution is not a TED. Conversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed, and an example with empirical data is presented in the current contribution.
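For reference, a minimal sketch of the ordinary TED that the GTED generalizes, with a maximum-likelihood fit of its decay parameter; the functional form follows the standard truncated Gutenberg-Richter model, and the GTED itself is not reproduced here.

```python
# Truncated exponential distribution (TED) for magnitudes on [m0, m_max]
# and a bounded maximum-likelihood estimate of its decay parameter beta.
import numpy as np
from scipy.optimize import minimize_scalar

def ted_pdf(m, beta, m0, m_max):
    norm = 1.0 - np.exp(-beta * (m_max - m0))
    return beta * np.exp(-beta * (m - m0)) / norm

def fit_beta(mags, m0, m_max):
    nll = lambda b: -np.sum(np.log(ted_pdf(mags, b, m0, m_max)))
    return minimize_scalar(nll, bounds=(0.01, 10.0), method="bounded").x
```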
A review of the matrix-exponential formalism in radiative transfer
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian
2017-07-01
This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
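The eigendecomposition route that the review describes can be sketched in a few lines and checked against a Padé-based routine; the layer matrix below is an arbitrary stand-in, not a radiative transfer operator from the paper.

```python
# exp(A*z) via eigendecomposition, compared with SciPy's Pade-based expm.
import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, 1.0], [0.5, -1.0]])   # illustrative layer matrix
z = 0.3                                     # layer optical depth

w, V = np.linalg.eig(A)                     # A = V diag(w) V^-1
expAz_eig = (V * np.exp(w * z)) @ np.linalg.inv(V)
assert np.allclose(expAz_eig, expm(A * z))
```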
Vibrational energies for HFCO using a neural network sum of exponentials potential energy surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pradhan, Ekadashi; Brown, Alex, E-mail: alex.brown@ualberta.ca
2016-05-07
A six-dimensional potential energy surface (PES) for formyl fluoride (HFCO) is fit in a sum-of-products form using neural network exponential fitting functions. The ab initio data upon which the fit is based were computed at the explicitly correlated coupled cluster with single, double, and perturbative triple excitations [CCSD(T)-F12]/cc-pVTZ-F12 level of theory. The PES fit is accurate (RMSE = 10 cm⁻¹) up to 10 000 cm⁻¹ above the zero point energy and covers most of the experimentally measured IR data. The PES is validated by computing vibrational energies for both HFCO and deuterated formyl fluoride (DFCO) using block improved relaxation with the multi-configuration time dependent Hartree approach. The frequencies of the fundamental modes, and all other vibrational states up to 5000 cm⁻¹ above the zero-point energy, are more accurate than those obtained from the previous MP2-based PES. The vibrational frequencies obtained on the PES are compared to anharmonic frequencies at the MP2/aug-cc-pVTZ and CCSD(T)/aug-cc-pVTZ levels of theory obtained using second-order vibrational perturbation theory. The new PES will be useful for quantum dynamics simulations for both HFCO and DFCO, e.g., studies of intramolecular vibrational redistribution leading to unimolecular dissociation and its laser control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werhahn, Jasper C.; Akase, Dai; Xantheas, Sotiris S.
2014-08-14
The scaled versions of the newly introduced [S. S. Xantheas and J. C. Werhahn, J. Chem. Phys. 141, 064117 (2014)] generalized forms of some popular potential energy functions (PEFs) describing intermolecular interactions – Mie, Lennard-Jones, Morse, and Buckingham exponential-6 – have been used to fit the ab initio relaxed approach paths and fixed approach paths for the halide-water, X⁻(H₂O), X = F, Cl, Br, I, and alkali metal-water, M⁺(H₂O), M = Li, Na, K, Rb, Cs, interactions. The generalized forms of those PEFs have an additional parameter with respect to the original forms and produce fits to the ab initio data that are between one and two orders of magnitude better in the χ² than the original PEFs. They were found to describe the long-range region, the minimum, and the repulsive wall of the respective potential energy surfaces quite accurately. Overall, the 4-parameter extended Morse (eM) and generalized Buckingham exponential-6 (gBe-6) potentials were found to best fit the ab initio data for these two classes of ion-water interactions. Finally, the fitted values of the parameter of the (eM) and (gBe-6) PEFs that control the repulsive wall of the potential correlate remarkably well with the ionic radii of the halide and alkali metal ions.
Real-time interferometric diagnostics of rubidium plasma
NASA Astrophysics Data System (ADS)
Djotyan, G. P.; Bakos, J. S.; Kedves, M. Á.; Ráczkevi, B.; Dzsotjan, D.; Varga-Umbrich, K.; Sörlei, Zs.; Szigeti, J.; Ignácz, P.; Lévai, P.; Czitrovszky, A.; Nagy, A.; Dombi, P.; Rácz, P.
2018-03-01
A method of interferometric real-time diagnostics is developed and applied to rubidium plasma created by strong laser pulses in the femtosecond duration range at different initial rubidium vapor densities using a Michelson-type interferometer. A cosine fit with an exponentially decaying relative phase is applied to the obtained time-dependent interferometry signals to measure the density-length product of the created plasma and its recombination time constant. The presented technique may be applicable for real-time measurements of rubidium plasma dynamics in the AWAKE experiment at CERN, as well as for real-time diagnostics of plasmas created in different gaseous media and on surfaces of solid targets.
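A sketch of the kind of fit described, a cosine whose relative phase decays exponentially, is given below; the functional form and all parameter values are illustrative assumptions rather than the authors' exact model.

```python
# Cosine with exponentially decaying relative phase, fitted to a
# synthetic interferometry trace.
import numpy as np
from scipy.optimize import curve_fit

def fringe(t, A, C, phi0, dphi, tau):
    return C + A * np.cos(phi0 + dphi * np.exp(-t / tau))

t = np.linspace(0.0, 50e-6, 2000)                    # seconds
y = fringe(t, 1.0, 0.2, 0.3, 4.0, 8e-6)
y += np.random.default_rng(2).normal(0, 0.02, t.size)

p0 = (1.0, 0.0, 0.0, 3.0, 1e-5)
(A, C, phi0, dphi, tau), _ = curve_fit(fringe, t, y, p0=p0)
# dphi tracks the density-length product; tau the recombination time.
```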
Ye, Jun
2016-01-01
An interval neutrosophic set (INS) is a subclass of a neutrosophic set and a generalization of an interval-valued intuitionistic fuzzy set; the characteristics of an INS are independently described by the interval numbers of its truth-membership, indeterminacy-membership, and falsity-membership degrees. However, the exponential parameters (weights) in all the existing exponential operational laws of INSs and the corresponding exponential aggregation operators are crisp values in interval neutrosophic decision making problems. As a supplement, this paper first introduces new exponential operational laws of INSs, where the bases are crisp values or interval numbers and the exponents are interval neutrosophic numbers (INNs), the basic elements of INSs. Then, we propose an interval neutrosophic weighted exponential aggregation (INWEA) operator and a dual interval neutrosophic weighted exponential aggregation (DINWEA) operator based on these exponential operational laws, and introduce comparative methods based on cosine measure functions for INNs and dual INNs. Further, we develop decision-making methods based on the INWEA and DINWEA operators. Finally, a practical example on the selection of global suppliers is provided to illustrate the applicability and rationality of the proposed methods.
Estimating time since infection in early homogeneous HIV-1 samples using a poisson model
2010-01-01
Background The occurrence of a genetic bottleneck in HIV sexual or mother-to-infant transmission has been well documented. This results in a majority of new infections being homogeneous, i.e., initiated by a single genetic strain. Early after infection, prior to the onset of the host immune response, the viral population grows exponentially. In this simple setting, an approach for estimating evolutionary and demographic parameters based on comparison of diversity measures is a feasible alternative to the existing Bayesian methods (e.g., BEAST), which are instead based on the simulation of genealogies. Results We have devised a web tool that analyzes genetic diversity in acutely infected HIV-1 patients by comparing it to a model of neutral growth. More specifically, we consider a homogeneous infection (i.e., initiated by a unique genetic strain) prior to the onset of host-induced selection, where we can assume a random accumulation of mutations. Previously, we have shown that such a model successfully describes about 80% of sexual HIV-1 transmissions provided the samples are drawn early enough in the infection. Violation of the model is an indicator of either heterogeneous infections or the initiation of selection. Conclusions When the underlying assumptions of our model (homogeneous infection prior to selection and fast exponential growth) are met, we are under a very particular scenario for which we can use a forward approach (instead of backwards in time as provided by coalescent methods). This allows for more computationally efficient methods to derive the time since the most recent common ancestor. Furthermore, the tool performs statistical tests on the Hamming distance frequency distribution, and outputs summary statistics (mean of the best fitting Poisson distribution, goodness of fit p-value, etc). The tool runs within minutes and can readily accommodate the tens of thousands of sequences generated through new ultradeep pyrosequencing technologies. The tool is available on the LANL website. PMID:20973976
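The core comparison the tool performs, matching the Hamming-distance histogram against a best-fitting Poisson distribution, can be sketched as follows; the toy distances and the simple chi-square check are illustrative, not the tool's exact statistics.

```python
# Fit a Poisson to pairwise Hamming distances and test goodness of fit.
import numpy as np
from scipy import stats

distances = np.array([0, 1, 1, 2, 0, 1, 3, 2, 1, 0, 2, 1])  # toy data
lam = distances.mean()                     # MLE of the Poisson mean

values, counts = np.unique(distances, return_counts=True)
expected = stats.poisson.pmf(values, lam) * distances.size
chi2 = ((counts - expected) ** 2 / expected).sum()
p_value = stats.chi2.sf(chi2, df=len(values) - 2)  # -1 for lam, -1 for total
```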
Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.
Chen, C W; Chen, D Z
2001-11-01
Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under the condition of increasing monotonicity. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
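One plausible reading of the exponential weight method is to parameterize the weights as exponentials so that they stay positive, which makes the network output monotonically increasing in its input; the sketch below illustrates that idea and is an assumption, not the authors' exact formulation.

```python
# Monotone one-hidden-layer network: weights exp(v) > 0 and an increasing
# activation guarantee a non-decreasing input-output map.
import numpy as np

def monotone_net(x, v1, b1, v2, b2):
    h = np.tanh(np.outer(x, np.exp(v1)) + b1)    # tanh is increasing
    return h @ np.exp(v2) + b2

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 50)
y = monotone_net(x, rng.normal(size=4), rng.normal(size=4),
                 rng.normal(size=4), 0.0)
assert np.all(np.diff(y) >= 0)                   # monotone by construction
```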
Self-consistent Bulge/Disk/Halo Galaxy Dynamical Modeling Using Integral Field Kinematics
NASA Astrophysics Data System (ADS)
Taranu, D. S.; Obreschkow, D.; Dubinski, J. J.; Fogarty, L. M. R.; van de Sande, J.; Catinella, B.; Cortese, L.; Moffett, A.; Robotham, A. S. G.; Allen, J. T.; Bland-Hawthorn, J.; Bryant, J. J.; Colless, M.; Croom, S. M.; D'Eugenio, F.; Davies, R. L.; Drinkwater, M. J.; Driver, S. P.; Goodwin, M.; Konstantopoulos, I. S.; Lawrence, J. S.; López-Sánchez, Á. R.; Lorente, N. P. F.; Medling, A. M.; Mould, J. R.; Owers, M. S.; Power, C.; Richards, S. N.; Tonini, C.
2017-11-01
We introduce a method for modeling disk galaxies designed to take full advantage of data from integral field spectroscopy (IFS). The method fits equilibrium models to simultaneously reproduce the surface brightness, rotation, and velocity dispersion profiles of a galaxy. The models are fully self-consistent 6D distribution functions for a galaxy with a Sérsic profile stellar bulge, exponential disk, and parametric dark-matter halo, generated by an updated version of GalactICS. By creating realistic flux-weighted maps of the kinematic moments (flux, mean velocity, and dispersion), we simultaneously fit photometric and spectroscopic data using both maximum-likelihood and Bayesian (MCMC) techniques. We apply the method to a GAMA spiral galaxy (G79635) with kinematics from the SAMI Galaxy Survey and deep g- and r-band photometry from the VST-KiDS survey, comparing parameter constraints with those from traditional 2D bulge-disk decomposition. Our method returns broadly consistent results for shared parameters while constraining the mass-to-light ratios of stellar components and reproducing the H I-inferred circular velocity well beyond the limits of the SAMI data. Although the method is tailored for fitting integral field kinematic data, it can use other dynamical constraints like central fiber dispersions and H I circular velocities, and is well-suited for modeling galaxies with a combination of deep imaging and H I and/or optical spectra (resolved or otherwise). Our implementation (MagRite) is computationally efficient and can generate well-resolved models and kinematic maps in under a minute on modern processors.
AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Sanjib; Bland-Hawthorn, Joss
2013-08-20
An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.
Mizuno, Ju; Mohri, Satoshi; Yokoyama, Takeshi; Otsuji, Mikiya; Arita, Hideko; Hanaoka, Kazuo
2017-02-01
Varying temperature affects cardiac systolic and diastolic function and the left ventricular (LV) pressure-time curve (PTC) waveform that includes information about LV inotropism and lusitropism. Our proposed half-logistic (h-L) time constants obtained by fitting using h-L functions for four segmental phases (Phases I-IV) in the isovolumic LV PTC are more useful indices for estimating LV inotropism and lusitropism during contraction and relaxation periods than the mono-exponential (m-E) time constants at normal temperature. In this study, we investigated whether the superiority of the goodness of h-L fits remained even at hypothermia and hyperthermia. Phases I-IV in the isovolumic LV PTCs in eight excised, cross-circulated canine hearts at 33, 36, and 38 °C were analyzed using h-L and m-E functions and the least-squares method. The h-L and m-E time constants for Phases I-IV significantly shortened with increasing temperature. Curve fitting using h-L functions was significantly better than that using m-E functions for Phases I-IV at all temperatures. Therefore, the superiority of the goodness of h-L fit vs. m-E fit remained at all temperatures. As LV inotropic and lusitropic indices, temperature-dependent h-L time constants could be more useful than m-E time constants for Phases I-IV.
Gasification Characteristics and Kinetics of Coke with Chlorine Addition
NASA Astrophysics Data System (ADS)
Wang, Cui; Zhang, Jianliang; Jiao, Kexin; Liu, Zhengjian; Chou, Kuochih
2017-10-01
The gasification process of metallurgical coke with 0, 1.122, 3.190, and 7.132 wt pct chlorine was investigated through the thermogravimetric method from ambient temperature to 1593 K (1320 °C) in a purified CO2 atmosphere. The variations in the characteristic temperatures indicated that the coke gasification process was catalyzed by the chlorine addition: Ti decreased gradually with increasing chlorine, while Tf and Tmax first decreased and then increased, though both followed an overall downward trend. The kinetic model of the chlorine-containing coke gasification was then obtained through determination of the average apparent activation energy, the optimal reaction model, and the pre-exponential factor. The average apparent activation energies were 182.962, 118.525, 139.632, and 111.953 kJ/mol, respectively, following the same decreasing trend as the temperature parameters obtained by the thermogravimetric method. This also demonstrated that the coke gasification process was catalyzed by chlorine. The optimal kinetic model to describe the gasification process of chlorine-containing coke was the Šesták-Berggren model identified using Málek's method, and the pre-exponential factors were 6.688 × 10⁵, 2.786 × 10³, 1.782 × 10⁴, and 1.324 × 10³ min⁻¹, respectively. The predictions of chlorine-containing coke gasification from the Šesták-Berggren model fitted the experimental data well.
[The analysis of sinusoidal modulated method used for measuring fluorescence lifetime].
Feng, Ying; Huang, Shi-hua
2007-12-01
This paper describes a system built with a sinusoidally modulated LED as the excitation source. This source was used to excite the sample Eu₂L′₃·nH₂O (L′ = C₄H₄O₄), and both the excitation light and the ⁵D₀-⁷F₂ emission of the Eu³⁺ ion were measured. The fluorescence lifetime, approximately 0.680 ms, was then obtained from the measured excitation and fluorescence waveforms by non-linear least-squares curve fitting based on the principle of phase-shift measurement of fluorescence lifetime. Data processing methods that account for the high-order harmonics in the modulation and for multi-exponential decay of the fluorescence, respectively, are discussed. A method of using a Fourier series expansion to correct the result is put forward, which both extends the applicability of the phase-shift method and yields a more exact result.
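The phase-shift principle the paper relies on reduces, for a mono-exponential decay, to tan φ = ωτ; the numbers below are illustrative, chosen only to land near the reported 0.680 ms lifetime.

```python
# Phase-shift lifetime estimate for a mono-exponential decay.
import numpy as np

f_mod = 100.0                      # modulation frequency, Hz (assumed)
omega = 2.0 * np.pi * f_mod
phi = 0.404                        # measured phase lag, rad (assumed)
tau = np.tan(phi) / omega          # lifetime in seconds
print(f"tau = {tau * 1e3:.3f} ms") # ~0.68 ms for these numbers
```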
Exponential integrators in time-dependent density-functional calculations
NASA Astrophysics Data System (ADS)
Kidd, Daniel; Covington, Cody; Varga, Kálmán
2017-12-01
The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy of the exponential integrator methods is less enhanced but still matches or outperforms the best of the conventional methods tested.
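As a minimal illustration of the integrator class being tested, the first-order exponential time differencing (ETD1) step for a split equation u' = cu + F(u) is shown below on a toy stiff ODE; this is generic textbook ETD, not the paper's Kohn-Sham implementation.

```python
# ETD1: treat the stiff linear part exactly, the nonlinear part explicitly.
import numpy as np

c, dt, steps = -50.0, 0.01, 500       # stiff linear coefficient, step, count
F = lambda u: np.sin(u)               # nonlinear remainder (illustrative)

u = 1.0
for _ in range(steps):
    # u_{n+1} = e^{c dt} u_n + (e^{c dt} - 1)/c * F(u_n)
    u = np.exp(c * dt) * u + (np.expm1(c * dt) / c) * F(u)
print(u)
```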
Makrinich, Maria; Gupta, Rupal; Polenova, Tatyana; Goldbourt, Amir
The ability of various pulse types, which are commonly applied for distance measurements, to saturate or invert quadrupolar spin polarization has been compared by observing their effect on magnetization recovery curves under magic-angle spinning. A selective central transition inversion pulse yields a bi-exponential recovery for a diamagnetic sample with a spin-3/2, consistent with the existence of two processes: the fluctuations of the electric field gradients with identical single (W₁) and double (W₂) quantum quadrupolar-driven relaxation rates, and spin exchange between the central transition of one spin and satellite transitions of a dipolar-coupled similar spin. Using a phase modulated pulse, developed for distance measurements in quadrupolar spins (Nimerovsky et al., JMR 244, 2014, 107-113) and suggested for achieving the complete saturation of all quadrupolar spin energy levels, a mono-exponential relaxation model fits the data, compatible with elimination of the spin exchange processes. Other pulses, such as an adiabatic pulse lasting one-third of a rotor period, and a two-rotor-period long continuous-wave pulse, both used for distance measurements under special experimental conditions, yield good fits to bi-exponential functions with varying coefficients and time constants due to variations in initial conditions. Those values are a measure of the extent of saturation obtained from these pulses. An empirical fit of the recovery curves to a stretched exponential function can provide general recovery times. A stretching parameter very close to unity, as obtained for a phase modulated pulse but not for other cases, suggests that in this case recovery times and longitudinal relaxation times are similar. The results are experimentally demonstrated for compounds containing ¹¹B (spin-3/2) and ⁵¹V (spin-7/2). We propose that accurate spin-lattice relaxation rates can be measured by a short phase modulated pulse (<1-2 ms), similarly to the "true T₁" measured by saturation with an asynchronous pulse train (Yesinowski, JMR 252, 2015, 135-144). Copyright © 2017 Elsevier Inc. All rights reserved.
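The stretched exponential fit mentioned at the end of the abstract can be sketched as follows; the recovery model M(t) = M0(1 - exp(-(t/T1)^beta)) is standard, while the data here are synthetic.

```python
# Fit a saturation-recovery curve with a stretched exponential; beta near 1
# suggests the recovery time approximates the true longitudinal T1.
import numpy as np
from scipy.optimize import curve_fit

def stretched(t, M0, T1, beta):
    return M0 * (1.0 - np.exp(-(t / T1) ** beta))

t = np.logspace(-3, 1, 20)                         # recovery delays, s
M = stretched(t, 1.0, 0.5, 0.95)
M += np.random.default_rng(4).normal(0, 0.01, t.size)
(M0, T1, beta), _ = curve_fit(stretched, t, M, p0=(1.0, 0.3, 1.0))
```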
An improved cyan fluorescent protein variant useful for FRET.
Rizzo, Mark A; Springer, Gerald H; Granada, Butch; Piston, David W
2004-04-01
Many genetically encoded biosensors use Förster resonance energy transfer (FRET) between fluorescent proteins to report biochemical phenomena in living cells. Most commonly, the enhanced cyan fluorescent protein (ECFP) is used as the donor fluorophore, coupled with one of several yellow fluorescent protein (YFP) variants as the acceptor. ECFP is used despite several spectroscopic disadvantages, namely a low quantum yield, a low extinction coefficient and a fluorescence lifetime that is best fit by a double exponential. To improve the characteristics of ECFP for FRET measurements, we used a site-directed mutagenesis approach to overcome these disadvantages. The resulting variant, which we named Cerulean (ECFP/S72A/Y145A/H148D), has a greatly improved quantum yield, a higher extinction coefficient and a fluorescence lifetime that is best fit by a single exponential. Cerulean is 2.5-fold brighter than ECFP and replacement of ECFP with Cerulean substantially improves the signal-to-noise ratio of a FRET-based sensor for glucokinase activation.
The topology of large Open Connectome networks for the human brain.
Gastner, Michael T; Ódor, Géza
2016-06-07
The structural human connectome (i.e. the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to nodes and edges. A three-parameter generalized Weibull (also known as a stretched exponential) distribution is a good fit to most of the observed degree distributions. For almost all networks, simple power laws cannot fit the data, but in some cases there is statistical support for power laws with an exponential cutoff. We also calculate the topological (graph) dimension D and the small-world coefficient σ of these networks. While σ suggests a small-world topology, we found that D < 4 showing that long-distance connections provide only a small correction to the topology of the embedding three-dimensional space.
The acquisition of conditioned responding.
Harris, Justin A
2011-04-01
This report analyzes the acquisition of conditioned responses in rats trained in a magazine approach paradigm. Following the suggestion by Gallistel, Fairhurst, and Balsam (2004), Weibull functions were fitted to the trial-by-trial response rates of individual rats. These showed that the emergence of responding was often delayed, after which the response rate would increase relatively gradually across trials. The fit of the Weibull function to the behavioral data of each rat was equaled by that of a cumulative exponential function incorporating a response threshold. Thus, the growth in conditioning strength on each trial can be modeled by the derivative of the exponential--a difference term of the form used in many models of associative learning (e.g., Rescorla & Wagner, 1972). Further analyses, comparing the acquisition of responding with a continuously reinforced stimulus (CRf) and a partially reinforced stimulus (PRf), provided further evidence in support of the difference term. In conclusion, the results are consistent with conventional models that describe learning as the growth of associative strength, incremented on each trial by an error-correction process.
Evidence for a scale-limited low-frequency earthquake source process
NASA Astrophysics Data System (ADS)
Chestler, S. R.; Creager, K. C.
2017-04-01
We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10¹⁰ to 1.9 × 10¹² N m (Mw = 0.7-2.1). Regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit decrease in Mw); in contrast, we find that the b value is 6 for large LFEs and <1 for small LFEs. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10¹¹ N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.
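For an exponential moment-frequency distribution, the characteristic moment is simply the sample mean, and the survival curve is log-linear; the quick numerical check below uses synthetic moments, not the LFE catalog.

```python
# Exponential moment-frequency check: ln N(>M) vs M is linear, slope -1/Mc.
import numpy as np

moments = np.random.default_rng(5).exponential(2.0e11, 30000)  # N m, toy
Mc = moments.mean()                        # characteristic moment estimate

M_sorted = np.sort(moments)
survival = 1.0 - np.arange(moments.size) / moments.size
slope = np.polyfit(M_sorted, np.log(survival), 1)[0]
print(Mc, -1.0 / slope)                    # the two should roughly agree
```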
Estimation of renal allograft half-life: fact or fiction?
Azancot, M Antonieta; Cantarell, Carme; Perelló, Manel; Torres, Irina B; Serón, Daniel; Moreso, Francesc; Arias, Manuel; Campistol, Josep M; Curto, Jordi; Hernandez, Domingo; Morales, José M; Sanchez-Fructuoso, Ana; Abraira, Victor
2011-09-01
Renal allograft half-life time (t½) is the most straightforward representation of long-term graft survival. Since some statistical models overestimate this parameter, we compare different approaches to evaluate t½. Patients with a 1-year functioning graft transplanted in Spain during 1990, 1994, 1998 and 2002 were included. Exponential, Weibull, gamma, lognormal and log-logistic models censoring the last year of follow-up were evaluated. The goodness of fit of these models was evaluated according to the Cox-Snell residuals and the Akaike's information criterion (AIC) was employed to compare these models. We included 4842 patients. Real t½ in 1990 was 14.2 years. Median t½ (95% confidence interval) in 1990 and 2002 was 15.8 (14.2-17.5) versus 52.6 (35.6-69.5) according to the exponential model (P < 0.001). No differences between 1990 and 2002 were observed when t½ was estimated with the other models. In 1990 and 2002, t½ was 14.0 (13.1-15.0) versus 18.0 (13.7-22.4) according to Weibull, 15.5 (13.9-17.1) versus 19.1 (15.6-22.6) according to gamma, 14.4 (13.3-15.6) versus 18.3 (14.2-22.3) according to the log-logistic and 15.2 (13.8-16.6) versus 18.8 (15.3-22.3) according to the lognormal models. The AIC confirmed that the exponential model had the lowest goodness of fit, while the other models yielded a similar result. The exponential model overestimates t½, especially in cohorts of patients with a short follow-up, while any of the other studied models allow a better estimation even in cohorts with short follow-up.
Coronal loop seismology using damping of standing kink oscillations by mode coupling
NASA Astrophysics Data System (ADS)
Pascoe, D. J.; Goddard, C. R.; Nisticò, G.; Anfinogentov, S.; Nakariakov, V. M.
2016-05-01
Context. Kink oscillations of solar coronal loops are frequently observed to be strongly damped. The damping can be explained by mode coupling on the condition that loops have a finite inhomogeneous layer between the higher density core and lower density background. The damping rate depends on the loop density contrast ratio and inhomogeneous layer width. Aims: The theoretical description for mode coupling of kink waves has been extended to include the initial Gaussian damping regime in addition to the exponential asymptotic state. Observation of these damping regimes would provide information about the structuring of the coronal loop and so provide a seismological tool. Methods: We consider three examples of standing kink oscillations observed by the Atmospheric Imaging Assembly (AIA) of the Solar Dynamics Observatory (SDO) for which the general damping profile (Gaussian and exponential regimes) can be fitted. Determining the Gaussian and exponential damping times allows us to perform seismological inversions for the loop density contrast ratio and the inhomogeneous layer width normalised to the loop radius. The layer width and loop minor radius are found separately by comparing the observed loop intensity profile with forward modelling based on our seismological results. Results: The seismological method which allows the density contrast ratio and inhomogeneous layer width to be simultaneously determined from the kink mode damping profile has been applied to observational data for the first time. This allows the internal and external Alfvén speeds to be calculated, and estimates for the magnetic field strength can be dramatically improved using the given plasma density. Conclusions: The kink mode damping rate can be used as a powerful diagnostic tool to determine the coronal loop density profile. This information can be used for further calculations such as the magnetic field strength or phase mixing rate.
Hargrove, James L; Heinz, Grete; Heinz, Otto
2008-01-01
Background This study evaluated whether the changes in several anthropometric and functional measures during caloric restriction combined with walking and treadmill exercise would fit a simple model of approach to steady state (a plateau) that can be solved using spreadsheet software (Microsoft Excel®). We hypothesized that transitions in waist girth and several body compartments would fit a simple exponential model that approaches a stable steady state. Methods The model (an equation) was applied to outcomes reported in the Minnesota starvation experiment using Microsoft Excel's Solver® function to derive rate parameters (k) and projected steady-state values. However, data for most end-points were available only at t = 0, 12 and 24 weeks of caloric restriction. Therefore, we derived 2 new equations that enable model solutions to be calculated from 3 equally spaced data points. Results For the group of male subjects in the Minnesota study, body mass declined with a first order rate constant of about 0.079 wk⁻¹. The fractional rate of loss of fat free mass, which includes components that remained almost constant during starvation, was 0.064 wk⁻¹, compared to a rate of loss of fat mass of 0.103 wk⁻¹. The rate of loss of abdominal fat, as exemplified by the change in the waist girth, was 0.213 wk⁻¹. On average, 0.77 kg was lost per cm of waist girth. Other girths showed rates of loss between 0.085 and 0.131 wk⁻¹. Resting energy expenditure (REE) declined at 0.131 wk⁻¹. Changes in heart volume, hand strength, work capacity and N excretion showed rates of loss in the same range. The group of 32 subjects was close to steady state or had already reached steady state for the variables under consideration at the end of semi-starvation. Conclusion When energy intake is changed to new, relatively constant levels, while physical activity is maintained, changes in several anthropometric and physiological measures can be modeled as an exponential approach to steady state using software that is widely available. The 3 point method for parameter estimation provides a criterion for testing whether change in a variable can be usefully modelled with exponential kinetics within the time range for which data are available. PMID:18840293
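The closed-form three-point solution for an exponential approach to steady state follows from eliminating the amplitude between three equally spaced observations; the algebra below is standard, and the sample numbers are illustrative rather than taken from the paper.

```python
# y(t) = A + B*exp(-k*t) observed at t = 0, dt, 2*dt:
#   k = ln((y0 - y1)/(y1 - y2)) / dt
#   A = (y0*y2 - y1**2) / (y0 + y2 - 2*y1)    (projected plateau)
import numpy as np

def three_point_exponential(y0, y1, y2, dt):
    k = np.log((y0 - y1) / (y1 - y2)) / dt
    A = (y0 * y2 - y1 ** 2) / (y0 + y2 - 2.0 * y1)
    return k, A

# Illustrative body-mass readings at 0, 12 and 24 weeks.
k, plateau = three_point_exponential(69.4, 58.5, 52.6, 12.0)
print(f"k = {k:.3f} wk^-1, projected steady state = {plateau:.1f} kg")
```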
Pendulum Mass Affects the Measurement of Articular Friction Coefficient
Akelman, Matthew R.; Teeple, Erin; Machan, Jason T.; Crisco, Joseph J.; Jay, Gregory D.; Fleming, Braden C.
2012-01-01
Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton’s equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines if articular pendulum energy loss is indeed mass independent, and compares Stanton’s model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n = 4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton’s equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent. PMID:23122223
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werhahn, Jasper C.; Miliordos, Evangelos; Xantheas, Sotiris S.
2015-01-05
We introduce new generalized (reverting to the original) and extended (not reverting to the original) 4-parameter forms of the (B-2) Potential Energy Function (PEF) of Wang et al. (L.-P. Wang, J. Chen and T. van Voorhis, J. Chem. Theor. Comp. 9, 452 (2013)), which is itself a modification of the Buckingham exponential-6 PEF. The new forms have a tunable, singularity-free short-range repulsion and an adjustable long-range attraction. They produce fits to high quality ab initio data for the X⁻(H₂O), X = F, Cl, Br, I and M⁺(H₂O), M = Li, Na, K, Rb, Cs dimers that are between 1 and 2 orders of magnitude better than the original 3-parameter (B-2) and modified Buckingham exponential-6 PEFs. They are also slightly better than the 4-parameter generalized Buckingham exponential-6 (gBe-6) and of comparable quality with the 4-parameter extended Morse (eM) PEFs introduced recently by us.
Extended q-Gaussian and q-exponential distributions from gamma random variables
NASA Astrophysics Data System (ADS)
Budini, Adrián A.
2015-05-01
The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
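A closely related and well-known construction, not the paper's two-gamma representation, obtains a q-exponential by letting the rate of an exponential be gamma distributed; the sketch below verifies this numerically.

```python
# Gamma-mixed exponential gives f(x) = k*theta*(1 + theta*x)^-(k+1),
# a q-exponential with 1/(q - 1) = k + 1, i.e. q = (k + 2)/(k + 1).
import numpy as np

rng = np.random.default_rng(6)
k, theta, n = 3.0, 1.0, 200000
rates = rng.gamma(shape=k, scale=theta, size=n)
x = rng.exponential(1.0 / rates)           # exponential with random rate

edges = np.linspace(0.0, 5.0, 101)
centers = 0.5 * (edges[:-1] + edges[1:])
pdf = k * theta * (1.0 + theta * centers) ** (-(k + 1.0))
hist, _ = np.histogram(x, bins=edges, density=True)
print(np.abs(hist - pdf).max())            # small sampling residual
```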
Year-round measurements of CH4 exchange in a forested drained peatland using automated chambers
NASA Astrophysics Data System (ADS)
Korkiakoski, Mika; Koskinen, Markku; Penttilä, Timo; Arffman, Pentti; Ojanen, Paavo; Minkkinen, Kari; Laurila, Tuomas; Lohila, Annalea
2016-04-01
Pristine peatlands are usually carbon accumulating ecosystems and sources of methane (CH4). Draining peatlands for forestry increases the thickness of the oxic layer, thus enhancing CH4 oxidation, which leads to decreased CH4 emissions. Closed chambers are commonly used in estimating the greenhouse gas exchange between the soil and the atmosphere. However, the closed chamber technique alters the gas concentration gradient, making the concentration development against time non-linear. Selecting the correct fitting method is important, as it can be the largest source of uncertainty in flux calculation. We measured CH4 exchange rates and their diurnal and seasonal variations in a nutrient-rich drained peatland located in southern Finland. The original fen was drained for forestry in the 1970s and the tree stand is now a mixture of Scots pine, Norway spruce and Downy birch. Our system consisted of six transparent polycarbonate chambers and stainless steel frames, positioned on different types of field and moss layer. During winter, the frame was raised above the snowpack with extension collars and the height of the snowpack inside the chamber was measured regularly. The chambers were closed hourly and the sample gas was drawn into a cavity ring-down spectrometer and analysed for CH4, CO2 and H2O concentrations with 5 second time resolution. The concentration change over time in the beginning of a closure was determined with linear and exponential fits. The results show that linear regression systematically underestimated the CH4 flux by 20-50% compared with exponential regression. On the other hand, the exponential regression did not work reliably with small fluxes (< 3.5 μg CH4 m⁻² h⁻¹): using exponential regression in such cases typically resulted in anomalously large fluxes and high deviation. Due to these facts, we recommend first calculating the flux with the linear regression and, if the flux is high enough, recalculating it with the exponential regression and using this value in later analysis. The forest floor at the site (including the ground vegetation) acted as a CH4 sink most of the time. CH4 emission peaks were occasionally observed, particularly in spring during the snow melt and during rainfall events in summer. Diurnal variation was observed mainly in summer. The net CH4 exchange for the two year measurement period in the six chambers varied from -31 to -155 mg CH4 m⁻² yr⁻¹, the average being -67 mg CH4 m⁻² yr⁻¹. However, this does not include the ditches, which typically act as a significant source of CH4.
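The linear-versus-exponential flux comparison can be sketched as follows: when the concentration relaxes toward an equilibrium during the closure, a straight-line fit underestimates the initial slope. The saturation model and numbers are illustrative, not the site data.

```python
# Chamber closure: C(t) = Ceq + (C0 - Ceq)*exp(-k*t); the flux is
# proportional to the initial slope dC/dt(0) = -k*(C0 - Ceq).
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 300.0, 60)                      # closure time, s
C = 1900.0 + (2000.0 - 1900.0) * np.exp(-t / 120.0)  # synthetic CH4, ppb

slope_lin = np.polyfit(t, C, 1)[0]                   # linear estimate

model = lambda t, Ceq, C0, k: Ceq + (C0 - Ceq) * np.exp(-k * t)
(Ceq, C0, k), _ = curve_fit(model, t, C, p0=(1800.0, 2000.0, 0.01))
slope_exp = -k * (C0 - Ceq)                          # exponential estimate
print(slope_lin / slope_exp)                         # < 1: linear is biased
```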
Properties of single NMDA receptor channels in human dentate gyrus granule cells
Lieberman, David N; Mody, Istvan
1999-01-01
Cell-attached single-channel recordings of NMDA channels were carried out in human dentate gyrus granule cells acutely dissociated from slices prepared from hippocampi surgically removed for the treatment of temporal lobe epilepsy (TLE). The channels were activated by L-aspartate (250–500 nM) in the presence of saturating glycine (8 μM). The main conductance was 51 ± 3 pS. In ten of thirty granule cells, clear subconductance states were observed with a mean conductance of 42 ± 3 pS, representing 8 ± 2% of the total openings. The mean open times varied from cell to cell, possibly owing to differences in the epileptogenicity of the tissue of origin. The mean open time was 2.70 ± 0.95 ms (range, 1.24–4.78 ms). In 87% of the cells, three exponential components were required to fit the apparent open time distributions. In the remaining neurons, as in control rat granule cells, two exponentials were sufficient. Shut time distributions were fitted by five exponential components. The average numbers of openings in bursts (1.74 ± 0.09) and clusters (3.06 ± 0.26) were similar to values obtained in rodents. The mean burst (6.66 ± 0.9 ms), cluster (20.1 ± 3.3 ms) and supercluster lengths (116.7 ± 17.5 ms) were longer than those in control rat granule cells, but approached the values previously reported for TLE (kindled) rats. As in rat NMDA channels, adjacent open and shut intervals appeared to be inversely related to each other, but it was only the relative areas of the three open time constants that changed with adjacent shut time intervals. The long openings of human TLE NMDA channels resembled those produced by calcineurin inhibitors in control rat granule cells. Yet the calcineurin inhibitor FK-506 (500 nM) did not prolong the openings of human channels, consistent with a decreased calcineurin activity in human TLE. Many properties of the human NMDA channels resemble those recorded in rat hippocampal neurons. Both have similar slope conductances, five exponential shut time distributions, complex groupings of openings, and a comparable number of openings per grouping. Other properties of human TLE NMDA channels correspond to those observed in kindling; the openings are considerably long, requiring an additional exponential component to fit their distributions, and inhibition of calcineurin is without effect in prolonging the openings. PMID:10373689
Firing patterns in the adaptive exponential integrate-and-fire model.
Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram
2008-11-01
For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to experimental recordings of cortical neurons under step-current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
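The model's two equations are compact enough to state directly; the sketch below integrates them with forward Euler under a step current, using the commonly cited tonic-spiking parameter set (values are illustrative).

```python
# Adaptive exponential integrate-and-fire (AdEx) under a step current.
import numpy as np

C, gL, EL = 281.0, 30.0, -70.6        # pF, nS, mV
VT, DT = -50.4, 2.0                   # threshold and slope factor, mV
a, tau_w, b = 4.0, 144.0, 80.5        # nS, ms, pA
Vr, Vpeak, I = -70.6, 20.0, 800.0     # reset, cutoff (mV), input (pA)

dt, T = 0.1, 500.0                    # ms
V, w, spikes = EL, 0.0, []
for step in range(int(T / dt)):
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V, w = V + dt * dV, w + dt * dw
    if V >= Vpeak:                    # spike: reset voltage, jump adaptation
        V, w = Vr, w + b
        spikes.append(step * dt)
print(len(spikes), "spikes in", T, "ms")
```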
Cole-Davidson dynamics of simple chain models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dotson, Taylor C.; McCoy, John Dwane; Adolf, Douglas Brian
2008-10-01
Rotational relaxation functions of the end-to-end vector of short, freely jointed and freely rotating chains were determined from molecular dynamics simulations. The associated response functions were obtained from the one-sided Fourier transform of the relaxation functions. The Cole-Davidson function was used to fit the response functions, with extensive use being made of Cole-Cole plots in the fitting procedure. For the systems studied, the Cole-Davidson function provided remarkably accurate fits [as compared to the transform of the Kohlrausch-Williams-Watts (KWW) function]. The only appreciable deviations from the simulation results were in the high frequency limit and were due to ballistic or free rotation effects. The accuracy of the Cole-Davidson function appears to be the result of the transition in the time domain from stretched exponential behavior at intermediate time to single exponential behavior at long time. Such a transition can be explained in terms of a distribution of relaxation times with a well-defined longest relaxation time. Since the Cole-Davidson distribution has a sharp cutoff in relaxation time (while the KWW function does not), it makes sense that the Cole-Davidson would provide a better frequency-domain description of the associated response function than the KWW function does.
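For reference, the Cole-Davidson response function has the closed form χ(ω) = (1 + iωτ)^(-β), and a Cole-Cole plot is just its loss part against its storage part; the sketch below generates such a curve with illustrative parameters.

```python
# Cole-Davidson response and the data for a Cole-Cole plot.
import numpy as np

tau, beta = 1.0, 0.6                     # illustrative values
w = np.logspace(-3, 3, 400)
chi = (1.0 + 1j * w * tau) ** (-beta)

storage, loss = chi.real, -chi.imag      # chi', chi''
# Plotting loss vs storage gives the skewed arc characteristic of
# Cole-Davidson relaxation (a semicircle would indicate simple Debye).
```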
NASA Astrophysics Data System (ADS)
Yang, Jiefan; Lei, Hengchi
2016-02-01
Cloud microphysical properties of a mixed phase cloud generated by a typical extratropical cyclone in the Tongliao area, Inner Mongolia on 3 May 2014, are analyzed primarily using in situ flight observation data. This study mainly focuses on ice crystal concentration, supercooled cloud water content, and vertical distributions of the fit parameters of snow particle size distributions (PSDs). The results revealed several discrepancies between the microphysical properties obtained during the two penetrations. During penetration of precipitating cloud, the maximum ice particle concentration, liquid water content, and ice water content were larger by a factor of 2-3 than their counterparts obtained during penetration of a nonprecipitating cloud. The heavily rimed and irregular ice crystals recorded by the 2D imaging probe, as well as the vertical distributions of fitting parameters within precipitating cloud, show that the ice particles grow during falling via riming and aggregation, whereas the lightly rimed and pristine ice particles and the fitting parameters within non-precipitating cloud indicate that sublimation dominates. During the two cloud penetrations, the PSDs were generally better represented by gamma distributions than by the exponential form in terms of the coefficient of determination (R²). The correlations between parameters of the exponential/gamma forms within the two penetrations showed no obvious differences compared with previous studies.
Fry, John S; Lee, Peter N; Forey, Barbara A; Coombs, Katharine J
2013-10-01
The excess lung cancer risk from smoking declines with time quit, but the shape of the decline has never been precisely modelled, or meta-analyzed. From a database of studies of at least 100 cases, we extracted 106 blocks of RRs (from 85 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls (or at-risk) formed the data for fitting the negative exponential model. We estimated the half-life (H, time in years when the excess risk becomes half that for a continuing smoker) for each block, investigated model fit, and studied heterogeneity in H. We also conducted sensitivity analyses allowing for reverse causation, either ignoring short-term quitters (S1) or considering them smokers (S2). Model fit was poor ignoring reverse causation, but much improved for both sensitivity analyses. Estimates of H were similar for all three analyses. For the best-fitting analysis (S1), H was 9.93 (95% CI 9.31-10.60), but varied by sex (females 7.92, males 10.71), and age (<50years 6.98, 70+years 12.99). Given that reverse causation is taken account of, the model adequately describes the decline in excess risk. However, estimates of H may be biased by factors including misclassification of smoking status. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
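The negative exponential model referred to has a simple closed form in terms of the half-life; the sketch below evaluates it with the paper's overall estimate of H, while the assumed current-smoker relative risk of 20 is purely illustrative.

```python
# Excess relative risk declining with time quit, half-life H:
#   RR(t) = 1 + (RR_smoker - 1) * 2**(-t / H)
import numpy as np

def rr_former(t_quit, rr_smoker, H):
    return 1.0 + (rr_smoker - 1.0) * 2.0 ** (-t_quit / H)

H = 9.93                                  # years (paper's estimate)
print(rr_former(np.array([0.0, 9.93, 30.0]), 20.0, H))
# -> [20.0, 10.5, 3.3]: the excess halves every ~10 years of abstinence
```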
Beyond the power law: Uncovering stylized facts in interbank networks
NASA Astrophysics Data System (ADS)
Vandermarliere, Benjamin; Karas, Alexei; Ryckebusch, Jan; Schoors, Koen
2015-06-01
We use daily data on bilateral interbank exposures and monthly bank balance sheets to study network characteristics of the Russian interbank market over August 1998-October 2004. Specifically, we examine the distributions of (un)directed (un)weighted degree, nodal attributes (bank assets, capital and capital-to-assets ratio) and edge weights (loan size and counterparty exposure). We search for the theoretical distribution that fits the data best and report the "best" fit parameters. We observe that all studied distributions are heavy tailed. The fat tail typically contains 20% of the data and can be mostly described well by a truncated power law. Also the power law, stretched exponential and log-normal provide reasonably good fits to the tails of the data. In most cases, however, separating the bulk and tail parts of the data is hard, so we proceed to study the full range of the events. We find that the stretched exponential and the log-normal distributions fit the full range of the data best. These conclusions are robust to (1) whether we aggregate the data over a week, month, quarter or year; (2) whether we look at the "growth" versus "maturity" phases of interbank market development; and (3) with minor exceptions, whether we look at the "normal" versus "crisis" operation periods. In line with prior research, we find that the network topology changes greatly as the interbank market moves from a "normal" to a "crisis" operation period.
Talbot, Clifford B; Lagarto, João; Warren, Sean; Neil, Mark A A; French, Paul M W; Dunsby, Chris
2015-09-01
A correction is proposed to the Delta function convolution method (DFCM) for fitting a multiexponential decay model to time-resolved fluorescence decay data using a monoexponential reference fluorophore. A theoretical analysis of the discretised DFCM multiexponential decay function shows the presence of an extra exponential decay term with the same lifetime as the reference fluorophore, which we denote the residual reference component. This extra decay component arises as a result of the discretised convolution of one of the two terms in the modified model function required by the DFCM. The effect of the residual reference component becomes more pronounced when the fluorescence lifetime of the reference is longer than all of the individual components of the specimen under inspection and when the temporal sampling interval is not negligible compared to the quantity (τR⁻¹ - τ⁻¹)⁻¹, where τR and τ are the fluorescence lifetimes of the reference and the specimen respectively. It is shown that the unwanted residual reference component results in systematic errors when fitting simulated data and that these errors are not present when the proposed correction is applied. The correction is also verified using real data obtained from experiment.
Revisiting Gaussian Process Regression Modeling for Localization in Wireless Sensor Networks
Richter, Philipp; Toledano-Ayala, Manuel
2015-01-01
Signal strength-based positioning in wireless sensor networks is a key technology for seamless, ubiquitous localization, especially in areas where Global Navigation Satellite System (GNSS) signals propagate poorly. To enable wireless local area network (WLAN) location fingerprinting in larger areas while maintaining accuracy, methods to reduce the effort of radio map creation must be consolidated and automated. Gaussian process regression has been applied to overcome this issue, with auspicious results, but the fit of the model was never thoroughly assessed. Instead, most studies trained a readily available model, relying on the zero mean and squared exponential covariance function, without further scrutiny. This paper studies Gaussian process regression model selection for WLAN fingerprinting in indoor and outdoor environments. We train several models for indoor, outdoor, and combined areas; we evaluate them quantitatively and compare them by means of adequate model measures, hence assessing the fit of these models directly. To illuminate the quality of the model fit, the residuals of the proposed model are investigated as well. Comparative experiments on the positioning performance verify and conclude the model selection. In this way, we show that the standard model is not the most appropriate, discuss alternatives and present our best candidate. PMID:26370996
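A compact version of such a model comparison, scoring a squared-exponential (RBF) kernel against a Matérn alternative by log marginal likelihood, might look as follows; the synthetic path-loss data stand in for real WLAN fingerprints, and scikit-learn is assumed.

```python
# Compare GP kernels for received-signal-strength regression.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, WhiteKernel

rng = np.random.default_rng(7)
X = rng.uniform(0.0, 50.0, size=(200, 2))          # positions, m
d = np.linalg.norm(X - 25.0, axis=1) + 1.0         # distance to an AP
y = -40.0 - 20.0 * np.log10(d)                     # log-distance path loss
y += rng.normal(0.0, 2.0, y.size)                  # shadowing, dB

for kernel in (RBF() + WhiteKernel(), Matern(nu=1.5) + WhiteKernel()):
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    print(kernel, gp.log_marginal_likelihood_value_)
```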
Method and Apparatus for Evaluating Multilayer Objects for Imperfections
NASA Technical Reports Server (NTRS)
Heyman, Joseph S. (Inventor); Abedin, Nurul (Inventor); Sun, Kuen J. (Inventor)
1999-01-01
A multilayer object having multiple layers arranged in a stacking direction is evaluated for imperfections such as voids, delaminations and microcracks. First, an acoustic wave is transmitted into the object in the stacking direction via an appropriate transducer/waveguide combination. The wave propagates through the multilayer object and is received by another transducer/waveguide combination preferably located on the same surface as the transmitting combination. The received acoustic wave is correlated with the presence or absence of imperfections by, e.g., generating pulse echo signals indicative of the received acoustic wave, wherein the successive signals form distinct groups over time. The respective peak amplitudes of each group are sampled and curve fit to an exponential curve, wherein a substantial fit of approximately 80-90% indicates an absence of imperfections and a significant deviation indicates the presence of imperfections. Alternatively, the time interval between distinct groups can be measured, wherein equal intervals indicate the absence of imperfections and unequal intervals indicate the presence of imperfections.
Method and apparatus for evaluating multilayer objects for imperfections
NASA Technical Reports Server (NTRS)
Heyman, Joseph S. (Inventor); Abedin, Nurul (Inventor); Sun, Kuen J. (Inventor)
1997-01-01
A multilayer object having multiple layers arranged in a stacking direction is evaluated for imperfections such as voids, delaminations and microcracks. First, an acoustic wave is transmitted into the object in the stacking direction via an appropriate transducer/waveguide combination. The wave propagates through the multilayer object and is received by another transducer/waveguide combination preferably located on the same surface as the transmitting combination. The received acoustic wave is correlated with the presence or absence of imperfections by, e.g., generating pulse echo signals indicative of the received acoustic wave, wherein the successive signals form distinct groups over time. The respective peak amplitudes of each group are sampled and curve fit to an exponential curve, wherein a substantial fit of approximately 80-90% indicates an absence of imperfections and a significant deviation indicates the presence of imperfections. Alternatively, the time interval between distinct groups can be measured, wherein equal intervals indicate the absence of imperfections and unequal intervals indicate the presence of imperfections.
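A hedged sketch of the curve-fit step these patents describe: successive echo-group peak amplitudes are fit to a decaying exponential, and a goodness-of-fit score flags imperfections. The amplitudes and the 85% threshold below are invented for illustration.

```python
# Fit echo-group peak amplitudes to an exponential and use R^2 as the
# imperfection criterion, roughly as in the patents above.
import numpy as np
from scipy.optimize import curve_fit

def decay(n, a, b):
    # Expected peak amplitude of the n-th echo group in a clean, lossy stack.
    return a * np.exp(-b * n)

n = np.arange(1, 9)                                       # echo-group index
peaks = 5.0 * np.exp(-0.35 * n) * (1 + 0.03 * np.random.default_rng(2).normal(size=8))

popt, _ = curve_fit(decay, n, peaks, p0=(peaks[0], 0.3))
resid = peaks - decay(n, *popt)
r2 = 1 - np.sum(resid**2) / np.sum((peaks - peaks.mean())**2)

# Substantial fit (roughly 80-90% in the patents) -> no imperfection indicated.
print(f"R^2 = {r2:.3f} ->", "no imperfection indicated" if r2 > 0.85 else "possible imperfection")
```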
Rolls, David A.; Wang, Peng; McBryde, Emma; Pattison, Philippa; Robins, Garry
2015-01-01
We compare two broad types of empirically grounded random network models in terms of their abilities to capture both network features and simulated Susceptible-Infected-Recovered (SIR) epidemic dynamics. The types of network models are exponential random graph models (ERGMs) and extensions of the configuration model. We use three kinds of empirical contact networks, chosen to provide both variety and realistic patterns of human contact: a highly clustered network, a bipartite network and a snowball sampled network of a “hidden population”. In the case of the snowball sampled network we present a novel method for fitting an edge-triangle model. In our results, ERGMs consistently capture clustering as well or better than configuration-type models, but the latter models better capture the node degree distribution. Despite the additional computational requirements to fit ERGMs to empirical networks, the use of ERGMs provides only a slight improvement in the ability of the models to recreate epidemic features of the empirical network in simulated SIR epidemics. Generally, SIR epidemic results from using configuration-type models fall between those from a random network model (i.e., an Erdős-Rényi model) and an ERGM. The addition of subgraphs of size four to edge-triangle type models does improve agreement with the empirical network for smaller densities in clustered networks. Additional subgraphs do not make a noticeable difference in our example, although we would expect the ability to model cliques to be helpful for contact networks exhibiting household structure. PMID:26555701
Determining XV-15 aeroelastic modes from flight data with frequency-domain methods
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.; Tischler, Mark B.
1993-01-01
The XV-15 tilt-rotor wing has six major aeroelastic modes that are close in frequency. To precisely excite individual modes during flight test, dual flaperon exciters with automatic frequency-sweep controls were installed. The resulting structural data were analyzed in the frequency domain (Fourier transformed). All spectral data were computed using chirp z-transforms. Modal frequencies and damping were determined by fitting curves to frequency-response magnitude and phase data. The results given in this report are for the XV-15 with its original metal rotor blades. Also, frequency and damping values are compared with theoretical predictions made using two different programs, CAMRAD and ASAP. The frequency-domain data-analysis method proved to be very reliable and adequate for tracking aeroelastic modes during flight-envelope expansion. This approach required less flight-test time and yielded mode estimations that were more repeatable, compared with the exponential-decay method previously used.
Quantitative Study on Corrosion of Steel Strands Based on Self-Magnetic Flux Leakage.
Xia, Runchuan; Zhou, Jianting; Zhang, Hong; Liao, Leng; Zhao, Ruiqiang; Zhang, Zeyu
2018-05-02
This paper proposes a new computing method to quantitatively and non-destructively determine the corrosion of steel strands by analyzing the self-magnetic flux leakage (SMFL) signals from them. The magnetic dipole model and three growth models (Logistic, Exponential, and Linear) were proposed to theoretically analyze the characteristic value of SMFL. An experimental study on corrosion detection by the magnetic sensor was then carried out, and the setup of the magnetic scanning device and the signal collection method are also introduced. The results verify the Logistic growth model as the optimal model for calculating the magnetic field, with good fitting performance. Combined with the experimental data analysis, the calculated B_xL(x, z) curves generally agree with the measured values in amplitude. This method offers significant application prospects for evaluating the corrosion and the residual bearing capacity of steel strands.
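A minimal sketch, with invented data, of comparing the three candidate growth models named above by sum of squared errors; all parameter values are placeholders.

```python
# Compare Logistic, Exponential and Linear growth models on a synthetic
# corrosion characteristic, as a stand-in for the SMFL model selection above.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):   return K / (1 + np.exp(-r * (t - t0)))
def exponential(t, a, r):    return a * np.exp(r * t)
def linear(t, a, b):         return a * t + b

t = np.linspace(0, 30, 16)                                # corrosion time (days)
y = logistic(t, 1.0, 0.4, 12) + np.random.default_rng(3).normal(0, 0.02, t.size)

for name, f, p0 in [("logistic", logistic, (1, 0.3, 10)),
                    ("exponential", exponential, (0.05, 0.1)),
                    ("linear", linear, (0.03, 0.0))]:
    popt, _ = curve_fit(f, t, y, p0=p0, maxfev=10000)
    sse = np.sum((y - f(t, *popt))**2)
    print(f"{name:>12}: SSE = {sse:.4f}")
```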
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances, where logical errors are extremely unlikely, we adopt the splitting method widely used in Monte Carlo simulations of rare events and Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability P_L for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay P_L ~ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
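Since P_L ~ exp[-α(p)d] is linear in d on a log scale, the decay rate can be recovered with an ordinary linear fit, as in this sketch with illustrative probabilities (not the paper's splitting-method output).

```python
# Extract the decay rate alpha(p) from logical error probabilities vs distance.
import numpy as np

d = np.array([4, 8, 12, 16, 20])                          # code distances
P_L = np.array([3e-2, 2e-3, 1.4e-4, 9e-6, 6e-7])          # illustrative values

# log P_L = log A - alpha * d, so the slope of a linear fit gives -alpha.
slope, logA = np.polyfit(d, np.log(P_L), 1)
print(f"alpha(p) = {-slope:.3f}, prefactor A = {np.exp(logA):.3f}")
```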
The Mass Distribution of Stellar-mass Black Holes
NASA Astrophysics Data System (ADS)
Farr, Will M.; Sravan, Niharika; Cantrell, Andrew; Kreidberg, Laura; Bailyn, Charles D.; Mandel, Ilya; Kalogera, Vicky
2011-11-01
We perform a Bayesian analysis of the mass distribution of stellar-mass black holes using the observed masses of 15 low-mass X-ray binary systems undergoing Roche lobe overflow and 5 high-mass, wind-fed X-ray binary systems. Using Markov Chain Monte Carlo calculations, we model the mass distribution both parametrically—as a power law, exponential, Gaussian, combination of two Gaussians, or log-normal distribution—and non-parametrically—as histograms with varying numbers of bins. We provide confidence bounds on the shape of the mass distribution in the context of each model and compare the models with each other by calculating their relative Bayesian evidence as supported by the measurements, taking into account the number of degrees of freedom of each model. The mass distribution of the low-mass systems is best fit by a power law, while the distribution of the combined sample is best fit by the exponential model. This difference indicates that the low-mass subsample is not consistent with being drawn from the distribution of the combined population. We examine the existence of a "gap" between the most massive neutron stars and the least massive black holes by considering the value, M_1%, of the 1% quantile from each black hole mass distribution as the lower bound of black hole masses. Our analysis generates posterior distributions for M_1%; the best model (the power law) fitted to the low-mass systems has a distribution of lower bounds with M_1% > 4.3 M_sun with 90% confidence, while the best model (the exponential) fitted to all 20 systems has M_1% > 4.5 M_sun with 90% confidence. We conclude that our sample of black hole masses provides strong evidence of a gap between the maximum neutron star mass and the lower bound on black hole masses. Our results on the low-mass sample are in qualitative agreement with those of Ozel et al., although our broad model selection analysis more reliably reveals the best-fit quantitative description of the underlying mass distribution. The results on the combined sample of low- and high-mass systems are in qualitative agreement with Fryer & Kalogera, although the presence of a mass gap remains theoretically unexplained.
A quasi-likelihood approach to non-negative matrix factorization
Devarajan, Karthik; Cheung, Vincent C.K.
2017-01-01
A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hacke, Peter; Spataru, Sergiu; Johnston, Steve
A progression of potential-induced degradation (PID) mechanisms is observed in CdTe modules, including shunting/junction degradation and two different manifestations of series resistance, depending on the stress level and water ingress. The dark I-V method for in-situ characterization of P_max based on superposition was adapted for the thin-film modules undergoing PID in view of the degradation mechanisms observed. An exponential model based on module temperature and relative humidity was fit to the PID rate for multiple stress levels in chamber tests and validated by predicting the observed degradation of the module type in the field.
Forgetting Curves: Implications for Connectionist Models
ERIC Educational Resources Information Center
Sikstrom, Sverker
2002-01-01
Forgetting in long-term memory, as measured in a recall or a recognition test, is faster for items encoded more recently than for items encoded earlier. Data on forgetting curves fit a power function well. In contrast, many connectionist models predict either exponential decay or completely flat forgetting curves. This paper suggests a…
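A small sketch of the power-versus-exponential comparison implied above, fit to simulated retention data; the generating exponent and noise level are arbitrary.

```python
# Compare power-law and exponential fits to a synthetic forgetting curve.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1, 2, 4, 8, 16, 32, 64], float)             # retention interval
recall = 0.9 * t**-0.3 + np.random.default_rng(4).normal(0, 0.01, t.size)

power = lambda t, a, b: a * t**(-b)
expo  = lambda t, a, b: a * np.exp(-b * t)

for name, f in [("power", power), ("exponential", expo)]:
    popt, _ = curve_fit(f, t, recall, p0=(1.0, 0.3))
    sse = np.sum((recall - f(t, *popt))**2)
    print(f"{name:>12}: SSE = {sse:.5f}")   # power law should fit markedly better
```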
MO-F-CAMPUS-I-05: Quantitative ADC Measurement of Esophageal Cancer Before and After Chemoradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, L; UT MD Anderson Cancer Center, Houston, TX; Son, JB
2015-06-15
Purpose: We investigated whether quantitative diffusion imaging can be used as an imaging biomarker for early prediction of treatment response in esophageal cancer. Methods: Eight patients with esophageal cancer underwent baseline and interim MRI studies during chemoradiation on a 3T whole body MRI scanner with an 8-channel torso phased array coil. Each MRI study contained two axial diffusion-weighted imaging (DWI) series with a conventional DWI sequence and a reduced field-of-view DWI sequence (FOCUS) of varying b-values. ADC maps with two b-values were computed from conventional DWI images using a mono-exponential model. For each DWI sequence, a separate ADC_all was computed by fitting the signal intensity of images with all the b-values to a single exponential model. For the FOCUS sequence, a bi-exponential model was used to extract perfusion and diffusion coefficients (ADC_perf and ADC_diff) and their contributions to the signal decay. A board-certified radiologist contoured the tumor region, and mean ADC values and standard deviations of tumor and muscle ROIs were recorded from the different ADC maps. Results: Our results showed that (1) the magnitude of ADCs from the same ROIs by the different analysis methods can be substantially different; (2) for a given method, the change between the baseline and interim muscle ADCs was relatively small (≤10%), whereas the change between the baseline and interim tumor ADCs was substantially larger, with the change in ADC_diff by FOCUS DWI showing the largest percentage change of 73.2%; and (3) the range of the relative change of a specific parameter also differed across patients. Conclusion: Presently, we do not have final pathological confirmation of the treatment response for all the patients. However, for the few patients whose surgical specimens are available, the quantitative ADC changes have been found to be useful as a potential predictor of treatment response.
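A hedged sketch of the two signal models referenced above: a mono-exponential ADC fit and a bi-exponential (perfusion plus diffusion) fit versus b-value. The b-values and coefficients are typical literature-style values, not the study's.

```python
# Mono- and bi-exponential fits of DWI signal intensity vs b-value.
import numpy as np
from scipy.optimize import curve_fit

def mono(b, S0, adc):
    return S0 * np.exp(-b * adc)

def biexp(b, S0, f, Dperf, Ddiff):
    # f = perfusion fraction; Dperf >> Ddiff in tissue.
    return S0 * (f * np.exp(-b * Dperf) + (1 - f) * np.exp(-b * Ddiff))

b = np.array([0, 25, 50, 100, 250, 500, 750, 1000], float)        # s/mm^2
S = biexp(b, 100, 0.15, 20e-3, 1.2e-3) + np.random.default_rng(5).normal(0, 0.5, b.size)

p_mono, _ = curve_fit(mono, b, S, p0=(100, 1e-3))
p_bi, _ = curve_fit(biexp, b, S, p0=(100, 0.1, 10e-3, 1e-3),
                    bounds=([0, 0, 1e-3, 1e-4], [200, 1, 1, 1e-2]))
print(f"mono-exponential ADC_all  = {p_mono[1]:.2e} mm^2/s")
print(f"bi-exponential   ADC_perf = {p_bi[2]:.2e}, ADC_diff = {p_bi[3]:.2e} mm^2/s")
```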
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations attaining any desired trade-off between accuracy and computing cost.
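A minimal sketch of the core construction: a least-squares fit of a sum of exponentials whose exponents form a geometric sequence, with a crude outer scan standing in for the exponent-multiplier optimization; the target function and all constants are illustrative.

```python
# Least-squares exponential-sum approximation with geometrically spaced exponents.
import numpy as np

def exp_sum_fit(x, u, n_terms=8, ratio=2.0, k0=0.05):
    # Exponents k0 * ratio**j, j = 0..n_terms-1: a geometric sequence.
    ks = k0 * ratio ** np.arange(n_terms)
    A = np.exp(-np.outer(x, ks))                          # design matrix
    coeffs, *_ = np.linalg.lstsq(A, u, rcond=None)
    return ks, coeffs, A @ coeffs

x = np.linspace(0.01, 20, 400)
u = 1.0 / np.sqrt(1.0 + x**2)                             # algebraic, kernel-like factor

# Crude scan over the exponent multiplier k0 in place of the full optimization.
best = min((np.max(np.abs(u - exp_sum_fit(x, u, k0=k)[2])), k)
           for k in np.geomspace(0.01, 0.5, 25))
print(f"max abs error = {best[0]:.2e} at k0 = {best[1]:.3f}")
```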
DeMars, Craig A; Auger-Méthé, Marie; Schlägel, Ulrike E; Boutin, Stan
2013-01-01
Analyses of animal movement data have primarily focused on understanding patterns of space use and the behavioural processes driving them. Here, we analyzed animal movement data to infer components of individual fitness, specifically parturition and neonate survival. We predicted that parturition and neonate loss events could be identified by sudden and marked changes in female movement patterns. Using GPS radio-telemetry data from female woodland caribou (Rangifer tarandus caribou), we developed and tested two novel movement-based methods for inferring parturition and neonate survival. The first method estimated movement thresholds indicative of parturition and neonate loss from population-level data then applied these thresholds in a moving-window analysis on individual time-series data. The second method used an individual-based approach that discriminated among three a priori models representing the movement patterns of non-parturient females, females with surviving offspring, and females losing offspring. The models assumed that step lengths (the distance between successive GPS locations) were exponentially distributed and that abrupt changes in the scale parameter of the exponential distribution were indicative of parturition and offspring loss. Both methods predicted parturition with near certainty (>97% accuracy) and produced appropriate predictions of parturition dates. Prediction of neonate survival was affected by data quality for both methods; however, when using high quality data (i.e., with few missing GPS locations), the individual-based method performed better, predicting neonate survival status with an accuracy rate of 87%. Understanding ungulate population dynamics often requires estimates of parturition and neonate survival rates. With GPS radio-collars increasingly being used in research and management of ungulates, our movement-based methods represent a viable approach for estimating rates of both parameters. PMID:24324866
[Application of exponential smoothing method in prediction and warning of epidemic mumps].
Shi, Yun-ping; Ma, Jia-qi
2010-06-01
To analyze daily data on epidemic mumps in a province from 2004 to 2008 and to set up an exponential smoothing model for prediction. Epidemic mumps in 2008 was predicted and warned against by calculating 7-day moving sums of the daily reported mumps cases during 2005-2008, removing the weekend effect from the data, and applying exponential smoothing to the data from 2005 to 2007. The performance of Holt-Winters exponential smoothing was good: the warning sensitivity was 76.92%, the specificity was 83.33%, and the timeliness rate was 80%. It is practicable to use the exponential smoothing method to warn against epidemic mumps.
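A hedged sketch of a Holt-Winters fit to a 7-day-summed daily case series, using statsmodels on simulated counts; the seasonal structure and warning horizon are assumptions, not the study's data.

```python
# Holt-Winters exponential smoothing of a 7-day moving sum of daily cases.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(6)
days = pd.date_range("2005-01-01", periods=3 * 365, freq="D")
seasonal = 30 + 20 * np.sin(2 * np.pi * np.arange(days.size) / 365)
cases = pd.Series(rng.poisson(seasonal), index=days)      # simulated daily counts
smoothed = cases.rolling(7).sum().dropna()                # 7-day moving summation

model = ExponentialSmoothing(smoothed, trend="add",
                             seasonal="add", seasonal_periods=365).fit()
forecast = model.forecast(30)                             # 30-day warning horizon
print(forecast.head())
```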
The exponential behavior and stabilizability of the stochastic magnetohydrodynamic equations
NASA Astrophysics Data System (ADS)
Wang, Huaqiao
2018-06-01
This paper studies the two-dimensional stochastic magnetohydrodynamic equations, which are used to describe turbulent flows in magnetohydrodynamics. The exponential behavior and the exponential mean square stability of the weak solutions are proved by application of the energy method. Furthermore, we establish pathwise exponential stability by using the exponential mean square stability. When the stochastic perturbations satisfy certain additional hypotheses, we can also obtain pathwise exponential stability results without using the mean square stability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campione, Salvatore; Warne, Larry K.; Sainath, Kamalesh
In this report we overview the fundamental concepts for a pair of techniques which together greatly hasten computational predictions of electromagnetic pulse (EMP) excitation of finite-length dissipative conductors over a ground plane. In a time-domain, transmission line (TL) model implementation, predictions are computationally bottlenecked time-wise, either for late-time predictions (about the 100 ns-10000 ns range) or for predictions concerning EMP excitation of long TLs (on the order of kilometers or more). This is because the method requires a temporal convolution to account for the losses in the ground. Addressing this to facilitate practical simulation of EMP excitation of TLs, we first apply a technique to extract an (approximate) complex exponential function basis-fit to the ground/Earth's impedance function, and then incorporate this into a recursion-based convolution acceleration technique. Because the recursion-based method only requires the evaluation of the most recent voltage history data (versus the entire history in a "brute-force" convolution evaluation), we achieve the necessary speed-ups across a variety of TL/Earth geometry/material scenarios.
Concepción-Acevedo, Jeniffer; Weiss, Howard N; Chaudhry, Waqas Nasir; Levin, Bruce R
2015-01-01
The maximum exponential growth rate, the Malthusian parameter (MP), is commonly used as a measure of fitness in experimental studies of adaptive evolution and of the effects of antibiotic resistance and other genes on the fitness of planktonic microbes. Thanks to automated, multi-well optical density plate readers and computers, with little hands-on effort investigators can readily obtain hundreds of estimates of MPs in less than a day. Here we compare estimates of the relative fitness of antibiotic susceptible and resistant strains of E. coli, Pseudomonas aeruginosa and Staphylococcus aureus based on MP data obtained with automated multi-well plate readers with the results from pairwise competition experiments. This leads us to question the reliability of estimates of MP obtained with these high throughput devices and the utility of these estimates of the maximum growth rates to detect fitness differences.
Effects of random aspects of cutting tool wear on surface roughness and tool life
NASA Astrophysics Data System (ADS)
Nabil, Ben Fredj; Mabrouk, Mohamed
2006-10-01
The effects of random aspects of cutting tool flank wear on surface roughness and on tool lifetime, when turning the AISI 1045 carbon steel, were studied in this investigation. It was found that standard deviations corresponding to tool flank wear and to the surface roughness increase exponentially with cutting time. Under cutting conditions that correspond to finishing operations, no significant differences were found between the calculated values of the capability index C_p at the steady-state region of the tool flank wear, using the best-fit method or the Box-Cox transformation, or by making the assumption that the surface roughness data are normally distributed. Hence, a method to establish cutting tool lifetime could be established that simultaneously respects the desired average of surface roughness and the required capability index.
Luminescence dating of quaternary deposits in geology in Brazil.
Tatumi, Sonia Hatsue; Gozzi, Giuliano; Yee, Márcio; de Oliveira, Victor Inácio; Sallun, Alethéa Ernandes Martins; Suguio, Kenitiro
2006-01-01
In the present work, systematic dating by luminescence methods has been done on 50 Quaternary geological samples within the study area of São Paulo State, Brazil. Bleaching experiments showed that the residual TL intensity of the 375°C peak of quartz was reached after 10 h of sunlight exposure. The intensity decays of the 325 and 375°C TL peaks can be fitted using a second-order exponential equation. Paleodose values were evaluated using regeneration methods with multiple aliquots. The samples dated indicate preliminary ages varying from 9 +/- 1 to 935 +/- 130 ka for colluvio-elluvial deposits, and from 17 +/- 2 to 215 +/- 30 ka for alluvial deposits of the study area. They cover four peneplained surfaces shaped during the Quaternary: I (1000-400 ka), II (400-120 ka), III (120-10 ka) and IV (10 ka until today), in decreasing order.
de Castro, Alberto; Ortiz, Sergio; Gambra, Enrique; Siedlecki, Damian; Marcos, Susana
2010-10-11
We present an optimization method to retrieve the gradient index (GRIN) distribution of the in-vitro crystalline lens from optical path difference data extracted from OCT images. Three-dimensional OCT images of the crystalline lens are obtained in two orientations (with the anterior surface up and posterior surface up), allowing the lens geometry to be obtained. The GRIN reconstruction method is based on a genetic algorithm that searches for the parameters of a 4-variable GRIN model that best fits the distorted posterior surface of the lens. Computer simulations showed that, for noise of 5 μm in the surface elevations, the GRIN is recovered with an accuracy of 0.003 and 0.010 in the refractive indices of the nucleus and surface of the lens, respectively. The method was applied to retrieve three-dimensionally the GRIN of a porcine crystalline lens in vitro. We found a refractive index ranging from 1.362 at the surface to 1.443 in the nucleus of the lens, an axial exponential decay of the GRIN profile of 2.62 and a meridional exponential decay ranging from 3.56 to 5.18. The effect of GRIN on the aberrations of the lens was also studied. The estimated spherical aberration of the measured porcine lens was 2.87 μm assuming a homogeneous equivalent refractive index, and the presence of GRIN shifted the spherical aberration toward negative values (-0.97 μm) for a 6-mm pupil.
A method of examining the structure and topological properties of public-transport networks
NASA Astrophysics Data System (ADS)
Dimitrov, Stavri Dimitri; Ceder, Avishai (Avi)
2016-06-01
This work presents a new method of examining the structure of public-transport networks (PTNs) and analyzes their topological properties through a combination of computer programming, statistical data and large-network analyses. In order to automate the extraction, processing and exporting of data, a software program was developed that extracts the needed data from the General Transit Feed Specification, thus overcoming difficulties in accessing and collecting data. The proposed method was applied to a real-life PTN in Auckland, New Zealand, with the purpose of examining whether it showed characteristics of scale-free networks and exhibited features of "small-world" networks. As a result, new regression equations were derived that analytically describe the observed, strong, non-linear relationships among the probabilities of randomly chosen stops in the PTN being serviced by a given number of routes. The established dependence is best fitted by an exponential rather than a power-law function, showing that the PTN examined is neither random nor scale-free, but a mixture of the two. This finding explains the presence of hubs that are not typical of exponential networks and simultaneously are not highly connected to the other nodes, as is the case with scale-free networks. On the other hand, the observed values of the topological properties of the network show that although it is highly clustered, owing to its representation as a directed graph it differs slightly from "small-world" networks, which are characterized by strong clustering and a short average path length.
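A minimal sketch of the exponential-versus-power-law comparison above, applied to an invented distribution of routes per stop.

```python
# Fit exponential and power-law forms to P(k), the probability that a stop is
# served by k routes, and compare the two by R^2.
import numpy as np
from scipy.optimize import curve_fit

k = np.arange(1, 13)
p = np.array([0.35, 0.22, 0.14, 0.09, 0.06, 0.045, 0.03,
              0.02, 0.013, 0.009, 0.006, 0.004])          # illustrative frequencies

expo = lambda k, a, b: a * np.exp(-b * k)
plaw = lambda k, a, g: a * k**(-g)

for name, f in [("exponential", expo), ("power law", plaw)]:
    popt, _ = curve_fit(f, k, p, p0=(0.5, 0.5))
    r2 = 1 - np.sum((p - f(k, *popt))**2) / np.sum((p - p.mean())**2)
    print(f"{name:>12}: R^2 = {r2:.4f}")
```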
NASA Astrophysics Data System (ADS)
Deopa, Nisha; Rao, A. S.; Gupta, Mohini; Vijaya Prakash, G.
2018-01-01
Neodymium-doped lithium lead alumino borate glasses were synthesized with the molar composition 10Li2O-10PbO-(10-x)Al2O3-70B2O3-xNd2O3 (where x = 0.1, 0.5, 1.0, 1.5, 2.0 and 2.5 mol%) via the conventional melt quenching technique to understand their lasing potentialities using absorption, emission and photoluminescence decay spectral measurements. The oscillator strengths measured from the absorption spectra were used to estimate the Judd-Ofelt intensity parameters using a least squares fitting procedure. The emission spectra recorded for the as-prepared glasses exhibit two emission transitions, 4F3/2 → 4I11/2 (1063 nm) and 4F3/2 → 4I9/2 (1350 nm), for which radiative parameters have been evaluated. The emission intensity increases with Nd3+ ion concentration up to 1 mol%, beyond which concentration quenching takes place. The decay profiles show a single-exponential nature at lower Nd3+ ion concentrations and a non-exponential nature at higher concentrations. To elucidate the nature of the energy transfer process, the non-exponential decay curves were well fitted to the Inokuti-Hirayama model. The relatively higher values of emission cross-sections, branching ratios and quantum efficiency obtained for 1.0 mol% of Nd3+ ions in LiPbAlB glass suggest its aptness for generating lasing action at 1063 nm in the NIR region.
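A hedged sketch of an Inokuti-Hirayama fit to a non-exponential decay, using the standard form I(t) = I0 * exp(-t/tau0 - Q * (t/tau0)^(3/S)) with S = 6 for dipole-dipole transfer; the decay data are simulated, not the glasses measured above.

```python
# Inokuti-Hirayama fit of a simulated non-exponential luminescence decay.
import numpy as np
from scipy.optimize import curve_fit

def ih_decay(t, I0, tau0, Q, S=6):
    # S = 6, 8, 10 for dipole-dipole, dipole-quadrupole, quadrupole-quadrupole.
    return I0 * np.exp(-t / tau0 - Q * (t / tau0) ** (3.0 / S))

t = np.linspace(0, 2000, 200)                             # time (microseconds)
I = ih_decay(t, 1.0, 400.0, 0.8) * (1 + 0.02 * np.random.default_rng(7).normal(size=t.size))

popt, _ = curve_fit(lambda t, I0, tau0, Q: ih_decay(t, I0, tau0, Q),
                    t, I, p0=(1.0, 300.0, 0.5))
print(f"tau0 = {popt[1]:.0f} us, energy-transfer parameter Q = {popt[2]:.2f}")
```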
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-04-01
Model-based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation based methods to processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse, as well as its first- and second-order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated to test the functionality of the proposed method, and it was then applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. Promising results endorse the proposed method for future real-time applications.
The stationary non-equilibrium plasma of cosmic-ray electrons and positrons
NASA Astrophysics Data System (ADS)
Tomaschitz, Roman
2016-06-01
The statistical properties of the two-component plasma of cosmic-ray electrons and positrons measured by the AMS-02 experiment on the International Space Station and the HESS array of imaging atmospheric Cherenkov telescopes are analyzed. Stationary non-equilibrium distributions defining the relativistic electron-positron plasma are derived semi-empirically by performing spectral fits to the flux data and reconstructing the spectral number densities of the electronic and positronic components in phase space. These distributions are relativistic power-law densities with exponential cutoff, admitting an extensive entropy variable and converging to the Maxwell-Boltzmann or Fermi-Dirac distributions in the non-relativistic limit. Cosmic-ray electrons and positrons constitute a classical (low-density high-temperature) plasma due to the low fugacity in the quantized partition function. The positron fraction is assembled from the flux densities inferred from least-squares fits to the electron and positron spectra and is subjected to test by comparing with the AMS-02 flux ratio measured in the GeV interval. The calculated positron fraction extends to TeV energies, predicting a broad spectral peak at about 1 TeV followed by exponential decay.
Constraining the red shifts of TeV BL Lac objects
NASA Astrophysics Data System (ADS)
Qin, Longhua; Wang, Jiancheng; Yan, Dahai; Yang, Chuyuan; Yuan, Zunli; Zhou, Ming
2018-01-01
We present a model-dependent method to estimate the red shifts of three TeV BL Lac objects (BL Lacs) by fitting their (quasi-)simultaneous multi-waveband spectral energy distributions (SEDs) with a one-zone leptonic synchrotron self-Compton model. Considering the impact of electron energy distributions (EEDs) on the results, we use three types of EEDs to fit the SEDs: a power-law EED with exponential cut-off (PLC), a log-parabola (PLLP) EED and a broken power-law (BPL) EED. We also use a parameter α to describe the uncertainties of the extragalactic background light models, as in Abdo et al. We then use a Markov chain Monte Carlo method to explore the multi-dimensional parameter space and obtain the uncertainties of the model parameters based on the observational data. We apply our method to obtain the red shifts of the three TeV BL Lac objects at the marginalized 68 per cent confidence level, and find that the PLC EED does not fit the SEDs. For 3C66A, the red shift is 0.14-0.31 and 0.16-0.32 in the BPL and PLLP EEDs. For PKS1424+240, the red shift is 0.55-0.68 and 0.55-0.67 in the BPL and PLLP EEDs. For PG1553+113, the red shift is 0.22-0.48 and 0.22-0.39 in the BPL and PLLP EEDs. We also estimate the red shift of PKS1424+240 in the high stage to be 0.46-0.67 in the PLLP EED, roughly consistent with that in the low stage.
NASA Astrophysics Data System (ADS)
Sanford, W. E.
2015-12-01
Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. For both systems the two-parameter Weibull function nearly always produced a substantially better fit to the data than the one-parameter exponential function. For the single porosity system it was found that the use of three parameters was often optimal for accurately describing the base-flow age distribution, whereas for the dual porosity system the fourth parameter was often required to fit the more complicated response curves.
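A small sketch contrasting the one-parameter exponential and two-parameter Weibull fits discussed above, on an invented base-flow age CDF.

```python
# Fit exponential and Weibull age-distribution functions to a synthetic CDF.
import numpy as np
from scipy.optimize import curve_fit

ages = np.array([1, 2, 5, 10, 20, 30, 50, 75, 100], float)        # years
cdf = np.array([0.04, 0.09, 0.22, 0.40, 0.63, 0.76, 0.89, 0.95, 0.98])

weib = lambda t, lam, k: 1 - np.exp(-(t / lam) ** k)   # extra shape parameter k
expo = lambda t, lam: 1 - np.exp(-t / lam)             # one-parameter exponential

pw, _ = curve_fit(weib, ages, cdf, p0=(20, 1.0))
pe, _ = curve_fit(expo, ages, cdf, p0=(20,))
for name, f, p in [("Weibull", weib, pw), ("exponential", expo, pe)]:
    sse = np.sum((cdf - f(ages, *p))**2)
    print(f"{name:>12}: params = {np.round(p, 2)}, SSE = {sse:.5f}")
```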
Nasserie, Tahmina; Tuite, Ashleigh R; Whitmore, Lindsay; Hatchette, Todd; Drews, Steven J; Peci, Adriana; Kwong, Jeffrey C; Friedman, Dara; Garber, Gary; Gubbay, Jonathan; Fisman, David N
2017-01-01
Seasonal influenza epidemics occur frequently. Rapid characterization of seasonal dynamics and forecasting of epidemic peaks and final sizes could help support real-time decision-making related to vaccination and other control measures, but real-time forecasting remains challenging. We used the previously described "incidence decay with exponential adjustment" (IDEA) model, a 2-parameter phenomenological model, to evaluate the characteristics of the 2015-2016 influenza season in 4 Canadian jurisdictions: the Provinces of Alberta, Nova Scotia and Ontario, and the City of Ottawa. Model fits were updated weekly upon receipt of incident virologically confirmed case counts. Best-fit models were used to project seasonal influenza peaks and epidemic final sizes. The 2015-2016 influenza season was mild and late-peaking. Parameter estimates generated through fitting were consistent in the 2 largest jurisdictions (Ontario and Alberta) and with pooled data including Nova Scotia counts (R_0 approximately 1.4 for all fits). Lower R_0 estimates were generated in Nova Scotia and Ottawa. Final size projections that made use of complete time series were accurate to within 6% of true final sizes, but final size projections were less accurate when based on pre-peak data. Projections of epidemic peaks stabilized before the true epidemic peak, but these were persistently early (~2 weeks) relative to the true peak. A simple, 2-parameter influenza model provided reasonably accurate real-time projections of influenza seasonal dynamics in an atypically late, mild influenza season. Challenges are similar to those seen with more complex forecasting methodologies. Future work includes identification of seasonal characteristics associated with variability in model performance.
NASA Astrophysics Data System (ADS)
Cao, Bin; Liao, Ningfang; Li, Yasheng; Cheng, Haobo
2017-05-01
The use of spectral reflectance as fundamental color information finds application in diverse fields related to imaging. Many approaches use training sets to train the algorithm used for color classification. In this context, we note that modification of the training set directly impacts the accuracy of reflectance reconstruction based on classical reflectance reconstruction methods. Different modifying criteria are not always consistent with each other, since they have different emphases; spectral reflectance similarity focuses on the deviation of the reconstructed reflectance, whereas colorimetric similarity emphasizes human perception. We present a method to improve the accuracy of the reconstructed spectral reflectance by adaptively combining colorimetric and spectral reflectance similarities. Different exponential factors of the weighting coefficients were investigated. The spectral reflectance reconstructed by the proposed method exhibits considerable improvements in terms of the root-mean-square error and goodness-of-fit coefficient of the spectral reflectance errors, as well as color differences under different illuminants. Our method is applicable to diverse areas such as textiles, printing, art, and other industries.
Systematic errors in transport calculations of shear viscosity using the Green-Kubo formalism
NASA Astrophysics Data System (ADS)
Rose, J. B.; Torres-Rincon, J. M.; Oliinychenko, D.; Schäfer, A.; Petersen, H.
2018-05-01
The purpose of this study is to provide a reproducible framework for the use of the Green-Kubo formalism to extract transport coefficients. More specifically, in the case of shear viscosity, we investigate the limitations and technical details of fitting the auto-correlation function to a decaying exponential. This fitting procedure is found to be applicable for systems interacting through both constant and energy-dependent cross-sections, although in the latter case this is only true for sufficiently dilute systems. We find that the optimal fit technique consists of simultaneously fixing the intercept of the correlation function and using a fitting interval constrained by the relative error on the correlation function. The formalism is then applied to the full hadron gas, for which we obtain the shear viscosity to entropy ratio.
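A minimal sketch of the recommended procedure: fix the intercept of the auto-correlation function, restrict the fit window by the relative error, and fit only the decay time; the synthetic ACF and error model are assumptions.

```python
# Fixed-intercept exponential fit of a (synthetic) shear-stress auto-correlation
# function, with the fit window limited by the relative error on C(t).
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 251)
C0 = 2.0                                                  # fixed intercept C(0)
acf = C0 * np.exp(-t / 0.8) + np.random.default_rng(8).normal(0, 0.01, t.size)
err = 0.01 * np.sqrt(1 + t)                               # noise grows with lag

mask = err / np.maximum(np.abs(acf), 1e-12) < 0.1         # keep <10% relative error
fit = lambda t, tau: C0 * np.exp(-t / tau)                # only tau is free
popt, _ = curve_fit(fit, t[mask], acf[mask], p0=(1.0,), sigma=err[mask])

tau = popt[0]
# The Green-Kubo viscosity is then proportional to C(0) * tau (times V/T factors).
print(f"tau = {tau:.3f}")
```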
Toledo, Eran; Collins, Keith A; Williams, Ursula; Lammertin, Georgeanne; Bolotin, Gil; Raman, Jai; Lang, Roberto M; Mor-Avi, Victor
2005-12-01
Echocardiographic quantification of myocardial perfusion is based on analysis of contrast replenishment after destructive high-energy ultrasound impulses (flash-echo). This technique is limited by nonuniform microbubble destruction and the dependency on exponential fitting of a small number of noisy time points. We hypothesized that brief interruptions of contrast infusion (ICI) would result in uniform contrast clearance followed by slow replenishment and, thus, would allow analysis from multiple data points without exponential fitting. Electrocardiographically triggered images were acquired in 14 isolated rabbit hearts (Langendorff) at 3 levels of coronary flow (baseline, 50%, and 15%) during contrast infusion (Definity) with flash-echo and with a 20-second infusion interruption. Myocardial videointensity was measured over time from flash-echo sequences, from which the characteristic constant beta was calculated using an exponential fit. Peak contrast inflow rate was calculated from ICI data using analysis of local time derivatives. Computer simulations were used to investigate the effects of noise on the accuracy of peak contrast inflow rate and beta calculations. ICI resulted in uniform contrast clearance and baseline replenishment times of 15 to 25 cardiac cycles. The calculated peak contrast inflow rate followed the changes in coronary flow in all hearts at both levels of reduced flow (P < .05) and had a low intermeasurement variability of 7 +/- 6%. With flash-echo, contrast clearance was less uniform and baseline replenishment times were only 4 to 6 cardiac cycles. Beta decreased significantly only at 15% flow and had an intermeasurement variability of 42 +/- 33%. Computer simulations showed that measurement errors in both perfusion indices increased with noise, but beta had larger errors at higher rates of contrast inflow. ICI provides the basis for accurate and reproducible quantification of myocardial perfusion using fast and robust numeric analysis, and may constitute an alternative to currently used techniques.
NASA Astrophysics Data System (ADS)
Abdo, A. A.; Abeysekara, U.; Allen, B. T.; Aune, T.; Berley, D.; Bonamente, E.; Christopher, G. E.; DeYoung, T.; Dingus, B. L.; Ellsworth, R. W.; Galbraith-Frew, J. G.; Gonzalez, M. M.; Goodman, J. A.; Hoffman, C. M.; Hüntemeyer, P. H.; Hui, C. M.; Kolterman, B. E.; Linnemann, J. T.; McEnery, J. E.; Mincer, A. I.; Morgan, T.; Nemethy, P.; Pretz, J.; Ryan, J. M.; Saz Parkinson, P. M.; Shoup, A.; Sinnis, G.; Smith, A. J.; Vasileiou, V.; Walker, G. P.; Williams, D. A.; Yodh, G. B.
2012-07-01
The Cygnus region is a very bright and complex portion of the TeV sky, host to unidentified sources and a diffuse excess with respect to conventional cosmic-ray propagation models. Two of the brightest TeV sources, MGRO J2019+37 and MGRO J2031+41, are analyzed using Milagro data with a new technique, and their emission is tested under two different spectral assumptions: a power law and a power law with an exponential cutoff. The new analysis technique is based on an energy estimator that uses the fraction of photomultiplier tubes in the observatory that detect the extensive air shower. The photon spectrum is measured in the range 1-100 TeV using the last three years of Milagro data (2005-2008), with the detector in its final configuration. An F-test indicates that MGRO J2019+37 is better fit by a power law with an exponential cutoff than by a simple power law. The best-fitting parameters for the power law with exponential cutoff model are a normalization at 10 TeV of 7^{+5}_{-2} × 10^{-10} s^{-1} m^{-2} TeV^{-1}, a spectral index of 2.0^{+0.5}_{-1.0}, and a cutoff energy of 29^{+50}_{-16} TeV. MGRO J2031+41 shows no evidence of a cutoff. The best-fitting parameters for a power law are a normalization of 2.1^{+0.6}_{-0.6} × 10^{-10} s^{-1} m^{-2} TeV^{-1} and a spectral index of 3.22^{+0.23}_{-0.18}. The overall flux is subject to a ~30% systematic uncertainty. The systematic uncertainty on the power-law indices is ~0.1. Both uncertainties have been verified with cosmic-ray data. A comparison with previous results from TeV J2032+4130, MGRO J2031+41, and MGRO J2019+37 is also presented.
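A hedged sketch of the two spectral hypotheses, fit in log space to an invented flux sample normalized at 10 TeV; an F-test on the residuals would then arbitrate between them, as in the analysis above.

```python
# Power law vs power law with exponential cutoff, fit to synthetic TeV fluxes
# (units: s^-1 m^-2 TeV^-1), both normalized at 10 TeV.
import numpy as np
from scipy.optimize import curve_fit

E = np.array([1, 2, 4, 8, 16, 32, 64], float)             # TeV
F = 7e-10 * (E / 10) ** -2.0 * np.exp(-E / 29.0)
F *= 1 + 0.05 * np.random.default_rng(9).normal(size=E.size)      # 5% scatter

logF = np.log10(F)
plaw = lambda E, lN, g: lN - g * np.log10(E / 10)
plc  = lambda E, lN, g, Ec: lN - g * np.log10(E / 10) - (E / Ec) / np.log(10)

p1, _ = curve_fit(plaw, E, logF, p0=(-9.0, 2.5))
p2, _ = curve_fit(plc, E, logF, p0=(-9.0, 2.0, 30.0))
for name, f, p in [("power law", plaw, p1), ("PL + cutoff", plc, p2)]:
    sse = np.sum((logF - f(E, *p)) ** 2)
    print(f"{name:>12}: SSE(log) = {sse:.3e}, params = {np.round(p, 2)}")
```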
Use and interpretation of logistic regression in habitat-selection studies
Keating, Kim A.; Cherry, Steve
2004-01-01
Logistic regression is an important tool for wildlife habitat-selection studies, but the method frequently has been misapplied due to an inadequate understanding of the logistic model, its interpretation, and the influence of sampling design. To promote better use of this method, we review its application and interpretation under 3 sampling designs: random, case-control, and use-availability. Logistic regression is appropriate for habitat use-nonuse studies employing random sampling and can be used to directly model the conditional probability of use in such cases. Logistic regression also is appropriate for studies employing case-control sampling designs, but careful attention is required to interpret results correctly. Unless bias can be estimated or probability of use is small for all habitats, results of case-control studies should be interpreted as odds ratios, rather than probability of use or relative probability of use. When data are gathered under a use-availability design, logistic regression can be used to estimate approximate odds ratios if probability of use is small, at least on average. More generally, however, logistic regression is inappropriate for modeling habitat selection in use-availability studies. In particular, using logistic regression to fit the exponential model of Manly et al. (2002:100) does not guarantee maximum-likelihood estimates, valid probabilities, or valid likelihoods. We show that the resource selection function (RSF) commonly used for the exponential model is proportional to a logistic discriminant function. Thus, it may be used to rank habitats with respect to probability of use and to identify important habitat characteristics or their surrogates, but it is not guaranteed to be proportional to probability of use. Other problems associated with the exponential model also are discussed. We describe an alternative model based on Lancaster and Imbens (1996) that offers a method for estimating conditional probability of use in use-availability studies. Although promising, this model fails to converge to a unique solution in some important situations. Further work is needed to obtain a robust method that is broadly applicable to use-availability studies.
NASA Technical Reports Server (NTRS)
Fadel, G. M.
1991-01-01
The point exponential approximation method was introduced by Fadel et al. (Fadel, 1990) and tested on structural optimization problems with stress and displacement constraints. The results reported in earlier papers were promising, and the method, which consists of correcting Taylor series approximations using previous design history, is tested in this paper on optimization problems with frequency constraints. The aim of the research is to verify the robustness and speed of convergence of the two-point exponential approximation method when highly non-linear constraints are used.
Small-Scale, Local Area, and Transitional Millimeter Wave Propagation for 5G Communications
NASA Astrophysics Data System (ADS)
Rappaport, Theodore S.; MacCartney, George R.; Sun, Shu; Yan, Hangsong; Deng, Sijia
2017-12-01
This paper studies radio propagation mechanisms that impact handoffs, air interface design, beam steering, and MIMO for 5G mobile communication systems. Knife edge diffraction (KED) and a creeping wave linear model are shown to predict diffraction loss around typical building objects from 10 to 26 GHz, and human blockage measurements at 73 GHz are shown to fit a double knife-edge diffraction (DKED) model which incorporates antenna gains. Small-scale spatial fading of millimeter wave received signal voltage amplitude is generally Ricean-distributed for both omnidirectional and directional receive antenna patterns under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions in most cases, although the log-normal distribution fits measured data better for the omnidirectional receive antenna pattern in the NLOS environment. Small-scale spatial autocorrelations of received voltage amplitudes are shown to fit sinusoidal exponential and exponential functions for LOS and NLOS environments, respectively, with small decorrelation distances of 0.27 cm to 13.6 cm (smaller than the size of a handset) that are favorable for spatial multiplexing. Local area measurements using cluster and route scenarios show how the received signal changes as the mobile moves and transitions from LOS to NLOS locations, with reasonably stationary signal levels within clusters. Wideband mmWave power levels are shown to fade from 0.4 dB/ms to 40 dB/s, depending on travel speed and surroundings.
Ihlen, Espen A. F.; van Schooten, Kimberley S.; Bruijn, Sjoerd M.; Pijnappels, Mirjam; van Dieën, Jaap H.
2017-01-01
Over the last decades, various measures have been introduced to assess stability during walking. All of these measures assume that gait stability may be equated with exponential stability, where dynamic stability is quantified by a Floquet multiplier or Lyapunov exponent. These specific constructs of dynamic stability assume that the gait dynamics are time-independent and without phase transitions. In this case the temporal change in distance, d(t), between neighboring trajectories in state space is assumed to be an exponential function of time. However, results from walking models and empirical studies show that the assumptions of exponential stability break down in the vicinity of the phase transitions that are present in each step cycle. Here we apply a general non-exponential construct of gait stability, called fractional stability, which can define dynamic stability in the presence of phase transitions. Fractional stability employs the fractional indices, α and β, of the differential operator, which allow modeling of singularities in d(t) that cannot be captured by exponential stability. Fractional stability provided an improved fit of d(t) compared to exponential stability when applied to trunk accelerations during daily-life walking in community-dwelling older adults. Moreover, using multivariate empirical mode decomposition surrogates, we found that the singularities in d(t), which were well modeled by fractional stability, are created by phase-dependent modulation of gait. The new construct of fractional stability may represent a physiologically more valid concept of stability in the vicinity of phase transitions and may thus pave the way for a more unified concept of gait stability. PMID:28900400
SU-F-I-63: Relaxation Times of Lipid Resonances in NAFLD Animal Model Using Enhanced Curve Fitting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, K-H; Yoo, C-H; Lim, S-I
Purpose: The objective of this study is to evaluate the relaxation time of the methylene resonance in comparison with other lipid resonances. Methods: The examinations were performed on a 3.0T MRI scanner using a four-channel animal coil. Eight more Sprague-Dawley rats in the same baseline weight range were housed with ad libitum access to water and a high-fat (HF) diet (60% fat, 20% protein, and 20% carbohydrate). In order to avoid large blood vessels, a voxel (0.8×0.8×0.8 cm^3) was placed in a homogeneous area of the liver parenchyma during free breathing. Lipid relaxations in normal-chow (NC) and HF diet rats were estimated at a fixed repetition time (TR) of 6000 msec and multiple echo times (TEs) of 40-220 msec. All spectra were processed using the Advanced Method for Accurate, Robust, and Efficient Spectral (AMARES) fitting algorithm of the Java-based Magnetic Resonance User Interface (jMRUI) package. Results: The mean T2 relaxation time of the methylene resonance in the NC diet group was 37.1 msec (M_0, 2.9±0.5), with a standard deviation of 4.3 msec. The mean T2 relaxation time of the methylene resonance in the HF group was 31.4 msec (M_0, 3.7±0.3), with a standard deviation of 1.8 msec. The T2 relaxation times of methylene protons were higher in NC diet rats than in HF rats (p<0.05), and the extrapolated M_0 values were higher in HF rats than in NC rats (p<0.005). The excellent linear fits with R^2>0.9971 and R^2>0.9987 indicate mono-exponential T2 relaxation decay curves. Conclusion: In vivo, a sufficient spectral resolution and a sufficiently high signal-to-noise ratio (SNR) can be achieved, so that the data measured over short TE values can be extrapolated back to TE = 0 to produce better estimates of the relative weights of the spectral components. In the short term, treating the effective decay rate as exponential is an adequate approximation.
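A minimal sketch of the mono-exponential T2 estimation described above: a log-linear fit of S(TE) = M0 * exp(-TE/T2) over the multi-echo amplitudes, extrapolated back to TE = 0; the echo amplitudes are simulated.

```python
# Log-linear mono-exponential T2 fit with extrapolation to TE = 0.
import numpy as np

TE = np.array([40, 70, 100, 130, 160, 190, 220], float)   # echo times (msec)
S = 3.7 * np.exp(-TE / 31.4) * (1 + 0.02 * np.random.default_rng(10).normal(size=TE.size))

# log S = log M0 - TE / T2, so ordinary least squares suffices.
slope, intercept = np.polyfit(TE, np.log(S), 1)
T2, M0 = -1.0 / slope, np.exp(intercept)
print(f"T2 = {T2:.1f} msec, extrapolated M0 = {M0:.2f}")
```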
Growth and mortality of larval Myctophum affine (Myctophidae, Teleostei).
Namiki, C; Katsuragawa, M; Zani-Teixeira, M L
2015-04-01
The growth and mortality rates of Myctophum affine larvae were analysed based on samples collected during the austral summer and winter of 2002 from south-eastern Brazilian waters. The larvae ranged in size from 2.75 to 14.00 mm standard length (L_S). Daily increment counts from 82 sagittal otoliths showed that the age of M. affine ranged from 2 to 28 days. Three models were applied to estimate the growth rate: linear regression, an exponential model and the Laird-Gompertz model. The exponential model best fitted the data, and the L_0 values from the exponential and Laird-Gompertz models were close to the smallest larva reported in the literature (c. 2.5 mm L_S). The average growth rate (0.33 mm day^-1) was intermediate among lanternfishes. The mortality rate (12%) during the larval period was below average compared with other marine fish species but similar to some epipelagic fishes that occur in the area.
Fluorescence and afterglow of Ca2Sn2Al2O9:Mn2+
NASA Astrophysics Data System (ADS)
Takemoto, Minoru; Iseki, Takahiro
2018-03-01
By using a polymerized complex method, we synthesized manganese (Mn)-doped Ca2Sn2Al2O9, which exhibits yellow fluorescence and afterglow at room temperature when excited by UV radiation. The material emits a broad, featureless fluorescence band centered at 564 nm, which we attribute to the presence of Mn2+ ions. The afterglow decay is well fit by a power-law function, rather than an exponential function. In addition, thermoluminescence analyses demonstrate that two different types of electron traps form in this material. Based on experimental results, we conclude that the fluorescence and afterglow both result from thermally assisted tunneling, in which trapped electrons are thermally excited to higher-level traps and subsequently tunnel to recombination centers.
Coherent spin transport through a 350 micron thick silicon wafer.
Huang, Biqin; Monsma, Douwe J; Appelbaum, Ian
2007-10-26
We use all-electrical methods to inject, transport, and detect spin-polarized electrons vertically through a 350-micron-thick undoped single-crystal silicon wafer. Spin precession measurements in a perpendicular magnetic field at different accelerating electric fields reveal high spin coherence, with precession angles of at least 13π. The magnetic-field spacings of the precession extrema are used to determine the injector-to-detector electron transit time. These transit-time values are associated with output magnetocurrent changes (from in-plane spin-valve measurements), which are proportional to the final spin polarization. Fitting the results to a simple exponential spin-decay model yields a conduction-electron spin lifetime (T1) lower bound in silicon of over 500 ns at 60 K.
NASA Astrophysics Data System (ADS)
Varotsos, Costas A.; Efstathiou, Maria N.
2017-05-01
A substantial weakness of several climate studies on long-range dependence is that they conclude long-term memory of climate conditions without establishing power-law scaling or rejecting a simple exponential decay of the autocorrelation function. We herewith show one paradigmatic case in which strong long-range dependence could be wrongly inferred from incomplete data analysis. We first apply the DFA method to the solar and volcanic forcing time series over the tropical Pacific during the past 1000 years; the results show a statistically significant straight-line fit to the fluctuation function in a log-log representation, with slope higher than 0.5, which might wrongly be taken as an indication of persistent long-range correlations in the time series. We argue that long-range dependence cannot be concluded from this straight-line fit alone; it requires the fulfilment of two additional prerequisites, i.e. rejecting the exponential decay of the autocorrelation function and establishing the power-law scaling. In fact, investigating the validity of these prerequisites showed that a DFA exponent higher than 0.5 does not justify the existence of persistent long-range correlations in the temporal evolution of the solar and volcanic forcing during the last millennium. In other words, we show that empirical analyses based on these two prerequisites must not be considered a panacea for a direct proof of scaling, but only evidence that the scaling hypothesis is plausible. We also discuss the scaling behaviour of the solar and volcanic forcing data based on the Haar tool, which has recently proved its ability to reliably detect scaling in climate series.
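The prerequisite test argued for above can be made concrete: before reading a DFA slope above 0.5 as long-range dependence, compare how well the sample autocorrelation is described by an exponential decay versus a power law. A minimal sketch on a synthetic AR(1) series (which has short memory by construction); the series and lag range are illustrative:

    import numpy as np

    def acf(x, max_lag):
        x = x - x.mean()
        d = np.dot(x, x)
        return np.array([np.dot(x[:-k], x[k:]) / d for k in range(1, max_lag + 1)])

    rng = np.random.default_rng(0)
    x = np.zeros(4096)
    for i in range(1, x.size):                 # AR(1): exponentially decaying ACF
        x[i] = 0.9 * x[i - 1] + rng.normal()

    lags = np.arange(1, 21)
    rho = np.abs(acf(x, 20)) + 1e-12           # keep logarithms finite

    # Smaller residual = better fit: log rho vs lag (exponential decay)
    # versus log rho vs log lag (power law).
    exp_resid = np.polyfit(lags, np.log(rho), 1, full=True)[1][0]
    pow_resid = np.polyfit(np.log(lags), np.log(rho), 1, full=True)[1][0]
    print("exponential decay favoured" if exp_resid < pow_resid else "power law favoured")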
Stokes, Ian A. F.; Laible, Jeffrey P.; Gardner-Morse, Mack G.; Costi, John J.; Iatridis, James C.
2011-01-01
Intervertebral disks support compressive forces because of their elastic stiffness as well as the fluid pressures resulting from poroelasticity and the osmotic (swelling) effects. Analytical methods can quantify the relative contributions, but only if correct material properties are used. To identify appropriate tissue properties, an experimental study and finite element analytical simulation of poroelastic and osmotic behavior of intervertebral disks were combined to refine published values of disk and endplate properties to optimize model fit to experimental data. Experimentally, nine human intervertebral disks with adjacent hemi-vertebrae were immersed sequentially in saline baths having concentrations of 0.015, 0.15, and 1.5 M and the loss of compressive force at constant height (force relaxation) was recorded over several hours after equilibration to a 300-N compressive force. Amplitude and time constant terms in exponential force–time curve-fits for experimental and finite element analytical simulations were compared. These experiments and finite element analyses provided data dependent on poroelastic and osmotic properties of the disk tissues. The sensitivities of the model to alterations in tissue material properties were used to obtain refined values of five key material parameters. The relaxation of the force in the three bath concentrations was exponential in form, expressed as mean compressive force loss of 48.7, 55.0, and 140 N, respectively, with time constants of 1.73, 2.78, and 3.40 h. This behavior was analytically well represented by a model having poroelastic and osmotic tissue properties with published tissue properties adjusted by multiplying factors between 0.55 and 2.6. Force relaxation and time constants from the analytical simulations were most sensitive to values of fixed charge density and endplate porosity. PMID:20711754
Gotow, Naomi; Moritani, Ami; Hayakawa, Yoshinobu; Akutagawa, Akihito; Hashimoto, Hiroshi; Kobayakawa, Tatsu
2015-06-01
In order to develop products that are acceptable to consumers, it is necessary to incorporate consumers' intentions into products' characteristics. Therefore, investigation of consumers' perceptions of the taste or smell of common beverages provides information that should be useful in predicting market responses. In this study, we sought to develop a time-intensity evaluation system for consumer panels. Using our system, we performed time-intensity evaluation of flavor attributes (bitterness and retronasal aroma) that consumers perceived after swallowing a coffee beverage. Additionally, we developed quantitative evaluation methods for determining whether consumer panelists can properly perform time-intensity evaluation. In every trial, we fitted an exponential function to the measured intensity data for bitterness and retronasal aroma. The correlation coefficients between the measured time-intensity data and the fitted exponential curves were greater than 0.8 in about 90% of trials, indicating that we had successfully developed a time-intensity system usable by consumer panelists after just a single training trial. We classified participants into two groups based on their consumption of canned coffee beverages. Among conventional TI parameters analyzed by two-way ANOVA, a significant difference between sensory modalities (bitterness versus retronasal aroma) was observed only for the AUC. However, a three-way ANOVA that included the time course revealed a significant difference between bitterness and retronasal aroma in the high-consumption group. Moreover, the high-consumption group discriminated between bitterness and retronasal aroma more easily than the low-consumption group. This finding implies that manufacturers should select consumer panelists who are suited to the concepts of their new products. © 2015 Institute of Food Technologists®
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burm, A.G.; Van Kleef, J.W.; Vermeulen, N.P.
1988-10-01
The pharmacokinetics of lidocaine and bupivacaine following subarachnoid administration were studied in 12 surgical patients using a stable isotope method. After subarachnoid administration of the agent to be evaluated, a deuterium-labelled analogue was administered intravenously. Blood samples were collected for 24 h. Plasma concentrations of the unlabelled and deuterium-labelled local anesthetics were determined using a combination of capillary gas chromatography and mass fragmentography. Bi-exponential functions were fitted to the plasma concentration-time data of the deuterium-labelled local anesthetics. The progression of the absorption was evaluated using deconvolution. Mono- and bi-exponential functions were then fitted to the fraction-absorbed versus time data. The distribution and elimination half-lives of the deuterium-labelled analogues were 25 +/- 13 min (mean +/- SD) and 121 +/- 31 min for lidocaine, and 19 +/- 10 min and 131 +/- 33 min for bupivacaine. The volumes of the central compartment and steady-state volumes of distribution were: lidocaine 57 +/- 10 l and 105 +/- 25 l; bupivacaine 25 +/- 6 l and 63 +/- 22 l. Total plasma clearance values averaged 0.97 +/- 0.21 l/min for lidocaine and 0.56 +/- 0.14 l/min for bupivacaine. The absorption of lidocaine could be described by a single first-order absorption process, characterized by a half-life of 71 +/- 17 min, in five out of six patients. The absorption of bupivacaine could be described adequately by assuming two parallel first-order absorption processes in all six patients. The half-lives characterizing the fast and slow absorption processes of bupivacaine were 50 +/- 27 min and 408 +/- 275 min, respectively. The fractions of the dose absorbed in the fast and slow processes were 0.35 +/- 0.17 and 0.61 +/- 0.16, respectively.
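The bi-exponential disposition fit described above has the standard two-compartment form C(t) = A·exp(-αt) + B·exp(-βt), with half-lives ln2/α and ln2/β. A hedged sketch with synthetic concentrations (not patient data):

    import numpy as np
    from scipy.optimize import curve_fit

    def biexp(t, a, alpha, b, beta):
        return a * np.exp(-alpha * t) + b * np.exp(-beta * t)

    t = np.linspace(5, 720, 40)                               # minutes after i.v. dose
    c = biexp(t, 3.0, np.log(2) / 22, 1.2, np.log(2) / 125)   # synthetic plasma levels

    (a, alpha, b, beta), _ = curve_fit(biexp, t, c, p0=(2.0, 0.05, 1.0, 0.005))
    print(f"distribution t1/2 = {np.log(2)/alpha:.0f} min, "
          f"elimination t1/2 = {np.log(2)/beta:.0f} min")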
Short-time vibrational dynamics of metaphosphate glasses
NASA Astrophysics Data System (ADS)
Kalampounias, Angelos G.
2012-02-01
In this paper we present the picosecond vibrational dynamics of a series of binary metaphosphate glasses, namely Na2O-P2O5, MO-P2O5 (M=Ba, Sr, Ca, Mg) and Al2O3-3P2O5, by means of Raman spectroscopy. We studied vibrational dephasing and vibrational frequency modulation by calculating time correlation functions of vibrational relaxation from fits in the frequency domain. The fitting method used enables one to model real line profiles intermediate between Lorentzian and Gaussian with an analytical function that has an analytical counterpart in the time domain. The symmetric stretching modes νs(PO2-) of the PO2- entity of PØ2O2- units and νs(P-O-P) of the P-O-P bridges in metaphosphate arrangements were investigated by Raman spectroscopy and used as probes of the dynamics of these glasses. The vibrational time correlation functions of both modes are adequately interpreted by assuming an exponential modulation function in the context of Kubo-Rothschild theory and indicate that the system experiences an intermediate dynamical regime that only slows as the ionic radius of the cation-modifier increases. We found that the vibrational correlation functions of all glasses studied comply with the Rothschild approach under the assumption that the environmental modulation is described by a stretched exponential decay. The evolution of the dispersion parameter α with increasing ionic radius of the cation indicates a deviation from the simple-liquid model, pointing to a reduction of the coherence decay in the perturbation potential as a result of short-lived local aggregates. The results are discussed in the framework of the current phenomenological status of the field.
Drewniak, Elizabeth I.; Jay, Gregory D.; Fleming, Braden C.; Crisco, Joseph J.
2009-01-01
In attempts to better understand the etiology of osteoarthritis, a debilitating joint disease that results in the degeneration of articular cartilage in synovial joints, researchers have focused on joint tribology, the study of joint friction, lubrication, and wear. Several different approaches have been used to investigate the frictional properties of articular cartilage. In this study, we examined two analysis methods for calculating the coefficient of friction (μ) using a simple pendulum system and BL6 murine knee joints (n=10) as the fulcrum. A Stanton linear decay model (Lin μ) and an exponential model that accounts for viscous damping (Exp μ) were fit to the decaying pendulum oscillations. Root mean square error (RMSE), asymptotic standard error (ASE), and coefficient of variation (CV) were calculated to evaluate the fit and measurement precision of each model. This investigation demonstrated that while Lin μ was more repeatable, based on CV (5.0% for Lin μ; 18% for Exp μ), Exp μ provided a better fitting model, based on RMSE (0.165° for Exp μ; 0.391° for Lin μ) and ASE (0.033 for Exp μ; 0.185 for Lin μ), and had a significantly lower coefficient of friction value (0.022±0.007 for Exp μ; 0.042±0.016 for Lin μ) (p=0.001). This study details the use of a simple pendulum for examining cartilage properties in situ that will have applications investigating cartilage mechanics in a variety of species. The Exp μ model provided a more accurate fit to the experimental data for predicting the frictional properties of intact joints in pendulum systems. PMID:19632680
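The two decay models compared above can be sketched directly: a Stanton-type linear amplitude decay against an exponential decay with viscous damping, judged by residual error. Peak amplitudes below are simulated, and the simple RMSE comparison stands in for the paper's RMSE/ASE/CV analysis:

    import numpy as np
    from scipy.optimize import curve_fit

    cycles = np.arange(20, dtype=float)
    rng = np.random.default_rng(1)
    amp = 5.0 * np.exp(-0.12 * cycles) + 0.05 * rng.normal(size=cycles.size)

    lin = lambda n, a0, k: a0 - k * n                # Stanton-type linear decay
    expm = lambda n, a0, lam: a0 * np.exp(-lam * n)  # viscous (exponential) decay

    pl, _ = curve_fit(lin, cycles, amp)
    pe, _ = curve_fit(expm, cycles, amp, p0=(5.0, 0.1))
    rmse = lambda f, p: np.sqrt(np.mean((f(cycles, *p) - amp) ** 2))
    print(f"linear RMSE {rmse(lin, pl):.3f} vs exponential RMSE {rmse(expm, pe):.3f}")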
On the performance of exponential integrators for problems in magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas; Tokman, Mayya; Loffeld, John
2017-02-01
Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
Henríquez-Henríquez, Marcela Patricia; Billeke, Pablo; Henríquez, Hugo; Zamorano, Francisco Javier; Rothhammer, Francisco; Aboitiz, Francisco
2014-01-01
Intra-individual variability of response times (RTisv) is considered a potential endophenotype for attention deficit/hyperactivity disorder (ADHD). Traditional methods for estimating RTisv lose information regarding the distribution of response times (RTs) along the task, with eventual effects on statistical power. Ex-Gaussian analysis captures the dynamic nature of RTisv, estimating normal and exponential components of the RT distribution with specific phenomenological correlates. Here, we applied ex-Gaussian analysis to explore whether intra-individual variability of RTs meets the criteria proposed by Gottesman and Gould for endophenotypes. Specifically, we evaluated whether the normal and/or exponential components of RTs may (a) present the stair-like distribution expected for endophenotypes (ADHD > siblings > typically developing (TD) children without a family history of ADHD) and (b) represent a phenotypic correlate of previously described genetic risk variants. This is a pilot study including 55 subjects (20 ADHD-discordant sibling pairs and 15 TD children), all aged between 8 and 13 years. Participants performed a visual Go/Nogo task with 10% Nogo probability. Ex-Gaussian distributions were fitted to individual RT data and compared among the three samples. In order to test whether intra-individual variability may represent a correlate of previously described genetic risk variants, VNTRs at DRD4 and SLC6A3 were genotyped in all sibling pairs following standard protocols. Groups were compared by fitting independent general linear models for the exponential and normal components of the ex-Gaussian analysis. Identified trends were confirmed by the non-parametric Jonckheere-Terpstra test. Stair-like distributions were observed for μ (p = 0.036) and σ (p = 0.009). An additional "DRD4 genotype" × "clinical status" interaction was present for τ (p = 0.014), reflecting a possible severity factor. Thus, the normal and exponential RTisv components are suitable as ADHD endophenotypes.
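For readers unfamiliar with the ex-Gaussian decomposition used above: a response time is modelled as the sum of a Normal(μ, σ) and an Exponential(τ) component. A minimal sketch, assuming scipy, whose exponnorm distribution is parameterized by K = τ/σ; the simulated RTs are placeholders, not study data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    rt = rng.normal(350, 40, 500) + rng.exponential(120, 500)  # simulated RTs (ms)

    K, loc, scale = stats.exponnorm.fit(rt)   # maximum-likelihood fit
    mu, sigma, tau = loc, scale, K * scale    # recover the three components
    print(f"mu = {mu:.0f} ms, sigma = {sigma:.0f} ms, tau = {tau:.0f} ms")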
Optical study of HgCdTe infrared photodetectors using internal photoemission spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lao, Yan-Feng; Unil Perera, A. G., E-mail: uperera@gsu.edu; Wijewarnasuriya, Priyalal S.
2014-03-31
We report a study of internal photoemission spectroscopy (IPE) applied to an n-type Hg{sub 1−x}Cd{sub x}Te/Hg{sub 1−y}Cd{sub y}Te heterojunction. An exponential line shape of the absorption tail in HgCdTe is identified by IPE fittings of the near-threshold quantum yield spectra. The reduction of the quantum yield (at higher photon energies) below the fitted value is explained as a result of carrier-phonon scattering. In addition, the observed bias independence of the IPE threshold indicates a negligible electron barrier at the heterojunction interface.
Development of a winter wheat adjustable crop calendar model
NASA Technical Reports Server (NTRS)
Baker, J. R. (Principal Investigator)
1978-01-01
The author has identified the following significant results. After parameter estimation, tests were conducted with variances from the fits, and on independent data. From these tests, it was generally concluded that exponential functions have little advantage over polynomials. Precipitation was not found to significantly affect the fits. The Robertson's triquadratic form, in general use for spring wheat, was found to show promise for winter wheat, but special techniques and care were required for its use. In most instances, equations with nonlinear effects were found to yield erratic results when utilized with daily environmental values as independent variables.
2008-08-01
the distribution of DNAPL. The OSU research team evaluated the use of radon as a partitioning groundwater tracer. The DNAPL release fulfilled one ... close to the source area generated more PCE equivalent mass over time. The exponential decay from the fitted line (predicted PCE) ...
Spatial correlations and exact solution of the problem of the boson peak profile in amorphous media
NASA Astrophysics Data System (ADS)
Kirillov, Sviatoslav A.; Voyiatzis, George A.; Kolomiyets, Tatiana M.; Anastasiadis, Spiros H.
1999-11-01
Based on a model correlation function that covers spatial correlations from Gaussian to exponential, we arrive at an exact analytic solution of the problem of the boson peak profile in amorphous media. Trial fits for polyisoprene and triacetin demonstrate the practical applicability of the formulae obtained.
Decomposition rates for hand-piled fuels
Clinton S. Wright; Alexander M. Evans; Joseph C. Restaino
2017-01-01
Hand-constructed piles in eastern Washington and north-central New Mexico were weighed periodically between October 2011 and June 2015 to develop decay-rate constants that are useful for estimating the rate of piled biomass loss over time. Decay-rate constants (k) were determined by fitting negative exponential curves to time series of pile weight for each site. Piles...
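The negative exponential decay model named above, W(t) = W0·exp(-kt), makes k recoverable from a linear fit of log weight against time. A minimal sketch with invented pile weights:

    import numpy as np

    years = np.array([0.0, 0.5, 1.2, 2.0, 3.1, 3.7])     # time since piling (yr)
    weight = np.array([100., 91., 80., 69., 56., 50.])   # illustrative pile mass (kg)

    k = -np.polyfit(years, np.log(weight), 1)[0]         # slope of log-weight vs time
    print(f"decay-rate constant k = {k:.3f} per year")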
Smoothing Forecasting Methods for Academic Library Circulations: An Evaluation and Recommendation.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Forys, John W., Jr.
1986-01-01
Circulation time-series data from 50 midwest academic libraries were used to test 110 variants of 8 smoothing forecasting methods. Data and methodologies and illustrations of two recommended methods--the single exponential smoothing method and Brown's one-parameter linear exponential smoothing method--are given. Eight references are cited. (EJS)
Rainfall continuous time stochastic simulation for a wet climate in the Cantabric Coast
NASA Astrophysics Data System (ADS)
Rebole, Juan P.; Lopez, Jose J.; Garcia-Guzman, Adela
2010-05-01
Rain is the result of a series of complex atmospheric processes influenced by numerous factors. This complexity makes simulation from a physical basis practically unfeasible and advises the use of stochastic schemes. These schemes, based on observed characteristics (Todorovic and Woolhiser, 1975), allow the introduction of alternating renewal processes that account for the occurrence of rainfall over different time lapses (Markov chains are a particular case, in which the lapses can be described by exponential distributions). A sequential rainfall process can thus be defined as a temporal series in which rainfall events (periods in which rainfall is recorded) alternate with non-rain events (periods in which no rainfall is recorded). The variables of a temporal rain sequence (duration of the rainfall event, duration of the non-rainfall event, average rain intensity in the rain event, and the temporal distribution of the amount of rain within the rain event) were characterized for a wet climate such as that of the coastal area of Guipúzcoa. The study was performed on two series recorded at the meteorological stations of Igueldo-San Sebastián and Fuenterrabia/Airport (data every ten minutes, and their hourly aggregation). As a result of this work, the variables were satisfactorily fitted by the following distribution functions: the duration of the rain event by an exponential function; the duration of the dry event by a truncated mixed exponential distribution; the average intensity by a Weibull distribution; and the distribution of the fallen rain by a Beta distribution. The characterization was made for the hourly aggregation of the recorded ten-minute interval. The parameters of the fitting functions were better estimated by the maximum likelihood method than by the method of moments. The parameters obtained from the characterization were used to develop a stochastic rainfall simulation model based on a three-state Markov chain (Hutchinson, 1990), performed on an hourly basis by García-Guzmán (1993) and Castro et al. (1997, 2005). Simulation results were valid in the hourly case for all four described variables, with all variables simulated slightly better in Fuenterrabia than in Igueldo. The Fuenterrabia series is shorter, with longer sequences without missing data, than the Igueldo series; Igueldo shows a higher number of missing-data events, although their mean duration is longer in Fuenterrabia.
Interpreting the Weibull fitting parameters for diffusion-controlled release data
NASA Astrophysics Data System (ADS)
Ignacio, Maxime; Chubynsky, Mykyta V.; Slater, Gary W.
2017-11-01
We examine the diffusion-controlled release of molecules from passive delivery systems using both analytical solutions of the diffusion equation and numerically exact Lattice Monte Carlo data. For very short times, the release process follows a √t power law, typical of diffusion processes, while the long-time asymptotic behavior is exponential. The crossover time between these two regimes is determined by the boundary conditions and initial loading of the system. We show that while the widely used Weibull function provides a reasonable fit (in terms of statistical error), it has two major drawbacks: (i) it does not capture the correct limits and (ii) there is no direct connection between the fitting parameters and the properties of the system. Using a physically motivated interpolating fitting function that correctly includes both time regimes, we are able to predict the values of the Weibull parameters, which allows us to propose a physical interpretation.
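For reference, the Weibull function discussed above is usually written for fractional release as M(t)/M∞ = 1 − exp(−(t/τ)^β). A minimal fitting sketch with synthetic release data; as the authors note, the recovered (τ, β) have no direct physical reading:

    import numpy as np
    from scipy.optimize import curve_fit

    def weibull(t, tau, beta):
        # fractional release: M(t)/M_inf = 1 - exp(-(t/tau)**beta)
        return 1.0 - np.exp(-(t / tau) ** beta)

    t = np.linspace(0.05, 10, 60)
    release = weibull(t, 2.5, 0.8)                 # stand-in for measured fractions

    (tau, beta), _ = curve_fit(weibull, t, release, p0=(1.0, 1.0))
    print(f"tau = {tau:.2f}, beta = {beta:.2f}")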
Solutions for transients in arbitrarily branching cables: III. Voltage clamp problems.
Major, G
1993-07-01
Branched cable voltage recording and voltage clamp analytical solutions derived in two previous papers are used to explore practical issues concerning voltage clamp. Single exponentials can be fitted reasonably well to the decay phase of clamped synaptic currents, although they contain many underlying components. The effective time constant depends on the fit interval. The smoothing effects on synaptic clamp currents of dendritic cables and series resistance are explored with a single cylinder + soma model, for inputs with different time courses. "Soma" and "cable" charging currents cannot be separated easily when the soma is much smaller than the dendrites. Subtractive soma capacitance compensation and series resistance compensation are discussed. In a hippocampal CA1 pyramidal neurone model, voltage control at most dendritic sites is extremely poor. Parameter dependencies are illustrated. The effects of series resistance compound those of dendritic cables and depend on the "effective capacitance" of the cell. Plausible combinations of parameters can cause order-of-magnitude distortions to clamp current waveform measures of simulated Schaeffer collateral inputs. These voltage clamp problems are unlikely to be solved by the use of switch clamp methods.
NASA Astrophysics Data System (ADS)
Tchakoua, Théophile; Nkot Nkot, Pierre René; Fifen, Jean Jules; Nsangou, Mama; Motapon, Ousmanou
2018-06-01
We present the first potential energy surface (PES) for the AlO(X2Σ+)-He(1S) van der Waals complex. This PES has been calculated at the RCCSD(T) level of theory, employing the mixed Gaussian/exponential extrapolation scheme to the complete basis set limit [CBS(D,T,Q)]. The PES was fitted using a global analytical method. The fitted PES was subsequently used in the close-coupling approach to compute the state-to-state collisional excitation cross sections of the fine-structure levels of the AlO-He complex. Collision energies up to 2500 cm-1 were considered, yielding, after thermal averaging, state-to-state rate coefficients up to 300 K. The propensity rules between the lowest fine-structure levels were studied. These rules show, on the one hand, a strong propensity in favour of odd ΔN transitions and, on the other hand, that cross sections and collisional rate coefficients for Δj = ΔN transitions are larger than those for Δj ≠ ΔN transitions.
NASA Astrophysics Data System (ADS)
D'Onofrio, M.
2001-10-01
In this paper we analyse the results of the two-dimensional (2D) fit of the light distribution of 73 early-type galaxies belonging to the Virgo and Fornax clusters, a sample volume- and magnitude-limited down to MB=-17.3, and highly homogeneous. In our previous paper (Paper I) we have presented the adopted 2D models of the surface-brightness distribution - namely the r1/n and (r1/n+exp) models - we have discussed the main sources of error affecting the structural parameters, and we have tested the ability of the chosen minimization algorithm (MINUIT) in determining the fitting parameters using a sample of artificial galaxies. We show that, with the exception of 11 low-luminosity E galaxies, the best fit of the real galaxy sample is always achieved with the two-component (r1/n+exp) model. The improvement in the χ2 due to the addition of the exponential component is found to be statistically significant. The best fit is obtained with the exponent n of the generalized r1/n Sersic law different from the classical de Vaucouleurs value of 4. Nearly 42 per cent of the sample have n<2, suggesting the presence of exponential `bulges' also in early-type galaxies. 20 luminous E galaxies are fitted by the two-component model, with a small central exponential structure (`disc') and an outer big spheroid with n>4. We believe that this is probably due to their resolved core. The resulting scalelengths Rh and Re of each component peak approximately at ~1 and ~2 kpc, respectively, although with different variances in their distributions. The ratio Re/Rh peaks at ~0.5, a value typical for normal lenticular galaxies. The first component, represented by the r1/n law, is probably made of two distinct families, `ordinary' and `bright', on the basis of their distribution in the μe-log(Re) plane, a result already suggested by Capaccioli, Caon and D'Onofrio. The bulges of spirals and S0 galaxies belong to the `ordinary' family, while the large spheroids of luminous E galaxies form the `bright' family. The second component, represented by the exponential law, also shows a wide distribution in the μ0c-log(Rh) plane. Small discs (or cores) have short scalelengths and high central surface brightness, while normal lenticulars and spiral galaxies generally have scalelengths higher than 0.5 kpc and central surface brightness brighter than 20 mag arcsec-2 (in the B band). The scalelengths Re and Rh of the `bulge' and `disc' components are probably correlated, indicating that a self-regulating mechanism of galaxy formation may be at work. Alternatively, two regions of the Re-Rh plane are avoided by galaxies due to dynamical instability effects. The bulge-to-disc (B/D) ratio seems to vary uniformly along the Hubble sequence, going from late-type spirals to E galaxies. At the end of the sequence the ratio between the large spheroidal component and the small inner core can reach B/D~100.
An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle
NASA Astrophysics Data System (ADS)
Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei
2016-08-01
We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicate that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we point out the constraint conditions for this distribution form and discuss how the constraints affect the distribution function. We speculate that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, the distribution cannot be a pure power law but must carry an exponential cutoff, a point that may have been ignored in previous studies.
Quantification technology study on flaws in steam-filled pipelines based on image processing
NASA Astrophysics Data System (ADS)
Sun, Lina; Yuan, Peixin
2009-07-01
Starting from the development of an applied detection system for gas transmission pipelines, a set of X-ray image processing methods and pipeline flaw quantification methods are proposed. Defective and non-defective rows and columns were extracted from the gray image and an oscillogram was obtained. Defects can be distinguished by dividing the two gray images for contrast. From the gray values of defects of different thicknesses, a gray-level versus depth curve is constructed. Exponential and polynomial fitting are used to obtain the mathematical model of beam attenuation through the pipeline, and thus the flaw depth dimension. Tests were performed on PPR pipe with simulated hole and crack flaws, with the X-ray source operated at 135 kV. Test results show that these X-ray image processing methods, which meet the needs of efficient flaw detection and provide quality safeguards for heavy oil recovery, can be used successfully to detect corrosion in insulated pipe.
Quantification technology study on flaws in steam-filled pipelines based on image processing
NASA Astrophysics Data System (ADS)
Yuan, Pei-xin; Cong, Jia-hui; Chen, Bo
2008-03-01
Starting from the development of an applied detection system for gas transmission pipelines, a set of X-ray image processing methods and pipeline flaw quantification methods are proposed. Defective and non-defective rows and columns were extracted from the gray image and an oscillogram was obtained. Defects can be distinguished by dividing the two gray images for contrast. From the gray values of defects of different thicknesses, a gray-level versus depth curve is constructed. Exponential and polynomial fitting are used to obtain the mathematical model of beam attenuation through the pipeline, and thus the flaw depth dimension. Tests were performed on PPR pipe with simulated hole and crack flaws; the X-ray source tube voltage was 130 kV and the tube current was 1.5 mA. Test results show that these X-ray image processing methods, which meet the needs of efficient flaw detection and provide quality safeguards for heavy oil recovery, can be used successfully to detect corrosion in insulated pipe.
A Novel Method for Measuring Electrical Conductivity of High Insulating Oil Using Charge Decay
NASA Astrophysics Data System (ADS)
Wang, Z. Q.; Qi, P.; Wang, D. S.; Wang, Y. D.; Zhou, W.
2016-05-01
For high insulating oil, it is difficult to measure the conductivity precisely using the voltammetry method. A high-precision measurement is proposed for the bulk electrical conductivity of high insulating oils (about 10^-9 to 10^-15 S/m) using charge decay. The oil is first insulated and charged, and then fully grounded. During the experimental procedure, the charge decay is observed to follow an exponential law in accordance with ohmic theory. The time dependence of the charge density is automatically recorded using an ADAS and a computer. The relaxation time constant is fitted from the data using the Gnuplot software, and the electrical conductivity is calculated from the relaxation time constant and the dielectric permittivity. Since the charge density is difficult to measure directly, the electric potential is used as a substitute. The conductivity of five kinds of oils was measured; with this method, the conductivity of diesel oil was easily measured to be as low as 0.961 pS/m.
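The measurement principle above reduces to fitting V(t) = V0·exp(−t/τ) to the recorded potential decay and converting the relaxation time to conductivity via σ = ε0·εr/τ. A sketch with invented numbers; εr = 2.1 is an assumed oil permittivity, not a value from the paper:

    import numpy as np
    from scipy.optimize import curve_fit

    EPS0 = 8.854e-12                         # vacuum permittivity (F/m)
    eps_r = 2.1                              # assumed relative permittivity of the oil

    t = np.linspace(0, 60, 30)               # seconds after grounding
    v = 1000.0 * np.exp(-t / 19.3)           # synthetic potential decay

    decay = lambda t, v0, tau: v0 * np.exp(-t / tau)
    (v0, tau), _ = curve_fit(decay, t, v, p0=(900.0, 10.0))
    print(f"tau = {tau:.1f} s, sigma = {EPS0 * eps_r / tau:.2e} S/m")

With these illustrative numbers the result comes out near the 0.961 pS/m quoted for diesel oil, which shows the scale of the quantities involved rather than reproducing the measurement.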
The Population Tracking Model: A Simple, Scalable Statistical Model for Neural Population Data
O'Donnell, Cian; Gonçalves, J. Tiago; Whiteley, Nick; Portera-Cailliau, Carlos; Sejnowski, Terrence J.
2017-01-01
Our understanding of neural population coding has been limited by a lack of analysis methods to characterize spiking data from large populations. The biggest challenge comes from the fact that the number of possible network activity patterns scales exponentially with the number of neurons recorded (∼2^Neurons). Here we introduce a new statistical method for characterizing neural population activity that requires semi-independent fitting of only as many parameters as the square of the number of neurons, requiring drastically smaller data sets and minimal computation time. The model works by matching the population rate (the number of neurons synchronously active) and the probability that each individual neuron fires given the population rate. We found that this model can accurately fit synthetic data from up to 1000 neurons. We also found that the model could rapidly decode visual stimuli from neural population data from macaque primary visual cortex about 65 ms after stimulus onset. Finally, we used the model to estimate the entropy of neural population activity in developing mouse somatosensory cortex and, surprisingly, found that it first increases, and then decreases during development. This statistical model opens new options for interrogating neural population data and can bolster the use of modern large-scale in vivo Ca2+ and voltage imaging tools. PMID:27870612
Bateson, Thomas F; Kopylev, Leonid
2015-01-01
Recent meta-analyses of occupational epidemiology studies identified two important exposure data quality factors in predicting summary effect measures for asbestos-associated lung cancer mortality risk: sufficiency of job history data and percent coverage of work history by measured exposures. The objective was to evaluate different exposure parameterizations suggested in the asbestos literature using the Libby, MT asbestos worker cohort and to evaluate influences of exposure measurement error caused by historically estimated exposure data on lung cancer risks. Focusing on workers hired after 1959, when job histories were well-known and occupational exposures were predominantly based on measured exposures (85% coverage), we found that cumulative exposure alone, and with allowance of exponential decay, fit lung cancer mortality data similarly. Residence-time-weighted metrics did not fit well. Compared with previous analyses based on the whole cohort of Libby workers hired after 1935, when job histories were less well-known and exposures less frequently measured (47% coverage), our analyses based on higher quality exposure data yielded an effect size as much as 3.6 times higher. Future occupational cohort studies should continue to refine retrospective exposure assessment methods, consider multiple exposure metrics, and explore new methods of maintaining statistical power while minimizing exposure measurement error.
On the Existence of Step-To-Step Breakpoint Transitions in Accelerated Sprinting
McGhie, David; Danielsen, Jørgen; Sandbakk, Øyvind; Haugen, Thomas
2016-01-01
Accelerated running is characterised by a continuous change of kinematics from one step to the next. It has been argued that breakpoints in the step-to-step transitions may occur, and that these breakpoints are an essential characteristic of the dynamics of accelerated running. We examined this notion by comparing a continuous exponential curve fit (indicating continuity, i.e., smooth transitions) with linear piecewise fitting (indicating a breakpoint). We recorded the kinematics of 24 well-trained sprinters during a 25 m sprint run with a start from competition starting blocks. Kinematic data were collected for 24 anatomical landmarks in 3D, and the location of the centre of mass (CoM) was calculated from this data set. The step-to-step development of seven variables (four related to CoM position, plus ground contact time, aerial time and step length) was analysed by curve fitting. In most individual sprints (41 sprints were successfully recorded in total) no breakpoints were identified for the variables investigated. However, for the mean results (i.e., the mean curve for all athletes) breakpoints were identified for the development of vertical CoM position, angle of acceleration and distance between support surface and CoM. It must be noted that for these variables the exponential fit showed high correlations (r2>0.99). No relationship was found between the occurrences of breakpoints for different variables, as investigated using odds ratios (Mantel-Haenszel Chi-square statistic). It is concluded that although breakpoints regularly appear during accelerated running, they are not the rule and are thereby unlikely to be a fundamental characteristic, but more likely an expression of imperfect performance. PMID:27467387
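The model comparison described above can be sketched as fitting a continuous exponential curve and a two-segment linear (breakpoint) fit to the same step series and comparing residuals. The step variable below is synthetic, and the unconstrained two-segment fit is a simplification of the paper's piecewise procedure:

    import numpy as np
    from scipy.optimize import curve_fit

    steps = np.arange(1, 15, dtype=float)
    y = 1.0 - np.exp(-steps / 4.0)              # e.g. a normalized CoM variable

    expf = lambda n, a, b: a * (1.0 - np.exp(-n / b))
    (pa, pb), _ = curve_fit(expf, steps, y, p0=(1.0, 3.0))

    def piecewise_rss(bp):
        # residual sum of squares of two independent linear fits split at bp
        left = np.polyfit(steps[steps <= bp], y[steps <= bp], 1, full=True)[1][0]
        right = np.polyfit(steps[steps > bp], y[steps > bp], 1, full=True)[1][0]
        return left + right

    best_bp = min(range(3, 12), key=piecewise_rss)
    rss_exp = np.sum((expf(steps, pa, pb) - y) ** 2)
    print(f"exponential RSS {rss_exp:.5f}; "
          f"piecewise RSS {piecewise_rss(best_bp):.5f} at breakpoint {best_bp}")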
Bannon, Catherine C; Campbell, Douglas A
2017-01-01
Diatoms are marine primary producers that sink in part due to the density of their silica frustules. Sinking of these phytoplankters is crucial for both the biological pump that sequesters carbon to the deep ocean and for the life strategy of the organism. Sinking rates have been previously measured through settling columns, or with fluorimeters or video microscopy arranged perpendicularly to the direction of sinking. These side-view techniques require large volumes of culture, specialized equipment and are difficult to scale up to multiple simultaneous measures for screening. We established a method for parallel, large scale analysis of multiple phytoplankton sinking rates through top-view monitoring of chlorophyll a fluorescence in microtitre well plates. We verified the method through experimental analysis of known factors that influence sinking rates, including exponential versus stationary growth phase in species of different cell sizes; Thalassiosira pseudonana CCMP1335, chain-forming Skeletonema marinoi RO5A and Coscinodiscus radiatus CCMP312. We fit decay curves to an algebraic transform of the decrease in fluorescence signal as cells sank away from the fluorometer detector, and then used minimal mechanistic assumptions to extract a sinking rate (m d-1) using an RStudio script, SinkWORX. We thereby detected significant differences in sinking rates as larger diatom cells sank faster than smaller cells, and cultures in stationary phase sank faster than those in exponential phase. Our sinking rate estimates accord well with literature values from previously established methods. This well plate-based method can operate as a high throughput integrative phenotypic screen for factors that influence sinking rates including macromolecular allocations, nutrient availability or uptake rates, chain-length or cell size, degree of silification and progression through growth stages. Alternately the approach can be used to phenomically screen libraries of mutants.
Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times
NASA Astrophysics Data System (ADS)
Fa, Kwok Sau
2012-09-01
In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power-law and stretched exponential functions for the waiting time probability distribution function; this function fits the waiting time distribution for BUND futures traded at LIFFE in 1997 well.
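The waiting-time density named above has, in outline, the form ψ(t) ∝ t^(−α)·exp(−(t/τ)^β). A hedged fitting sketch; the "observed" values are generated from the model itself rather than from the LIFFE data:

    import numpy as np
    from scipy.optimize import curve_fit

    def psi(t, a, alpha, tau, beta):
        # product of a power law and a stretched exponential
        return a * t ** (-alpha) * np.exp(-(t / tau) ** beta)

    t = np.logspace(-1, 2, 50)                      # waiting times (s)
    obs = psi(t, 1.0, 0.4, 20.0, 0.7)               # stand-in for an empirical histogram

    p, _ = curve_fit(psi, t, obs, p0=(1.0, 0.5, 10.0, 1.0), bounds=(0, np.inf))
    print("alpha = %.2f, tau = %.1f, beta = %.2f" % (p[1], p[2], p[3]))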
Boatwright, J.; Bundock, H.; Luetgert, J.; Seekins, L.; Gee, L.; Lombard, P.
2003-01-01
We analyze peak ground velocity (PGV) and peak ground acceleration (PGA) data from 95 moderate (3.5 ≤ M ≤ 5.5) earthquakes. For distances beyond 100 km, the peak motions attenuate more rapidly than a simple power law (that is, r^-γ) can fit. Instead, we use an attenuation function that combines a fixed power law (r^-0.7) with a fitted exponential dependence on distance, estimated as exp(-0.0063r) and exp(-0.0073r) for PGV and PGA, respectively, for moderate earthquakes. We regress log(PGV) and log(PGA) as functions of distance and magnitude. We assume that the scaling of log(PGV) and log(PGA) with magnitude can differ for moderate and large earthquakes, but must be continuous. Because the frequencies that carry PGV and PGA can vary with earthquake size for large earthquakes, the regression for large earthquakes incorporates a magnitude dependence in the exponential attenuation function. We fix the scaling break between moderate and large earthquakes at M 5.5; log(PGV) and log(PGA) scale as 1.06M and 1.00M, respectively, for moderate earthquakes and as 0.58M and 0.31M for large earthquakes.
Empirical verification of evolutionary theories of aging.
Kyryakov, Pavlo; Gomez-Perez, Alejandra; Glebov, Anastasia; Asbah, Nimara; Bruno, Luigi; Meunier, Carolynne; Iouk, Tatiana; Titorenko, Vladimir I
2016-10-25
We recently selected 3 long-lived mutant strains of Saccharomyces cerevisiae by a lasting exposure to exogenous lithocholic acid. Each mutant strain can maintain the extended chronological lifespan after numerous passages in medium without lithocholic acid. In this study, we used these long-lived yeast mutants for empirical verification of evolutionary theories of aging. We provide evidence that the dominant polygenic trait extending longevity of each of these mutants 1) does not affect such key features of early-life fitness as the exponential growth rate, efficacy of post-exponential growth and fecundity; and 2) enhances such features of early-life fitness as susceptibility to chronic exogenous stresses, and the resistance to apoptotic and liponecrotic forms of programmed cell death. These findings validate evolutionary theories of programmed aging. We also demonstrate that under laboratory conditions that imitate the process of natural selection within an ecosystem, each of these long-lived mutant strains is forced out of the ecosystem by the parental wild-type strain exhibiting shorter lifespan. We therefore concluded that yeast cells have evolved some mechanisms for limiting their lifespan upon reaching a certain chronological age. These mechanisms drive the evolution of yeast longevity towards maintaining a finite yeast chronological lifespan within ecosystems.
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
This paper describes an accurate, economical method for generating approximations to the kernel of the integral equation relating unsteady pressure to normalwash in nonplanar flow. The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential approximations and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. Coefficients for 8-, 12-, 24-, and 72-term approximations are tabulated in the report. Also, since the method is automated, it can be used to generate approximations that attain any desired trade-off between accuracy and computing cost.
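The core of the scheme described above becomes linear once the exponents are pinned to a geometric sequence: choose b_k = b0·g^k and solve for the coefficients a_k of f(x) ≈ Σ a_k·exp(−b_k·x) by least squares. A minimal sketch; the target function and all constants are illustrative, and the paper's additional least-squares tuning of the exponent multiplier is omitted:

    import numpy as np

    def fit_exp_sum(x, y, n_terms=8, b0=0.1, growth=2.0):
        b = b0 * growth ** np.arange(n_terms)     # geometric exponent spacing
        basis = np.exp(-np.outer(x, b))           # design matrix of exponentials
        a, *_ = np.linalg.lstsq(basis, y, rcond=None)
        return a, b

    x = np.linspace(0.01, 10.0, 200)
    y = 1.0 / np.sqrt(1.0 + x ** 2)               # an algebraic, kernel-like factor

    a, b = fit_exp_sum(x, y)
    err = np.max(np.abs(np.exp(-np.outer(x, b)) @ a - y))
    print(f"max fit error = {err:.2e}")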
Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.
Cobbs, Gary
2012-08-16
Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes that annealing of complementary target strands and annealing of target and primers are both reversible reactions that reach a dynamic equilibrium. Model 2 assumes that all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except possibly for different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate-constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fits to observed qPCR data than other kinetic models in the literature. They also give better estimates of initial target concentration. Model 1 was found to be slightly more robust than Model 2, giving better estimates of initial target concentration when parameters were estimated from qPCR curves with very different initial target concentrations. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available. It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.
Method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1972-01-01
Two computer programs, developed according to two general types of exponential models for conducting nonlinear exponential regression analysis, are described. A least-squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. The programs are written in FORTRAN 5 for the Univac 1108 computer.
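The linearization mentioned above is the Gauss-Newton idea: expand y ≈ a·exp(bx) in a Taylor series about the current (a, b) and iterate linear least squares on the residuals. A minimal Python sketch of that procedure (not the original FORTRAN programs); convergence assumes a reasonable starting guess:

    import numpy as np

    def fit_exp(x, y, a=2.0, b=-0.5, iters=20):
        for _ in range(iters):
            f = a * np.exp(b * x)
            # Jacobian columns: df/da and df/db at the current estimate
            J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
            da, db = np.linalg.lstsq(J, y - f, rcond=None)[0]
            a, b = a + da, b + db
        return a, b

    x = np.linspace(0.0, 4.0, 30)
    y = 2.5 * np.exp(-0.8 * x)
    print(fit_exp(x, y))    # recovers approximately (2.5, -0.8)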
δ-exceedance records and random adaptive walks
NASA Astrophysics Data System (ADS)
Park, Su-Chan; Krug, Joachim
2016-08-01
We study a modified record process where the kth record in a series of independent and identically distributed random variables is defined recursively through the condition Y_k > Y_{k-1} - δ_{k-1}, with a deterministic sequence δ_k > 0 called the handicap. For constant δ_k ≡ δ and exponentially distributed random variables it has been shown in previous work that the process displays a phase transition as a function of δ between a normal phase, where the mean record value increases indefinitely, and a stationary phase, where the mean record value remains bounded and a finite fraction of all entries are records (Park et al 2015 Phys. Rev. E 91 042707). Here we explore the behavior for general probability distributions and for decreasing and increasing sequences δ_k, focusing in particular on the case when δ_k matches the typical spacing between subsequent records in the underlying simple record process without handicap. We find that a continuous phase transition occurs only in the exponential case, but a novel kind of first-order transition emerges when δ_k is increasing. The problem is partly motivated by the dynamics of evolutionary adaptation in biological fitness landscapes, where δ_k corresponds to the change of the deterministic fitness component after k mutational steps. The results for the record process are used to compute the mean number of steps that a population performs in such a landscape before being trapped at a local fitness maximum.
Bælum, Jacob; Prestat, Emmanuel; David, Maude M.; Strobel, Bjarne W.
2012-01-01
Mineralization potentials, rates, and kinetics of the three phenoxy acid (PA) herbicides, 2,4-dichlorophenoxyacetic acid (2,4-D), 4-chloro-2-methylphenoxyacetic acid (MCPA), and 2-(4-chloro-2-methylphenoxy)propanoic acid (MCPP), were investigated and compared in 15 soils collected from five continents. The mineralization patterns were fitted by zero/linear or exponential growth forms of the three-half-order models and by logarithmic (log), first-order, or zero-order kinetic models. Prior and subsequent to the mineralization event, tfdA genes were quantified using real-time PCR to estimate the genetic potential for degrading PA in the soils. In 25 of the 45 mineralization scenarios, ∼60% mineralization was observed within 118 days. Elevated concentrations of tfdA in the range 1 × 105 to 5 × 107 gene copies g−1 of soil were observed in soils where mineralization could be described by using growth-linked kinetic models. A clear trend was observed that the mineralization rates of the three PAs occurred in the order 2,4-D > MCPA > MCPP, and a correlation was observed between rapid mineralization and soils exposed to PA previously. Finally, for 2,4-D mineralization, all seven mineralization patterns which were best fitted by the exponential model yielded a higher tfdA gene potential after mineralization had occurred than the three mineralization patterns best fitted by the Lin model. PMID:22635998
NASA Astrophysics Data System (ADS)
Li, Jingnan; Wang, Shangxu; Yang, Dengfeng; Tang, Genyang; Chen, Yangkang
2018-02-01
Seismic waves propagating in the subsurface suffer from attenuation, which can be represented by the quality factor Q. Knowledge of Q plays a vital role in hydrocarbon exploration. Many methods to measure Q have been proposed, among which the central frequency shift (CFS) and the peak frequency shift (PFS) are commonly used. However, both methods assume a particular shape for the amplitude spectrum, which causes systematic error in Q estimation. Recently, a new method to estimate Q has been proposed to overcome this disadvantage by using a frequency weighted exponential (FWE) function to fit amplitude spectra of different shapes. In the FWE method, a key procedure is to calculate the central frequency and variance of the amplitude spectrum. However, the amplitude spectrum is susceptible to noise, whereas the power spectrum is less sensitive to random noise and has better anti-noise performance. To enhance the robustness of the FWE method, we propose a novel hybrid method, called the improved FWE method (IFWE), that combines the advantage of the FWE method with the power spectrum. The basic idea is to consider the attenuation of the power spectrum instead of the amplitude spectrum and to use a modified FWE function to fit power spectra, from which we derive a new Q estimation formula. Tests on noisy synthetic data show that the IFWE is more robust than the FWE. Moreover, the frequency bandwidth selection in the IFWE can be more flexible than in the FWE. The application to field vertical seismic profile data and surface seismic data further demonstrates its validity.
NASA Technical Reports Server (NTRS)
Chamberlain, D. M.; Elliot, J. L.
1997-01-01
We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^-4 of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10^-4, with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.
NASA Astrophysics Data System (ADS)
Clarage, James Braun, II
1990-01-01
Methods have been developed for analyzing the diffuse x-ray scattering in the halos about a crystal's Bragg reflections as a means of determining correlations in atomic displacements in protein crystals. The diffuse intensity distribution for rhombohedral insulin, tetragonal lysozyme, and triclinic lysozyme crystals was best simulated in terms of exponential displacement correlation functions. About 90% of the disorder can be accounted for by internal movements correlated with a decay distance of about 6 Å; the remaining 10% corresponds to intermolecular movements that decay in a distance of the order of the size of the protein molecule. The results demonstrate that protein crystals fit into neither the Einstein nor the Debye paradigms for thermally fluctuating crystalline solids. Unlike the Einstein model, there are correlations in the atomic displacements, but these correlations decay more steeply with distance than predicted by the Debye-Waller model for an elastic solid. The observed displacement correlations are liquid-like in the sense that they decay exponentially with the distance between atoms, just as positional correlations in a liquid do. This liquid-like disorder is similar to the disorder observed in 2-D crystals of polystyrene latex spheres and similar systems where repulsive interactions dominate; hence, these colloidal crystals appear to provide a better analogy for the dynamics of protein crystals than perfectly elastic lattices.
Nikolo, Martin; Zapf, Vivien S.; Singleton, John; ...
2016-07-22
Vortex dynamics and nonlinear ac response are studied in a Ba(Fe0.94Ni0.06)2As2 (Tc = 18.5 K) bulk superconductor in magnetic fields up to 12 T via ac susceptibility measurements of the first ten harmonics. A comprehensive study of the ac magnetic susceptibility and its first ten harmonics finds shifts to higher temperatures with increasing ac measurement frequency (10 to 10,000 Hz) for a wide range of ac (1, 5, and 10 Oe) and dc fields (0 to 12 T). The characteristic measurement time constant t1 is extracted from the exponential fit of the data and linked to vortex relaxation. The Anderson-Kim Arrhenius law is applied to determine the flux activation energy Ea/k as a function of dc magnetic field. The de-pinning, or irreversibility, lines were determined by a variety of methods and extensively mapped. The ac response shows surprisingly weak higher harmonic components, suggesting weak nonlinear behavior. Lastly, our data do not support the Fisher model; we do not see an abrupt vortex glass to vortex liquid transition, and the resistivity does not drop to zero, although it appears to approach zero exponentially.
Ocean feature recognition using genetic algorithms with fuzzy fitness functions (GA/F3)
NASA Technical Reports Server (NTRS)
Ankenbrandt, C. A.; Buckles, B. P.; Petry, F. E.; Lybanon, M.
1990-01-01
A model for genetic algorithms with semantic nets is derived in which the relationships between concepts are depicted as a semantic net. An organism represents the manner in which objects in a scene are attached to concepts in the net. Predicates between object pairs are continuous-valued truth functions in the form of an inverse exponential function (e^(-β|x|)). 1:n relationships are combined via the fuzzy OR (Max(...)). Finally, predicates between pairs of concepts are resolved by taking the average of the combined predicate values of the objects attached to the concept at the tail of the arc representing the predicate in the semantic net. The method is illustrated by applying it to the identification of oceanic features in the North Atlantic.
Campbell, W.H.; Schiffmacher, E.R.
1986-01-01
Spherical harmonic analysis coefficients of the external and internal parts of the quiet-day geomagnetic field variations (Sq), separated for the N American, European, Central Asian and E Asian regions, were used to determine conductivity profiles to depths of about 600 km by the Schmucker equivalent-substitute conductor method. All regions showed a roughly exponential increase of conductivity with depth. Distinct discontinuities seemed evident near 255-300 km and near 450-600 km. Regional differences in the conductivity profiles were shown by the functional fittings to the data. For depths less than about 275 km, the N American conductivities seemed significantly higher than those of the other regions. For depths greater than about 300 km, the E Asian conductivities were largest. -Authors
Approximation of the exponential integral (well function) using sampling methods
NASA Astrophysics Data System (ADS)
Baalousha, Husam Musa
2015-04-01
Exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid only for a certain range of the argument value. This paper presents a new approach to approximating the exponential integral, based on sampling methods. Three sampling methods are considered: Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH). Argument values covering a wide range were used. The results of the sampling methods were compared with results obtained with the Mathematica software, which was used as a benchmark. All three sampling methods converge to the Mathematica result, at different rates. The orthogonal array (OA) method was found to have the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 10^-8. The method can be used with any argument value, and can be applied to other integrals in hydrogeology such as the leaky aquifer integral.
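For intuition, the well function can be rewritten as E1(u) = ∫₀¹ exp(-u/v)/v dv, which a stratified sample of the unit interval estimates directly. A sketch of the LHS variant using SciPy, with scipy.special.exp1 as the benchmark in place of Mathematica (sample size and seed are arbitrary choices, not the paper's):

```python
import numpy as np
from scipy.special import exp1          # benchmark: E1(u)
from scipy.stats import qmc

def well_function_lhs(u, n=4096, seed=0):
    """Estimate E1(u) = integral_0^1 exp(-u/v)/v dv by Latin Hypercube
    Sampling of the unit interval (one stratified point per stratum)."""
    sampler = qmc.LatinHypercube(d=1, seed=seed)
    v = sampler.random(n).ravel()       # stratified points in (0, 1)
    return np.mean(np.exp(-u / v) / v)  # sample mean approximates the integral

for u in (0.01, 0.1, 1.0, 5.0):
    print(u, well_function_lhs(u), exp1(u))
```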
Domoshnitsky, Alexander; Maghakyan, Abraham; Berezansky, Leonid
2017-01-01
In this paper a method for studying the stability of the equation [Formula: see text], which does not explicitly include the first derivative, is proposed. We demonstrate that although the corresponding ordinary differential equation [Formula: see text] is not exponentially stable, the delay equation can be exponentially stable.
Nonlinear dynamic evolution and control in CCFN with mixed attachment mechanisms
NASA Astrophysics Data System (ADS)
Wang, Jianrong; Wang, Jianping; Han, Dun
2017-01-01
In recent years, wireless communication has played an important role in our lives. Cooperative communication, in which mobile stations with single antennas share resources to form a virtual MIMO antenna system, is expected to become an important direction for wireless communication because of its diversity gain. In this paper, a fitness-based network evolution model with mixed attachment mechanisms, built on complex networks, is devised to study an actual network, the cooperative communication fitness network (CCFN). Firstly, the evolution of the CCFN is described by four cases with different probabilities, and rate equations for node degree are presented to analyze the evolution of the CCFN. Secondly, the degree distribution is analyzed by solving the rate equation and by numerical simulation for four example fitness distributions: power-law, uniform, exponential, and Rayleigh. Finally, the robustness of the CCFN under random and intentional attack is studied by numerical simulation with the four fitness distributions, analyzing the effects on the degree distribution, average path length and average degree. The results of this paper offer insights for building CCFN systems for planning communication resources.
Jia, Xianbo; Lin, Xinjian; Chen, Jichen
2017-11-02
Current genome walking methods are very time consuming, and many produce non-specific amplification products. To amplify the flanking sequences that are adjacent to Tn5 transposon insertion sites in Serratia marcescens FZSF02, we developed a genome walking method based on TAIL-PCR. This PCR method added a 20-cycle linear amplification step before the exponential amplification step to increase the concentration of the target sequences. Products of the linear amplification and the exponential amplification were diluted 100-fold to decrease the concentration of the templates that cause non-specific amplification. Fast DNA polymerase with a high extension speed was used in this method, and an amplification program was used to rapidly amplify long specific sequences. With this linear and exponential TAIL-PCR (LETAIL-PCR), we successfully obtained products larger than 2 kb from Tn5 transposon insertion mutant strains within 3 h. This method can be widely used in genome walking studies to amplify unknown sequences that are adjacent to known sequences.
An exactly solvable, spatial model of mutation accumulation in cancer
NASA Astrophysics Data System (ADS)
Paterson, Chay; Nowak, Martin A.; Waclaw, Bartlomiej
2016-12-01
One of the hallmarks of cancer is the accumulation of driver mutations which increase the net reproductive rate of cancer cells and allow them to spread. This process has been studied in mathematical models of well mixed populations, and in computer simulations of three-dimensional spatial models. But the computational complexity of these more realistic, spatial models makes it difficult to simulate realistically large and clinically detectable solid tumours. Here we describe an exactly solvable mathematical model of a tumour featuring replication, mutation and local migration of cancer cells. The model predicts a quasi-exponential growth of large tumours, even if different fragments of the tumour grow sub-exponentially due to nutrient and space limitations. The model reproduces clinically observed tumour growth times using biologically plausible rates for cell birth, death, and migration. We also show that the expected number of accumulated driver mutations increases exponentially in time if the average fitness gain per driver is constant, and that it reaches a plateau if the gains decrease over time. We discuss the realism of the underlying assumptions and possible extensions of the model.
NASA Astrophysics Data System (ADS)
Sumi, Ayako; Olsen, Lars Folke; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi
2003-02-01
We have carried out spectral analysis of measles notifications in several communities in Denmark, the UK and the USA. The results confirm that each power spectral density (PSD) shows exponential characteristics, which are universally observed in the PSDs of time series generated by nonlinear dynamical systems. The exponential gradient increases with the population size. For almost all communities, many spectral lines observed in each PSD can be fully assigned to linear combinations of several fundamental periods, suggesting that the measles data are substantially noise-free. The optimum least-squares fitting curve calculated using these fundamental periods essentially reproduces the underlying variation of the measles data, and an extension of the curve can be used to predict measles epidemics. For the communities with large population sizes, some PSD patterns obtained from segment time-series analysis show a close resemblance to the PSD patterns at the initial stages of a period-doubling bifurcation process for the so-called susceptible/exposed/infectious/recovered (SEIR) model with seasonal forcing. The meaning of the relationship between the exponential gradient and the population size is discussed.
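A minimal sketch of the basic diagnostic described here: fit a straight line to ln PSD versus frequency, whose negative slope is the exponential gradient. The chaotic logistic-map series below is only a stand-in for the measles notification data:

```python
import numpy as np

def exponential_psd_gradient(series, dt=1.0):
    """Slope of ln PSD versus frequency; an exponential PSD (ln PSD linear
    in f) is the signature expected for series from nonlinear dynamics."""
    x = np.asarray(series, float) - np.mean(series)
    psd = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(x.size, d=dt)
    slope, _ = np.polyfit(f[1:], np.log(psd[1:] + 1e-30), 1)  # skip DC bin
    return -slope   # positive "exponential gradient"

# Illustration on a chaotic logistic-map series (stand-in data only).
x = np.empty(4096)
x[0] = 0.3
for i in range(1, x.size):
    x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])
print(exponential_psd_gradient(x))
```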
Reproducibility of isopach data and estimates of dispersal and eruption volumes
NASA Astrophysics Data System (ADS)
Klawonn, M.; Houghton, B. F.; Swanson, D.; Fagents, S. A.; Wessel, P.; Wolfe, C. J.
2012-12-01
Total erupted volume and deposit thinning relationships are key parameters in characterizing explosive eruptions and evaluating the potential risk from a volcano, as well as inputs to volcanic plume models. Volcanologists most commonly estimate these parameters by hand-contouring deposit data, representing these contours in thickness versus square root area plots, fitting empirical laws to the thinning relationships, and integrating over the square root area to arrive at volume estimates. In this study we analyze the extent to which variability in hand-contouring thickness data for pyroclastic fall deposits influences the resulting estimates and investigate the effects of different fitting laws. 96 volcanologists (3% MA students, 19% PhD students, 20% postdocs, 27% professors, and 30% professional geologists) from 11 countries (Australia, Ecuador, France, Germany, Iceland, Italy, Japan, New Zealand, Switzerland, UK, USA) participated in our study and produced hand-contours on identical maps using our unpublished thickness measurements of the Kilauea Iki 1959 fall deposit. We computed volume estimates by (A) integrating over a surface fitted through the contour lines, as well as using the established methods of integrating over the thinning relationships of (B) an exponential fit with one to three segments, (C) a power law fit, and (D) a Weibull function fit. To focus on the differences arising from the hand-contours of the well constrained deposit and eliminate the effects of extrapolations to great but unmeasured thicknesses near the vent, we removed the volume contribution of the near-vent deposit (defined as the deposit above 3.5 m) from the volume estimates. The remaining volume is approximately 1.76 × 10^6 m^3 (geometric mean for all methods), with maximum and minimum estimates of 2.5 × 10^6 m^3 and 1.1 × 10^6 m^3. Different integration methods applied to identical isopach maps result in volume estimate differences of up to 50% and, on average, a maximum variation between integration methods of 14%. Volume estimates with methods (A), (C) and (D) show strong correlation (r = 0.8 to r = 0.9), while correlation of (B) with the other methods is weaker (r = 0.2 to r = 0.6) and correlation between (B) and (C) is not statistically significant. We find that the choice of larger maximum contours leads to smaller volume estimates with method (C), but larger estimates with the other methods. We do not find statistically significant correlation between volume estimates and participants' experience level, number of chosen contour levels, or smoothness of contours. Overall, application of the different methods to the same maps leads to similar mean volume estimates, but the different methods show different dependencies and varying spread of volume estimates. The results indicate that these key parameters are less critically dependent on the operator and their choices of contour values, intervals etc., and more sensitive to the selection of technique used to integrate these data.
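For the single-segment exponential thinning law T(x) = T0·exp(-k·x), with x the square root of isopach area, integrating over area gives V = 2·T0/k², the standard single-segment result behind method (B). A sketch with hypothetical isopach values (not the Kilauea Iki data):

```python
import numpy as np

# Thickness T (m) versus square root of enclosed isopach area x (m);
# assuming T(x) = T0 * exp(-k*x), the erupted volume is V = 2*T0/k**2.
x = np.array([500.0, 1000.0, 2000.0, 3000.0])   # sqrt(area), hypothetical
T = np.array([2.0, 1.1, 0.35, 0.11])            # thickness, hypothetical

slope, lnT0 = np.polyfit(x, np.log(T), 1)       # ln T = ln T0 - k*x
k = -slope
T0 = np.exp(lnT0)
volume = 2.0 * T0 / k**2
print(f"T0 = {T0:.2f} m, k = {k:.2e} 1/m, V = {volume:.3e} m^3")
```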
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piece-wise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the k-th derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
Estimating regional centile curves from mixed data sources and countries.
van Buuren, Stef; Hayes, Daniel J; Stasinopoulos, D Mikis; Rigby, Robert A; ter Kuile, Feiko O; Terlouw, Dianne J
2009-10-15
Regional or national growth distributions can provide vital information on the health status of populations. In most resource poor countries, however, the required anthropometric data from purpose-designed growth surveys are not readily available. We propose a practical method for estimating regional (multi-country) age-conditional weight distributions based on existing survey data from different countries. We developed a two-step method by which one is able to model data with widely different age ranges and sample sizes. The method produces references both at the country level and at the regional (multi-country) level. The first step models country-specific centile curves by Box-Cox t and Box-Cox power exponential distributions implemented in generalized additive model for location, scale and shape through a common model. Individual countries may vary in location and spread. The second step defines the regional reference from a finite mixture of the country distributions, weighted by population size. To demonstrate the method we fitted the weight-for-age distribution of 12 countries in South East Asia and the Western Pacific, based on 273 270 observations. We modeled both the raw body weight and the corresponding Z score, and obtained a good fit between the final models and the original data for both solutions. We briefly discuss an application of the generated regional references to obtain appropriate, region specific, age-based dosing regimens of drugs used in the tropics. The method is an affordable and efficient strategy to estimate regional growth distributions where the standard costly alternatives are not an option.
Separability of spatiotemporal spectra of image sequences. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Eckert, Michael P.; Buchsbaum, Gershon; Watson, Andrew B.
1992-01-01
The spatiotemporal power spectrum of 14 image sequences was calculated in order to determine the degree to which the spectra are separable in space and time, and to assess the validity of the commonly used exponential correlation model found in the literature. The spectrum was expanded by a singular value decomposition into a sum of separable terms, and an index of spatiotemporal separability was defined as the fraction of the signal energy that can be represented by the first (largest) separable term. All spectra were found to be highly separable, with an index of separability above 0.98. The power spectra of the sequences were well fit by a separable model. The power spectrum model corresponds to a product of exponential autocorrelation functions separable in space and time.
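The separability index described here is directly computable from the singular values of the space-time spectrum. A short sketch (the exponential factors are illustrative stand-ins for measured spectra):

```python
import numpy as np

def separability_index(S):
    """Fraction of the energy of a space-time power spectrum S (2-D array,
    space x time) captured by its first separable (rank-1) SVD term."""
    sv = np.linalg.svd(S, compute_uv=False)
    return sv[0] ** 2 / np.sum(sv ** 2)

# A perfectly separable spectrum (an outer product) gives index 1.0.
fs = np.exp(-np.linspace(0, 5, 64))    # spatial exponential model
ft = np.exp(-np.linspace(0, 3, 32))    # temporal exponential model
S = np.outer(fs, ft)
print(separability_index(S))                                        # -> 1.0
print(separability_index(S + 0.01 * np.random.default_rng(0).random(S.shape)))
```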
A method to directly measure maximum volume of fish stomachs or digestive tracts
Burley, C.C.; Vigg, S.
1989-01-01
A new method for measuring maximum stomach or digestive tract volume of fish incorporates air injection at constant pressure with water displacement to measure directly the internal volume of a stomach or analogous structure. The method was tested with coho salmon, Oncorhynchus kisutch (Walbaum), which has a true stomach, and northern squawfish, Ptychocheilus oregonensis (Richardson), which has a modified foregut as a functional analogue. Both species were collected during July-October 1987 from the Columbia River, U.S.A. Relationships between fish weight (= volume) and maximum volume of the digestive organ were best fitted for coho salmon by an allometric model and for northern squawfish by an exponential model. Least squares regression analysis of individual measurements showed less variability in the volume of coho salmon stomachs (R² = 0.85) than in the total digestive tracts (R² = 0.55) and foreguts (R² = 0.61) of northern squawfish, relative to fish size. Compared to previous methods, the new technique has the advantage of accurately measuring the internal volume of a wide range of digestive organ shapes and sizes.
Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.
2016-01-01
We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373
Lü, Chun-guang; Wang, Wei-he; Yang, Wen-bo; Tian, Qing-iju; Lu, Shan; Chen, Yun
2015-11-01
A new hyperspectral sensor to detect total ozone is being considered for a geostationary orbit platform, because local tropospheric ozone pollution and the diurnal variation of ozone are receiving more and more attention. Sensors carried on geostationary satellites frequently acquire images at large observation angles, which places higher demands on total ozone retrieval for these observation geometries. The TOMS V8 algorithm is well developed and widely used for low-orbit ozone sensors, but it still lacks accuracy at large observation geometries; improving the accuracy of total ozone retrieval therefore remains an urgent problem. Using the moderate resolution atmospheric transmission code MODTRAN, synthetic UV backscatter radiance in the spectral region from 305 to 360 nm was simulated for clear sky, multiple angles (12 solar zenith angles and view zenith angles), and 26 standard profiles, and the correlations and trends between atmospheric total ozone and backscattered UV radiance were analyzed from the resulting data. Based on these data, a modified initial total ozone estimation model for the TOMS V8 algorithm is constructed to improve the initial total ozone estimation accuracy at large observation geometries. The analysis of total ozone and simulated UV backscatter radiance shows that the radiance at 317.5 nm (R₃₁₇.₅) decreases as total ozone rises. At small solar zenith angle (SZA) and fixed total ozone, R₃₁₇.₅ decreases with increasing view zenith angle (VZA), but increases at large SZA. Comparison of the two fitting models shows that, except when both SZA and VZA are large (> 80°), the exponential and logarithmic fitting models both achieve high fitting precision (R² > 0.90), and the precision of both decreases as SZA and VZA rise. In most cases, the precision of the logarithmic fitting model is about 0.9% higher than that of the exponential fitting model. With increasing VZA or SZA the fitting precision gradually decreases, and the decline is larger at larger VZA or SZA. In addition, the fitting precision exhibits a plateau in the small-SZA range. The modified initial total ozone estimation model (ln(I) vs. Ω) is established from the logarithmic fitting model and compared with the traditional estimation model (I vs. ln(Ω)). The RMSE of both ln(I) vs. Ω and I vs. ln(Ω) trends downward as total ozone rises. In the low total ozone region (175-275 DU) the RMSE is clearly higher than in the high region (425-525 DU); moreover, an RMSE peak and trough occur at 225 and 475 DU, respectively. With increasing VZA and SZA, the RMSEs of the two initial estimation models rise overall, and the increase is more pronounced for ln(I) vs. Ω as SZA and VZA grow. The estimates from the modified model are better than those from the traditional model over the whole total ozone range (RMSE 0.087%-0.537% lower than the traditional model), especially in the low total ozone region and at large observation geometries. The traditional estimation model relies on the precision of the exponential fitting model, and the modified estimation model relies on the precision of the logarithmic fitting model. The improved estimation accuracy of the modified initial total ozone estimation model expands the application range of the TOMS V8 algorithm.
For a sensor carried on a geostationary orbit platform, the modified estimation model can help improve inversion accuracy over a wide spatial and temporal range. This modified model could provide support and reference for future updates of the TOMS algorithm.
The iTEaCH Implementation Model: Adopting a Best-Fit Approach to Implementing ICT in Schools
ERIC Educational Resources Information Center
Choy, Michael
2013-01-01
Schools have seen an exponential increase in the range of information communication technology (ICT) being utilised for learning and teaching over the past decade, especially with the advent of the internet. What is exciting is not just more technology, but that there are more types of technology which teachers can pick and choose from, based on…
Removing the tree-ring width biological trend using expected basal area increment
Franco Biondi; Fares Qeadan
2008-01-01
One of the main elements of dendrochronological standardization is the removal of the biological trend, i.e., the progressive decline of ring width along a cross-sectional radius that is mostly caused by the corresponding increase in stem diameter over time. A very common option for removing this biological trend is to fit a modified negative exponential curve to the...
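A sketch of the standard detrending step this abstract refers to: fit w(t) = a·exp(-b·t) + k to the raw ring widths and divide it out (the series and starting values below are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_exp(t, a, b, k):
    """Modified negative exponential: w(t) = a*exp(-b*t) + k."""
    return a * np.exp(-b * t) + k

# Hypothetical ring-width series (mm) over cambial age (years).
t = np.arange(120, dtype=float)
rng = np.random.default_rng(2)
rw = 1.5 * np.exp(-0.03 * t) + 0.4 + 0.05 * rng.standard_normal(t.size)

popt, _ = curve_fit(neg_exp, t, rw, p0=(1.0, 0.02, 0.3))
rwi = rw / neg_exp(t, *popt)   # ring-width index: biological trend removed
print(popt, rwi[:5])
```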
USDA-ARS?s Scientific Manuscript database
In this study, effective spread of aeciospores from an area source in a field was fit to an exponential decline model with a predicted maximum distance of spread of 30 m from the area source to observed uredinia on one leaf of one C. arvense shoot. However, the greatest number of shoots bearing leav...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdo, A. A.; Abeysekara, U.; Linnemann, J. T.
2012-07-10
The Cygnus region is a very bright and complex portion of the TeV sky, host to unidentified sources and a diffuse excess with respect to conventional cosmic-ray propagation models. Two of the brightest TeV sources, MGRO J2019+37 and MGRO J2031+41, are analyzed using Milagro data with a new technique, and their emission is tested under two different spectral assumptions: a power law and a power law with an exponential cutoff. The new analysis technique is based on an energy estimator that uses the fraction of photomultiplier tubes in the observatory that detect the extensive air shower. The photon spectrum is measured in the range 1-100 TeV using the last three years of Milagro data (2005-2008), with the detector in its final configuration. An F-test indicates that MGRO J2019+37 is better fit by a power law with an exponential cutoff than by a simple power law. The best-fitting parameters for the power law with exponential cutoff model are a normalization at 10 TeV of 7^{+5}_{-2} × 10^{-10} s^{-1} m^{-2} TeV^{-1}, a spectral index of 2.0^{+0.5}_{-1.0}, and a cutoff energy of 29^{+50}_{-16} TeV. MGRO J2031+41 shows no evidence of a cutoff. The best-fitting parameters for a power law are a normalization of 2.1^{+0.6}_{-0.6} × 10^{-10} s^{-1} m^{-2} TeV^{-1} and a spectral index of 3.22^{+0.23}_{-0.18}. The overall flux is subject to a ~30% systematic uncertainty. The systematic uncertainty on the power-law indices is ~0.1. Both uncertainties have been verified with cosmic-ray data. A comparison with previous results from TeV J2032+4130, MGRO J2031+41, and MGRO J2019+37 is also presented.
NASA Technical Reports Server (NTRS)
Abdo, A. A.; Abeysekara, U.; Allen, B, T.; Aune, T.; Berley, D.; Bonamente, E.; Christopher, G. E.; DeYoung, T.; Dingus, B. L.; Ellsworth, R. W.;
2012-01-01
The Cygnus region is a very bright and complex portion of the TeV sky, host to unidentified sources and a diffuse excess with respect to conventional cosmic-ray propagation models. Two of the brightest TeV sources, MGRO J2019+37 and MGRO J2031+41, are analyzed using Milagro data with a new technique, and their emission is tested under two different spectral assumptions: a power law and a power law with an exponential cutoff. The new analysis technique is based on an energy estimator that uses the fraction of photomultiplier tubes in the observatory that detect the extensive air shower. The photon spectrum is measured in the range 1-100 TeV using the last three years of Milagro data (2005-2008), with the detector in its final configuration. An F-test indicates that MGRO J2019+37 is better fit by a power law with an exponential cutoff than by a simple power law. The best-fitting parameters for the power law with exponential cutoff model are a normalization at 10 TeV of 7^{+5}_{-2} × 10^{-10} s^{-1} m^{-2} TeV^{-1}, a spectral index of 2.0^{+0.5}_{-1.0}, and a cutoff energy of 29^{+50}_{-16} TeV. MGRO J2031+41 shows no evidence of a cutoff. The best-fitting parameters for a power law are a normalization of 2.1^{+0.6}_{-0.6} × 10^{-10} s^{-1} m^{-2} TeV^{-1} and a spectral index of 3.22^{+0.23}_{-0.18}. The overall flux is subject to a ~30% systematic uncertainty. The systematic uncertainty on the power-law indices is ~0.1. Both uncertainties have been verified with cosmic-ray data. A comparison with previous results from TeV J2032+4130, MGRO J2031+41, and MGRO J2019+37 is also presented.
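The two competing spectral models are simple to state in code; a sketch evaluating them with the best-fitting MGRO J2019+37 values quoted above (the F-test itself would compare the chi-square of the two fits, which is not reproduced here):

```python
import numpy as np

def power_law(E, N0, alpha):
    """Simple power law: dN/dE = N0 * (E / 10 TeV)**-alpha."""
    return N0 * (E / 10.0) ** (-alpha)

def cutoff_power_law(E, N0, alpha, Ec):
    """Power law with exponential cutoff: dN/dE = N0*(E/10)**-alpha * exp(-E/Ec)."""
    return power_law(E, N0, alpha) * np.exp(-E / Ec)

# Units: s^-1 m^-2 TeV^-1, with E in TeV over the 1-100 TeV fit range.
E = np.logspace(0.0, 2.0, 50)
flux_cut = cutoff_power_law(E, 7e-10, 2.0, 29.0)
flux_pl = power_law(E, 7e-10, 2.0)
# The exponential cutoff suppresses the highest energies relative to the
# pure power law; an F-test on the chi-square decrease from the one extra
# parameter (Ec) is what selects the cutoff model for MGRO J2019+37.
print(flux_cut[::10] / flux_pl[::10])
```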
NASA Technical Reports Server (NTRS)
Handschuh, Robert F.
1987-01-01
An exponential finite difference algorithm, as first presented by Bhattacharya for one-dimensional steady-state heat conduction in Cartesian coordinates, has been extended. The finite difference algorithm developed was used to solve the diffusion equation in one-dimensional cylindrical coordinates and applied to two- and three-dimensional problems in Cartesian coordinates. The method was also used to solve nonlinear partial differential equations in one (Burgers' equation) and two (boundary layer equations) dimensional Cartesian coordinates. Predicted results were compared to exact solutions where available, or to results obtained by other numerical methods. It was found that the exponential finite difference method produced results that were more accurate than those obtained by other numerical methods, especially during the initial transient portion of the solution. Other applications made using the exponential finite difference technique included unsteady one-dimensional heat transfer with temperature-varying thermal conductivity and the development of the temperature field in a laminar Couette flow.
Exponential finite difference technique for solving partial differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Handschuh, R.F.
1987-01-01
An exponential finite difference algorithm, as first presented by Bhattacharya for one-dimensional steady-state heat conduction in Cartesian coordinates, has been extended. The finite difference algorithm developed was used to solve the diffusion equation in one-dimensional cylindrical coordinates and applied to two- and three-dimensional problems in Cartesian coordinates. The method was also used to solve nonlinear partial differential equations in one (Burgers' equation) and two (boundary layer equations) dimensional Cartesian coordinates. Predicted results were compared to exact solutions where available, or to results obtained by other numerical methods. It was found that the exponential finite difference method produced results that were more accurate than those obtained by other numerical methods, especially during the initial transient portion of the solution. Other applications made using the exponential finite difference technique included unsteady one-dimensional heat transfer with temperature-varying thermal conductivity and the development of the temperature field in a laminar Couette flow.
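One commonly cited form of the exponential finite-difference update for the 1-D diffusion equation u_t = α·u_xx is assumed in the sketch below; it reduces to the standard explicit scheme when the exponent is small. This is an illustration of the idea, not Handschuh's code:

```python
import numpy as np

def heat_exponential_fd(u, r, steps):
    """Explicit exponential finite-difference update for u_t = alpha*u_xx.

    Assumed form of the scheme (after Bhattacharya):
        u_i(n+1) = u_i(n) * exp(r * (u_{i+1} - 2*u_i + u_{i-1}) / u_i),
    with r = alpha*dt/dx**2; requires u > 0.  Expanding the exponential
    to first order recovers the classical explicit scheme.
    """
    u = u.copy()
    for _ in range(steps):
        lap = u[2:] - 2.0 * u[1:-1] + u[:-2]     # discrete Laplacian
        u[1:-1] = u[1:-1] * np.exp(r * lap / u[1:-1])
    return u            # endpoints act as fixed (Dirichlet) boundaries

# Demo: relaxation of a strictly positive initial profile.
x = np.linspace(0.0, 1.0, 51)
u0 = 1.0 + np.sin(np.pi * x)
print(heat_exponential_fd(u0, r=0.25, steps=200)[::10])
```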
Dehghani, Nima; Hatsopoulos, Nicholas G.; Haga, Zach D.; Parker, Rebecca A.; Greger, Bradley; Halgren, Eric; Cash, Sydney S.; Destexhe, Alain
2012-01-01
Self-organized critical states are found in many natural systems, from earthquakes to forest fires, and they have also been observed in neural systems, particularly in neuronal cultures. However, the presence of critical states in the awake brain remains controversial. Here, we compared avalanche analyses performed on different in vivo preparations during wakefulness, slow-wave sleep, and REM sleep, using high-density electrode arrays in cat motor cortex (96 electrodes), monkey motor and premotor cortex, and human temporal cortex (96 electrodes) in epileptic patients. In neuronal avalanches defined from units (up to 160 single units), the size of avalanches never clearly scaled as a power law, but rather scaled exponentially or displayed intermediate scaling. We also analyzed the dynamics of local field potentials (LFPs), and in particular LFP negative peaks (nLFPs), among the different electrodes (up to 96 sites in temporal cortex or up to 128 sites in adjacent motor and premotor cortices). In this case, the avalanches defined from nLFPs displayed power-law scaling in double-logarithmic representations, as reported previously in monkey. However, avalanches defined from positive LFP (pLFP) peaks, which are less directly related to neuronal firing, also displayed apparent power-law scaling. Closer examination of this scaling using the more reliable cumulative distribution function (CDF) and other rigorous statistical measures did not confirm power-law scaling. The same pattern was seen for cats, monkeys, and humans, as well as for different brain states of wakefulness and sleep. We also tested alternative distributions. Multiple exponential fitting yielded optimal fits of the avalanche dynamics with bi-exponential distributions. Collectively, these results show no clear evidence for power-law scaling or self-organized critical states in the awake and sleeping brain of mammals, from cat to man. PMID:22934053
Method for exponentiating in cryptographic systems
Brickell, Ernest F.; Gordon, Daniel M.; McCurley, Kevin S.
1994-01-01
An improved cryptographic method utilizing exponentiation is provided which has the advantage of reducing the number of multiplications required to determine the legitimacy of a message or user. The basic method comprises the steps of selecting a key from a preapproved group of integer keys g; exponentiating the key by an integer value e, where e represents a digital signature, to generate a value g.sup.e; transmitting the value g.sup.e to a remote facility by a communications network; receiving the value g.sup.e at the remote facility; and verifying the digital signature as originating from the legitimate user. The exponentiating step comprises initializing a plurality of memory locations with a plurality of values g.sup.xi and computing the value g.sup.e from these stored values. The United States Government has rights in this invention pursuant to Contract No. DE-AC04-76DP00789 between the Department of Energy and AT&T Company.
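The precomputation idea in the exponentiating step matches the fixed-base method of Brickell, Gordon, and McCurley. A sketch under the assumption that base_powers[i] = g^(b^i) mod n has been precomputed for the base g (all demo parameters are hypothetical):

```python
def fixed_base_exp(e, base_powers, b, n):
    """Fixed-base exponentiation (Brickell-Gordon-McCurley style sketch).

    Writing e = sum(e_i * b**i), we have g**e = prod over digit values d of
    (prod of base_powers[i] with e_i >= d), assembled with far fewer
    multiplications than plain square-and-multiply.
    """
    digits = []                            # radix-b digits, least significant first
    while e:
        e, d = divmod(e, b)
        digits.append(d)

    a = acc = 1
    for d in range(b - 1, 0, -1):          # d = b-1, ..., 1
        for i, e_i in enumerate(digits):
            if e_i == d:                   # fold in g**(b**i) once digit d is reached
                acc = acc * base_powers[i] % n
        a = a * acc % n                    # acc now holds prod of G_i with e_i >= d
    return a

# Tiny demo (hypothetical parameters): g = 5, modulus n = 1009, radix b = 4.
g, n, b = 5, 1009, 4
base_powers = [pow(g, b**i, n) for i in range(10)]
assert fixed_base_exp(123456, base_powers, b, n) == pow(g, 123456, n)
```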
LBG properties from z~3 to z~6
NASA Astrophysics Data System (ADS)
de Barros, S.; Schaerer, D.; Stark, D. P.
2011-12-01
We analyse the spectral energy distributions (SEDs) of U-, B-, V- and i-dropout samples from GOODS-MUSIC and determine their physical properties, such as stellar age and mass, dust attenuation and star formation rate (SFR). Furthermore, we examine how the strength of Lyα emission can be constrained from broad-band SED fits instead of relying on spectroscopy. We use our SED fitting tool, which includes the effects of nebular emission, and we explore different star formation histories (SFHs). We find that the SEDs are statistically better fitted with nebular emission and exponentially decreasing star formation. Given this result, the revised stellar mass and SFR estimates modify the specific SFR (SFR/M_⋆)-redshift relation compared to previous studies. Finally, our inferred Lyα properties are in good agreement with the available spectroscopic observations.
Exponential stability of stochastic complex networks with multi-weights based on graph theory
NASA Astrophysics Data System (ADS)
Zhang, Chunmei; Chen, Tianrui
2018-04-01
In this paper, a novel approach to the exponential stability of stochastic complex networks with multi-weights is investigated by means of the graph-theoretical method. New sufficient conditions are provided to ascertain the moment exponential stability and almost sure exponential stability of stochastic complex networks with multiple weights. It is noted that our stability results are closely related to the multiple weights and the intensity of the stochastic disturbance. Numerical simulations are also presented to substantiate the theoretical results.
betaFIT: A computer program to fit pointwise potentials to selected analytic functions
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.; Pashov, Asen
2017-01-01
This paper describes program betaFIT, which performs least-squares fits of sets of one-dimensional (or radial) potential function values to four different types of sophisticated analytic potential energy functional forms. These families of potential energy functions are: the Expanded Morse Oscillator (EMO) potential [J Mol Spectrosc 1999;194:197], the Morse/Long-Range (MLR) potential [Mol Phys 2007;105:663], the Double Exponential/Long-Range (DELR) potential [J Chem Phys 2003;119:7398], and the "Generalized Potential Energy Function (GPEF)" form introduced by Šurkus et al. [Chem Phys Lett 1984;105:291], which includes a wide variety of polynomial potentials, such as the Dunham [Phys Rev 1932;41:713], Simons-Parr-Finlan [J Chem Phys 1973;59:3229], and Ogilvie-Tipping [Proc R Soc A 1991;378:287] polynomials, as special cases. This code will be useful for providing the realistic sets of potential function shape parameters that are required to initiate direct fits of selected analytic potential functions to experimental data, and for providing better analytical representations of sets of ab initio results.
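With a constant exponent coefficient β, the EMO form reduces to an ordinary Morse curve, which makes for a compact illustration of the kind of least-squares fit betaFIT automates (the data and starting values below are hypothetical, and betaFIT itself allows β to vary with r):

```python
import numpy as np
from scipy.optimize import curve_fit

def emo(r, De, re, beta):
    """Expanded Morse Oscillator with constant beta(r) = beta, i.e. an
    ordinary Morse curve: V(r) = De * (1 - exp(-beta*(r - re)))**2."""
    return De * (1.0 - np.exp(-beta * (r - re))) ** 2

# Hypothetical pointwise potential energies (cm^-1) vs bond length (Å).
r = np.linspace(1.0, 4.0, 40)
V = emo(r, 8000.0, 1.6, 1.8) + np.random.default_rng(3).normal(0, 5, r.size)

popt, pcov = curve_fit(emo, r, V, p0=(7000.0, 1.5, 1.5))
print("De, re, beta =", popt)
```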
Complex growing networks with intrinsic vertex fitness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bedogne, C.; Rodgers, G. J.
2006-10-15
One of the major questions in complex network research is to identify the range of mechanisms by which a complex network can self-organize into a scale-free state. In this paper we investigate the interplay between a fitness linking mechanism and both random and preferential attachment. In our models, each vertex is assigned a fitness x, drawn from a probability distribution ρ(x). In Model A, at each time step a vertex is added and joined to an existing vertex, selected at random, with probability p, and an edge is introduced between vertices with fitnesses x and y, with a rate f(x,y), with probability 1-p. Model B differs from Model A in that, with probability p, edges are added with preferential attachment rather than randomly. The analysis of Model A shows that, for every fixed fitness x, the network's degree distribution decays exponentially. In Model B we instead recover a power-law degree distribution whose exponent depends only on p, and we show how this result can be generalized. The properties of a number of particular networks are examined.
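A toy simulation of the two growth rules, under the simplifying assumptions that the rate f(x,y) = x·y is realized by rejection sampling and that occasional self-loops are tolerated (illustrative only, not the authors' analytical treatment):

```python
import numpy as np

rng = np.random.default_rng(8)

def grow(n_steps, p, pref=False):
    """Fitness-linking growth sketch: with prob. p attach a new vertex
    (randomly, or preferentially if pref=True); with prob. 1-p add an
    edge between existing vertices, accepted with rate f(x,y) = x*y."""
    fitness, degree = [rng.random()], [0]
    stubs = []                      # degree-weighted list for pref. attachment
    for _ in range(n_steps):
        if rng.random() < p or len(fitness) < 2:
            fitness.append(rng.random())
            degree.append(0)
            if pref and stubs:
                j = stubs[rng.integers(len(stubs))]   # preferential target
            else:
                j = rng.integers(len(fitness) - 1)    # uniform target
            i = len(fitness) - 1                      # the new vertex
        else:
            i, j = rng.integers(len(fitness), size=2)
            if rng.random() > fitness[i] * fitness[j]:
                continue            # edge rejected: rate f(x,y) = x*y
        degree[i] += 1
        degree[j] += 1
        stubs.extend((i, j))
    return np.bincount(degree)

print(grow(20000, 0.5)[:10])             # Model A: exponential-like decay
print(grow(20000, 0.5, pref=True)[:10])  # Model B: heavier, power-law-like tail
```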
Contamination of current-clamp measurement of neuron capacitance by voltage-dependent phenomena
White, William E.
2013-01-01
Measuring neuron capacitance is important for morphological description, conductance characterization, and neuron modeling. One method to estimate capacitance is to inject current pulses into a neuron and fit the resulting changes in membrane potential with multiple exponentials; if the neuron is purely passive, the amplitude and time constant of the slowest exponential give neuron capacitance (Major G, Evans JD, Jack JJ. Biophys J 65: 423–449, 1993). Golowasch et al. (Golowasch J, Thomas G, Taylor AL, Patel A, Pineda A, Khalil C, Nadim F. J Neurophysiol 102: 2161–2175, 2009) have shown that this is the best method for measuring the capacitance of nonisopotential (i.e., most) neurons. However, prior work has not tested for, or examined how much error would be introduced by, slow voltage-dependent phenomena possibly present at the membrane potentials typically used in such work. We investigated this issue in lobster (Panulirus interruptus) stomatogastric neurons by performing current clamp-based capacitance measurements at multiple membrane potentials. A slow, voltage-dependent phenomenon consistent with residual voltage-dependent conductances was present at all tested membrane potentials (−95 to −35 mV). This phenomenon was the slowest component of the neuron's voltage response, and failure to recognize and exclude it would lead to capacitance overestimates of several hundredfold. Most methods of estimating capacitance depend on the absence of voltage-dependent phenomena. Our demonstration that such phenomena make nonnegligible contributions to neuron responses even at well-hyperpolarized membrane potentials highlights the critical importance of checking for such phenomena in all work measuring neuron capacitance. We show here how to identify such phenomena and minimize their contaminating influence. PMID:23576698
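A sketch of the fitting step described here: fit a sum of charging exponentials to the current-step response and convert the slowest term to a capacitance via C = τ·I/A (the synthetic response and all parameters are invented; for a passive isopotential cell this conversion is exact, and per the cited work the slowest term is the appropriate one in the nonisopotential case):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp_charging(t, A0, tau0, A1, tau1):
    """Sum of two charging exponentials for the response to a current step."""
    return A0 * (1 - np.exp(-t / tau0)) + A1 * (1 - np.exp(-t / tau1))

I0 = 0.1e-9                        # injected current step, 0.1 nA (hypothetical)
t = np.linspace(0, 0.5, 2000)      # s
# Synthetic response: slow "somatic" term plus a fast "equalizing" term.
v = two_exp_charging(t, 8e-3, 0.08, 2e-3, 0.004)
v += 5e-5 * np.random.default_rng(4).standard_normal(t.size)

p, _ = curve_fit(two_exp_charging, t, v, p0=(5e-3, 0.05, 1e-3, 0.01))
A_slow, tau_slow = max([(p[0], p[1]), (p[2], p[3])], key=lambda at: at[1])
C = tau_slow * I0 / A_slow         # capacitance from the slowest exponential
print(f"C ~ {C * 1e12:.0f} pF")
```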
State-space forecasting of Schistosoma haematobium time-series in Niono, Mali.
Medina, Daniel C; Findley, Sally E; Doumbia, Seydou
2008-08-13
Much of the developing world, particularly sub-Saharan Africa, exhibits high levels of morbidity and mortality associated with infectious diseases. The incidence of Schistosoma sp., neglected tropical diseases exposing and infecting more than 500 and 200 million individuals in 77 countries, respectively, is rising because of 1) numerous irrigation and hydro-electric projects, 2) steady shifts from nomadic to sedentary existence, and 3) ineffective control programs. Notwithstanding the colossal scope of these parasitic infections, less than 0.5% of Schistosoma sp. investigations have attempted to predict their spatial and or temporal distributions. Undoubtedly, public health programs in developing countries could benefit from parsimonious forecasting and early warning systems to enhance management of these parasitic diseases. In this longitudinal retrospective (01/1996-06/2004) investigation, the Schistosoma haematobium time-series for the district of Niono, Mali, was fitted with general-purpose exponential smoothing methods to generate contemporaneous on-line forecasts. These methods, which are encapsulated within a state-space framework, accommodate seasonal and inter-annual time-series fluctuations. Mean absolute percentage error values were circa 25% for 1- to 5-month horizon forecasts. The exponential smoothing state-space framework employed herein produced reasonably accurate forecasts for this time-series, which reflects the incidence of S. haematobium-induced terminal hematuria. It obliquely captured prior non-linear interactions between disease dynamics and exogenous covariates (e.g., climate, irrigation, and public health interventions), thus obviating the need for more complex forecasting methods in the district of Niono, Mali. Therefore, this framework could assist with managing and assessing S. haematobium transmission and intervention impact, respectively, in this district and potentially elsewhere in the Sahel.
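A sketch of the general approach using the Holt-Winters exponential smoothing implementation in statsmodels, with a synthetic monthly series standing in for the Niono data (the model options and the 5-month MAPE evaluation only loosely mirror the description above):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly case counts standing in for the Niono series.
rng = np.random.default_rng(5)
months = pd.date_range("1996-01", periods=102, freq="MS")
season = 40 * (1 + np.sin(2 * np.pi * np.arange(102) / 12))
cases = pd.Series(season + 60 + rng.normal(0, 8, 102), index=months)

train, test = cases[:-5], cases[-5:]
model = ExponentialSmoothing(train, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
forecast = model.forecast(5)                     # 1- to 5-month horizon
mape = float(np.mean(np.abs((test - forecast) / test))) * 100
print(f"MAPE = {mape:.1f}%")
```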
Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions
NASA Astrophysics Data System (ADS)
Peacock, Sheila; Douglas, Alan; Bowers, David
2017-08-01
Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.
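A sketch of the Kelly-Lacoss-style censored fit: a Gutenberg-Richter exponential density multiplied by an error-function detection probability, normalized numerically and fit by maximum likelihood (all magnitudes are simulated; the full station-term joint inversion is far more involved):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize
from scipy.special import ndtr                  # standard normal CDF

def neg_log_like(params, m, grid):
    """Gutenberg-Richter decay exp(-beta*m), truncated at low magnitudes by
    an error-function detection probability, normalized numerically on grid."""
    beta, mt, sigma = params
    if beta <= 0 or sigma <= 0:
        return np.inf
    unnorm = lambda x: np.exp(-beta * x) * ndtr((x - mt) / sigma)
    norm = trapezoid(unnorm(grid), grid)
    return -np.sum(np.log(unnorm(m) / norm + 1e-300))

# Simulated detected arrivals: true beta = 2.0, threshold mt = 4.0, sigma = 0.25.
rng = np.random.default_rng(6)
raw = rng.exponential(1 / 2.0, 200000) + 3.0            # uncensored magnitudes
m = raw[rng.random(raw.size) < ndtr((raw - 4.0) / 0.25)]  # censoring by detection

grid = np.linspace(3.0, 9.0, 2000)
res = minimize(neg_log_like, x0=(1.5, 3.8, 0.3), args=(m, grid),
               method="Nelder-Mead")
print(res.x)        # should land near (2.0, 4.0, 0.25)
```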
State–Space Forecasting of Schistosoma haematobium Time-Series in Niono, Mali
Medina, Daniel C.; Findley, Sally E.; Doumbia, Seydou
2008-01-01
Background Much of the developing world, particularly sub-Saharan Africa, exhibits high levels of morbidity and mortality associated with infectious diseases. The incidence of Schistosoma sp.—which are neglected tropical diseases exposing and infecting more than 500 and 200 million individuals in 77 countries, respectively—is rising because of 1) numerous irrigation and hydro-electric projects, 2) steady shifts from nomadic to sedentary existence, and 3) ineffective control programs. Notwithstanding the colossal scope of these parasitic infections, less than 0.5% of Schistosoma sp. investigations have attempted to predict their spatial and or temporal distributions. Undoubtedly, public health programs in developing countries could benefit from parsimonious forecasting and early warning systems to enhance management of these parasitic diseases. Methodology/Principal Findings In this longitudinal retrospective (01/1996–06/2004) investigation, the Schistosoma haematobium time-series for the district of Niono, Mali, was fitted with general-purpose exponential smoothing methods to generate contemporaneous on-line forecasts. These methods, which are encapsulated within a state–space framework, accommodate seasonal and inter-annual time-series fluctuations. Mean absolute percentage error values were circa 25% for 1- to 5-month horizon forecasts. Conclusions/Significance The exponential smoothing state–space framework employed herein produced reasonably accurate forecasts for this time-series, which reflects the incidence of S. haematobium–induced terminal hematuria. It obliquely captured prior non-linear interactions between disease dynamics and exogenous covariates (e.g., climate, irrigation, and public health interventions), thus obviating the need for more complex forecasting methods in the district of Niono, Mali. Therefore, this framework could assist with managing and assessing S. haematobium transmission and intervention impact, respectively, in this district and potentially elsewhere in the Sahel. PMID:18698361
An advanced method to assess the diet of free-ranging large carnivores based on scats.
Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P; Jago, Mark; Hofer, Heribert
2012-01-01
The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model where the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factor 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores.
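A sketch of correction factor 1 as described: a saturating exponential regression of consumed prey mass per scat against prey body mass (the numbers are invented for illustration, not the cheetah data):

```python
import numpy as np
from scipy.optimize import curve_fit

def sat_exp(prey_mass, a, b):
    """Consumed prey mass needed to excrete one scat rises with prey body
    mass but saturates at an asymptote `a` for large prey."""
    return a * (1.0 - np.exp(-b * prey_mass))

# Hypothetical feeding-experiment data: prey body mass (kg) versus
# consumed mass per scat (kg).
prey = np.array([0.04, 0.8, 2.0, 15.0, 30.0, 60.0])
per_scat = np.array([0.04, 0.45, 0.9, 1.7, 1.9, 2.0])

(a, b), _ = curve_fit(sat_exp, prey, per_scat, p0=(2.0, 0.1))
print(f"asymptote = {a:.2f} kg per scat, rate = {b:.3f} 1/kg")
```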
An Advanced Method to Assess the Diet of Free-Ranging Large Carnivores Based on Scats
Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P.; Jago, Mark; Hofer, Heribert
2012-01-01
Background The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Methodology/Principal Findings Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model where the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factor 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Conclusion/Significance Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores. PMID:22715373
Central Pb+Pb collisions at 158 A GeV/c studied by π⁻π⁻ interferometry
Aggarwal et al., M. M.
2000-05-18
Two-particle correlations have been measured for identified π⁻ from central 158 A GeV Pb+Pb collisions, and fitted radii of about 7 fm in all dimensions have been obtained. A multi-dimensional study of the radii as a function of k_T is presented, including a full correction for the resolution effects of the apparatus. The cross term R²_out-long of the standard fit in the Longitudinally CoMoving System (LCMS) and the v_L parameter of the generalised Yano-Koonin fit are compatible with 0, suggesting that the source undergoes a boost-invariant expansion. The shapes of the correlation functions in Q_inv and Q_space = √(Q_x² + Q_y² + Q_z²) have been analyzed in detail. They are not Gaussian but better represented by exponentials. As a consequence, fitting Gaussians to these correlation functions may produce different radii depending on the acceptance of the experimental setup used for the measurement.
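A sketch of why the assumed shape matters: fitting Gaussian and exponential correlation functions to the same synthetic C(Q) data returns different radii (the functional forms and ħc conversion are the textbook ones; all parameters are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

HBARC = 0.197  # GeV·fm, so radii come out in fm with Q in GeV/c

def gaussian_cf(Q, lam, R):
    """Gaussian correlation function C(Q) = 1 + lam*exp(-(R*Q/hbar_c)**2)."""
    return 1.0 + lam * np.exp(-((R * Q / HBARC) ** 2))

def exponential_cf(Q, lam, R):
    """Exponential correlation function C(Q) = 1 + lam*exp(-R*Q/hbar_c)."""
    return 1.0 + lam * np.exp(-R * Q / HBARC)

# Hypothetical C(Q_inv) data generated from an exponential source (Q in GeV/c).
Q = np.linspace(0.005, 0.2, 40)
C = exponential_cf(Q, 0.4, 7.0) + np.random.default_rng(9).normal(0, 0.005, Q.size)

for model in (gaussian_cf, exponential_cf):
    p, _ = curve_fit(model, Q, C, p0=(0.5, 5.0))
    print(model.__name__, p)   # the two shapes yield different fitted radii
```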
Knies, Jennifer L.; Kingsolver, Joel G.
2013-01-01
The initial rise of fitness that occurs with increasing temperature is attributed to Arrhenius kinetics, in which rates of reaction increase exponentially with increasing temperature. Models based on Arrhenius typically assume single rate-limiting reaction(s) over some physiological temperature range for which all the rate-limiting enzymes are in 100% active conformation. We test this assumption using datasets for microbes that have measurements of fitness (intrinsic rate of population growth) at many temperatures and over a broad temperature range, and for diverse ectotherms that have measurements at fewer temperatures. When measurements are available at many temperatures, strictly Arrhenius kinetics is rejected over the physiological temperature range. However, over a narrower temperature range, we cannot reject strictly Arrhenius kinetics. The temperature range also affects estimates of the temperature dependence of fitness. These results indicate that Arrhenius kinetics only apply over a narrow range of temperatures for ectotherms, complicating attempts to identify general patterns of temperature dependence. PMID:20528477
Knies, Jennifer L; Kingsolver, Joel G
2010-08-01
The initial rise of fitness that occurs with increasing temperature is attributed to Arrhenius kinetics, in which rates of reaction increase exponentially with increasing temperature. Models based on Arrhenius typically assume single rate-limiting reactions over some physiological temperature range for which all the rate-limiting enzymes are in 100% active conformation. We test this assumption using data sets for microbes that have measurements of fitness (intrinsic rate of population growth) at many temperatures and over a broad temperature range and for diverse ectotherms that have measurements at fewer temperatures. When measurements are available at many temperatures, strictly Arrhenius kinetics are rejected over the physiological temperature range. However, over a narrower temperature range, we cannot reject strictly Arrhenius kinetics. The temperature range also affects estimates of the temperature dependence of fitness. These results indicate that Arrhenius kinetics only apply over a narrow range of temperatures for ectotherms, complicating attempts to identify general patterns of temperature dependence.
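Strict Arrhenius kinetics imply that ln r is linear in 1/T, which is the test applied over narrow temperature ranges; curvature in that plot is evidence against the model. A sketch with synthetic rates (constants and units are illustrative):

```python
import numpy as np

k_B = 8.617e-5                     # Boltzmann constant, eV/K

# Hypothetical growth rates r(T) over a narrow physiological range.
T = np.array([285.0, 290.0, 295.0, 300.0, 305.0])     # K
r = 2.0e8 * np.exp(-0.60 / (k_B * T))                 # strictly Arrhenius

# Strict Arrhenius kinetics: ln r = ln A - E/(k_B*T), a straight line in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(r), 1)
E = -slope * k_B
print(f"activation energy E = {E:.2f} eV, prefactor A = {np.exp(intercept):.2e}")
```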
Calorie counting and fitness tracking technology: Associations with eating disorder symptomatology.
Simpson, Courtney C; Mazzeo, Suzanne E
2017-08-01
The use of online calorie tracking applications and activity monitors is increasing exponentially. Anecdotal reports document the potential for these trackers to trigger, maintain, or exacerbate eating disorder symptomatology. Yet, research has not examined the relation between use of these devices and eating disorder-related attitudes and behaviors. This study explored associations between the use of calorie counting and fitness tracking devices and eating disorder symptomatology. Participants (N=493) were college students who reported their use of tracking technology and completed measures of eating disorder symptomatology. Individuals who reported using calorie trackers manifested higher levels of eating concern and dietary restraint, controlling for BMI. Additionally, fitness tracking was uniquely associated with ED symptomatology after adjusting for gender and bingeing and purging behavior within the past month. Findings highlight associations between use of calorie and fitness trackers and eating disorder symptomatology. Although preliminary, overall results suggest that for some individuals, these devices might do more harm than good.
Sfakiotakis, Stelios; Vamvuka, Despina
2015-12-01
The pyrolysis of six waste biomass samples was studied and the fuels were kinetically evaluated. A modified independent parallel reactions scheme (IPR) and a distributed activation energy model (DAEM) were developed, and their validity was assessed and compared by checking their accuracy in fitting the experimental results, as well as their prediction capability under different experimental conditions. The pyrolysis experiments were carried out in a thermogravimetric analyzer, and a fitting procedure based on least-squares minimization was performed simultaneously across different experimental conditions. A modification of the IPR model, allowing the pre-exponential factor to depend on heating rate, was shown to give better fits for the same number of tuned kinetic parameters, compared to the known IPR model, and very good predictions for stepwise experiments. The fit of data calculated with the developed DAEM model to the experimental data was also shown to be very good.
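A sketch of the forward model behind an IPR fit: several pseudo-components converting by first-order Arrhenius kinetics under a constant heating rate (the component parameters are invented; a real fit would tune them by least squares against the TG data):

```python
import numpy as np

R_GAS = 8.314                      # J/(mol K)

def ipr_mass_curve(T, beta, comps):
    """Independent parallel reactions (IPR) sketch: each pseudo-component j
    (weight c_j) converts by first-order Arrhenius kinetics,
        d(alpha_j)/dT = (A_j / beta) * exp(-E_j / (R*T)) * (1 - alpha_j),
    and the simulated TG curve is m(T) = 1 - sum_j c_j * alpha_j."""
    alpha = np.zeros(len(comps))
    m = np.empty_like(T)
    for n, Tn in enumerate(T):
        m[n] = 1.0 - sum(c * a for (c, _, _), a in zip(comps, alpha))
        if n + 1 < T.size:
            dT = T[n + 1] - Tn                 # forward Euler step in T
            for j, (c, A, E) in enumerate(comps):
                da = (A / beta) * np.exp(-E / (R_GAS * Tn)) * (1.0 - alpha[j]) * dT
                alpha[j] = min(1.0, alpha[j] + da)
    return m

# Hypothetical pseudo-components (weight, A in 1/s, E in J/mol).
comps = [(0.25, 1.0e9, 1.2e5), (0.45, 1.0e13, 1.9e5), (0.20, 1.0e2, 6.0e4)]
T = np.linspace(400.0, 900.0, 2001)                       # K
mass = ipr_mass_curve(T, beta=10.0 / 60.0, comps=comps)   # 10 K/min heating
print(mass[::400])
```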
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-10-24
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional moment of inertia present in conventional approaches. A curved-surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. Comparison with the conventional modeling method shows that it improves the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, so that the rotor orientation can be computed from the measured values and the analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the measured magnetic flux density. The experimental results show that the proposed method measures the rotor orientation precisely, and that the measurement accuracy is improved by the novel 3D magnet array. These results could be used for real-time motion control of PM spherical actuators.
Shot model parameters for Cygnus X-1 through phase portrait fitting
NASA Technical Reports Server (NTRS)
Lochner, James C.; Swank, J. H.; Szymkowiak, A. E.
1991-01-01
Shot models for systems having an approximately 1/f power density spectrum are developed by utilizing a distribution of shot durations. Parameters of the distribution are determined by fitting the power spectrum, either with analytic forms for the spectrum of a shot model with a given shot profile or with the spectrum derived from numerical realizations of trial shot models. The shot fraction is specified by fitting the phase portrait, a plot of intensity at a given time versus intensity at a delayed time, which in principle is sensitive to different shot profiles. These techniques have been extensively applied to the X-ray variability of Cygnus X-1, using HEAO 1 A-2 and an Exosat ME observation. The power spectra suggest models having characteristic shot durations lasting from milliseconds to a few seconds, while the phase portrait fits give shot fractions of about 50 percent. Best fits to the portraits are obtained if the amplitude of the shot is a power-law function of its duration. These fits prefer shots having a symmetric exponential rise and decay. Results are interpreted in terms of a distribution of magnetic flares in the accretion disk.
On the duration and intensity of cumulative advantage competitions
NASA Astrophysics Data System (ADS)
Jiang, Bo; Sun, Liyuan; Figueiredo, Daniel R.; Ribeiro, Bruno; Towsley, Don
2015-11-01
Network growth can be framed as a competition for edges among nodes in the network. As with various other social and physical systems, skill (fitness) and luck (random chance) act as fundamental forces driving competition dynamics. In the context of networks, cumulative advantage (CA)—the rich-get-richer effect—is seen as a driving principle governing the edge accumulation process. However, competitions coupled with CA exhibit non-trivial behavior, and little is formally known about the duration and intensity of CA competitions. By isolating two nodes in an ideal CA competition, we provide a mathematical understanding of how CA exacerbates the role of luck to the detriment of skill. We show, for instance, that when nodes start with few edges, an early stroke of luck can place the less skilled node in the lead for an extremely long period of time, a phenomenon we call ‘struggle of the fittest’. We prove that the duration of a simple skill-and-luck competition model exhibits power-law tails when CA is present, regardless of skill difference, in sharp contrast to the exponential tails that arise when fitness is distinct but CA is absent. We also prove that competition intensity is always upper bounded by an exponential tail, irrespective of CA and skills. Thus, CA competitions can be extremely long (with infinite mean, depending on the fitness ratio) but almost never very intense. The theoretical results are corroborated by extensive numerical simulations. Our findings have important implications for competitions not only among nodes in networks but also in contexts that leverage socio-physical models embodying CA competitions.
Direct Simulation of Magnetic Resonance Relaxation Rates and Line Shapes from Molecular Trajectories
Rangel, David P.; Baveye, Philippe C.; Robinson, Bruce H.
2012-01-01
We simulate spin relaxation processes, which may be measured by either continuous wave or pulsed magnetic resonance techniques, using trajectory-based simulation methodologies. The spin–lattice relaxation rates are extracted numerically from the relaxation simulations. The rates obtained from the numerical fitting of the relaxation curves are compared to those obtained by direct simulation from the relaxation Bloch–Wangsness–Abragam–Redfield theory (BWART). We have restricted our study to anisotropic rigid-body rotational processes, and to the chemical shift anisotropy (CSA) and single spin–spin dipolar (END) coupling mechanisms. Examples using electron paramagnetic resonance (EPR) nitroxide and nuclear magnetic resonance (NMR) deuterium quadrupolar systems are provided. The objective is to compare the rates obtained by numerical simulations with the rates obtained by BWART. There is excellent agreement between the simulated and BWART rates for a Hamiltonian describing a single spin (an electron) interacting with the bath through the CSA mechanism while undergoing anisotropic rotational diffusion. In contrast, when the Hamiltonian contains both the CSA and END mechanisms, the decay rate of a single-exponential fit of the simulated spin–lattice relaxation is up to a factor of 0.2 smaller than that predicted by BWART. When the relaxation curves are fit to a double exponential, the slow and fast rates extracted from the decay curves bound the BWART prediction. An extended BWART theory, in the literature, includes the need for multiple relaxation rates and indicates that the multiexponential decay is due to the combined effects of direct and cross-relaxation mechanisms. PMID:22540276
High-frequency measurements of aeolian saltation flux: Field-based methodology and applications
NASA Astrophysics Data System (ADS)
Martin, Raleigh L.; Kok, Jasper F.; Hugenholtz, Chris H.; Barchyn, Thomas E.; Chamecki, Marcelo; Ellis, Jean T.
2018-02-01
Aeolian transport of sand and dust is driven by turbulent winds that fluctuate over a broad range of temporal and spatial scales. However, commonly used aeolian transport models do not explicitly account for such fluctuations, likely contributing to substantial discrepancies between models and measurements. Underlying this problem is the absence of accurate sand flux measurements at the short time scales at which wind speed fluctuates. Here, we draw on extensive field measurements of aeolian saltation to develop a methodology for generating high-frequency (up to 25 Hz) time series of total (vertically-integrated) saltation flux, namely by calibrating high-frequency (HF) particle counts to low-frequency (LF) flux measurements. The methodology follows four steps: (1) fit exponential curves to vertical profiles of saltation flux from LF saltation traps, (2) determine empirical calibration factors through comparison of LF exponential fits to HF number counts over concurrent time intervals, (3) apply these calibration factors to subsamples of the saltation count time series to obtain HF height-specific saltation fluxes, and (4) aggregate the calibrated HF height-specific saltation fluxes into estimates of total saltation fluxes. When coupled to high-frequency measurements of wind velocity, this methodology offers new opportunities for understanding how aeolian saltation dynamics respond to variability in driving winds over time scales from tens of milliseconds to days.
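Step (1) of the methodology above amounts to an exponential fit per vertical profile. The sketch below shows what that fit might look like; the trap heights, fluxes, and the q(z) = q0*exp(-z/zq) parameterization values are illustrative assumptions, not the authors' field data.

import numpy as np
from scipy.optimize import curve_fit

def flux_profile(z, q0, zq):
    """Exponential saltation flux profile with scale height zq."""
    return q0 * np.exp(-z / zq)

z = np.array([0.05, 0.10, 0.20, 0.30, 0.50])   # trap heights (m)
q = np.array([21.0, 13.5, 5.9, 2.4, 0.45])     # flux (g m^-2 s^-1), illustrative

(q0, zq), _ = curve_fit(flux_profile, z, q, p0=(25.0, 0.1))

# Vertically integrated (total) flux of the fitted profile:
# integral of q0*exp(-z/zq) from 0 to infinity is q0*zq.
Q = q0 * zq
print(f"q0 = {q0:.2f} g m^-2 s^-1, zq = {zq:.3f} m, Q = {Q:.2f} g m^-1 s^-1")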
NASA Astrophysics Data System (ADS)
Dalla Bontà, E.; Davies, R. L.; Houghton, R. C. W.; D'Eugenio, F.; Méndez-Abreu, J.
2018-02-01
We present a photometric analysis of 65 galaxies in the rich cluster Abell 1689 at z = 0.183, using the Hubble Space Telescope Advanced Camera for Surveys archive images in the rest-frame V band. We perform two-dimensional multicomponent photometric decomposition of each galaxy adopting different models of the surface-brightness distribution. We present an accurate morphological classification for each of the sample galaxies. For 50 early-type galaxies, we fit both a de Vaucouleurs law and a Sérsic law; S0s are modelled by also including a disc component described by an exponential law. Bars of SB0s are described by the profile of a Ferrers ellipsoid. For the 15 spirals, we model a Sérsic bulge, exponential disc and, when required, a Ferrers bar component. We derive the Fundamental Plane (FP) by fitting 40 early-type galaxies in the sample, using different surface-brightness distributions. We find that the tightest plane is that derived by Sérsic bulges. We find that bulges of spirals lie on the same relation. The FP is better defined by the bulges alone rather than the entire galaxies. Comparison with local samples shows both an offset and rotation in the FP of Abell 1689.
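For readers unfamiliar with the profiles named above, the following sketch evaluates the 1-D radial forms underlying the decomposition: a Sérsic bulge plus an exponential disc. The b_n coefficient uses a common asymptotic approximation, and all structural parameters are illustrative, not fits to the Abell 1689 sample.

import numpy as np

def b_n(n):
    # Common approximation to the Sersic b_n coefficient (Ciotti & Bertin style)
    return 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)

def sersic(r, Ie, re, n):
    # Sersic surface-brightness profile: intensity Ie at effective radius re
    return Ie * np.exp(-b_n(n) * ((r / re) ** (1.0 / n) - 1.0))

def exp_disc(r, I0, h):
    # Exponential disc with central intensity I0 and scale length h
    return I0 * np.exp(-r / h)

r = np.linspace(0.1, 20.0, 200)   # radius (kpc), illustrative
bulge_plus_disc = sersic(r, Ie=100.0, re=2.0, n=3.0) + exp_disc(r, I0=40.0, h=4.0)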
Individual and group dynamics in purchasing activity
NASA Astrophysics Data System (ADS)
Gao, Lei; Guo, Jin-Li; Fan, Chao; Liu, Xue-Jiao
2013-01-01
As a major part of the daily operation in an enterprise, purchasing frequency is in constant change. Recent approaches to human dynamics can provide new insights into the economic behavior of companies in the supply chain. This paper captures the creation times of purchase orders to an individual vendor, as well as to all vendors, and investigates their dynamics by applying logarithmic binning to the construction of distribution plots. It is found that the former displays a power-law distribution with approximate exponent 2.0, while the latter is fitted by a mixture distribution with both power-law and exponential characteristics. Thus, the inter-order time distribution shows two distinct characteristics depending on whether individual or group dynamics are considered. This mixed feature can be attributed to fitting deviations: they are negligible for individual dynamics, but the deviations of different vendors accumulate and introduce an exponential factor in the group dynamics. To describe the mechanism generating the heterogeneity of the purchase order assignment process from the studied company to all its vendors, a model driven by the product life cycle is introduced; the analytical distribution and the simulation results obtained are in good agreement with the empirical data.
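The logarithmic binning mentioned above can be sketched as follows: geometrically growing bins with counts normalized per unit width, so that a power law with exponent near 2.0 appears as a straight line on log-log axes. The heavy-tailed synthetic sample is a stand-in for the inter-order times.

import numpy as np

def log_binned_pdf(samples, n_bins=30):
    samples = np.asarray(samples, dtype=float)
    edges = np.logspace(np.log10(samples.min()),
                        np.log10(samples.max()), n_bins + 1)
    counts, _ = np.histogram(samples, bins=edges)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])     # geometric bin midpoints
    density = counts / (widths * counts.sum())    # per-unit-x probability density
    return centers, density

# Example: synthetic inter-event times with a power-law tail of exponent ~2
rng = np.random.default_rng(0)
x, p = log_binned_pdf(rng.pareto(1.0, 10000) + 1.0)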
Wissmann, F; Reginatto, M; Möller, T
2010-09-01
The problem of finding a simple, generally applicable description of worldwide measured ambient dose equivalent rates at aviation altitudes between 8 and 12 km is difficult to solve due to the large variety of functional forms and parametrisations that are possible. We present an approach that uses Bayesian statistics and Monte Carlo methods to fit mathematical models to a large set of data and to compare the different models. About 2500 data points measured in the periods 1997-1999 and 2003-2006 were used. Since the data cover wide ranges of barometric altitude, vertical cut-off rigidity and phases in the solar cycle 23, we developed functions which depend on these three variables. Whereas the dependence on the vertical cut-off rigidity is described by an exponential, the dependences on barometric altitude and solar activity may be approximated by linear functions in the ranges under consideration. Therefore, a simple Taylor expansion was used to define different models and to investigate the relevance of the different expansion coefficients. With the method presented here, it is possible to obtain probability distributions for each expansion coefficient and thus to extract reliable uncertainties even for the dose rate evaluated. The resulting function agrees well with new measurements made at fixed geographic positions and during long haul flights covering a wide range of latitudes.
Cai, Li
2015-06-01
Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
An evolutionary strategy based on partial imitation for solving optimization problems
NASA Astrophysics Data System (ADS)
Javarone, Marco Alberto
2016-12-01
In this work we introduce an evolutionary strategy to solve combinatorial optimization tasks, i.e., problems characterized by a discrete search space. In particular, we focus on the Traveling Salesman Problem (TSP), a famous NP-hard problem whose search space grows exponentially with the number of cities. The solutions of the TSP can be encoded as arrays of cities and evaluated by a fitness computed according to a cost function (e.g., the length of a path). Our method is based on the evolution of an agent population by means of an imitative mechanism we call 'partial imitation'. Agents receive a random solution and then, interacting among themselves, may imitate the solutions of agents with a higher fitness. Since the imitation mechanism is only partial, agents copy only one entry (randomly chosen) of another array (i.e., solution). In doing so, the population converges towards a shared solution, behaving like a spin system undergoing a cooling process, i.e., driven towards an ordered phase. We highlight that the adopted 'partial imitation' mechanism allows the population to generate new solutions over time, before reaching the final equilibrium. Results of numerical simulations show that our method is able to find, in a finite time, both optimal and suboptimal solutions, depending on the size of the considered search space.
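A minimal sketch of one way to implement the partial-imitation step is given below, under the assumption that copying a single entry of a fitter agent's tour is repaired with a swap so every tour remains a valid permutation; the population size, iteration count, and repair rule are illustrative choices, not the paper's exact procedure.

import numpy as np

rng = np.random.default_rng(1)

def tour_length(tour, dist):
    # Closed-tour length, including the edge back to the start
    return dist[tour, np.roll(tour, -1)].sum()

def partially_imitate(tour, model_tour):
    """Copy one randomly chosen entry of the fitter tour, then repair."""
    new = tour.copy()
    i = rng.integers(len(tour))
    city = model_tour[i]
    j = int(np.where(new == city)[0][0])
    new[i], new[j] = new[j], new[i]   # swap keeps a valid permutation
    return new

n_cities, n_agents = 30, 50
xy = rng.random((n_cities, 2))
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
pop = [rng.permutation(n_cities) for _ in range(n_agents)]

for step in range(20000):
    a, b = rng.integers(n_agents, size=2)
    fa, fb = tour_length(pop[a], dist), tour_length(pop[b], dist)
    # the less fit agent imitates one entry of the fitter one
    if fa > fb:
        pop[a] = partially_imitate(pop[a], pop[b])
    elif fb > fa:
        pop[b] = partially_imitate(pop[b], pop[a])

best = min(pop, key=lambda t: tour_length(t, dist))
print("best tour length:", tour_length(best, dist))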
The nonequilibrium quantum many-body problem as a paradigm for extreme data science
NASA Astrophysics Data System (ADS)
Freericks, J. K.; Nikolić, B. K.; Frieder, O.
2014-12-01
Generating big data pervades much of physics. But some problems, which we call extreme data problems, are too large to be treated within big data science. The nonequilibrium quantum many-body problem on a lattice is just such a problem, where the Hilbert space grows exponentially with system size and rapidly becomes too large to fit on any computer (and can be effectively thought of as an infinite-sized data set). Nevertheless, much progress has been made with computational methods on this problem, which serve as a paradigm for how one can approach and attack extreme data problems. In addition, viewing these physics problems from a computer-science perspective leads to new approaches that can be tried to solve more accurately and for longer times. We review a number of these different ideas here.
Learning the Relationship between Galaxy Spectra and Star Formation Histories
NASA Astrophysics Data System (ADS)
Lovell, Christopher; Acquaviva, Viviana; Iyer, Kartheik; Gawiser, Eric
2018-01-01
We explore novel approaches to the problem of predicting a galaxy’s star formation history (SFH) from its Spectral Energy Distribution (SED). Traditional approaches to SED template fitting use constant or exponentially declining SFHs and are known to incur significant bias in the inferred SFHs, which are typically skewed toward younger stellar populations. Machine learning approaches, including tree ensemble methods and convolutional neural networks, would not be affected by the same bias and may work well in recovering unbiased and multi-episodic star formation histories. We use a supervised approach whereby models are trained using synthetic spectra, generated from three state-of-the-art hydrodynamical simulations and including nebular emission. We explore how SED feature maps can be used to highlight areas of the spectrum with the highest predictive power and discuss the limitations of the approach when applied to real data.
Zhai, Peng-Wang; Hu, Yongxiang; Trepte, Charles R; Lucker, Patricia L
2009-02-16
A vector radiative transfer model has been developed for coupled atmosphere and ocean systems based on the Successive Order of Scattering (SOS) method. The emphasis of this study is to make the model easy to use and computationally efficient. The model provides the full Stokes vector at arbitrary locations, which can be conveniently specified by users. It is capable of tracking and labeling the different sources of the photons that are measured, e.g., water-leaving radiances and reflected sky light. It can also separate fluorescence from multiply scattered sunlight. The delta-fit technique has been adopted to reduce the computational time associated with strongly forward-peaked scattering phase matrices. The exponential-linear approximation has been used to reduce the number of discretized vertical layers while maintaining accuracy. This model is developed to serve the remote sensing community in harvesting physical parameters from multi-platform, multi-sensor measurements that target different components of the atmosphere-ocean system.
NASA Astrophysics Data System (ADS)
Someya, Satoshi; Li, Yanrong; Ishii, Keiko; Okamoto, Koji
2011-01-01
This paper proposes a combined method for two-dimensional temperature and velocity measurements in liquid and gas flows using temperature-sensitive particles (TSPs), a pulsed ultraviolet laser, and a high-speed camera. TSPs respond to temperature changes in the flow and can also serve as tracers for the velocity field. The luminescence from the TSPs was recorded at 15,000 frames per second as sequential images for a lifetime-based temperature analysis. These images were also used for the particle image velocimetry calculations. The temperature field was estimated using several images, based on the lifetime method. The decay curves for various temperature conditions fit well to exponential functions, and from these the decay constants at each temperature were obtained. The proposed technique was applied to measure the temperature and velocity fields in natural convection driven by a Marangoni force and buoyancy in a rectangular tank. The accuracy of the temperature measurement of the proposed technique was ±0.35-0.40°C.
NASA Astrophysics Data System (ADS)
Chen, Zhongjing; Zhang, Xing; Pu, Yudong; Yan, Ji; Huang, Tianxuan; Jiang, Wei; Yu, Bo; Chen, Bolun; Tang, Qi; Song, Zifeng; Chen, Jiabin; Zhan, Xiayu; Liu, Zhongjie; Xie, Xufei; Jiang, Shaoen; Liu, Shenye
2018-02-01
The accuracy of the determination of the burn-averaged ion temperature of inertial confinement fusion implosions depends on the unfold process, including deconvolution and convolution methods, and on the function, i.e., the detector response, used to fit the signals measured by neutron time-of-flight (nToF) detectors. The function given by Murphy et al. [Rev. Sci. Instrum. 68(1), 610-613 (1997)] has been widely used at Nova, Omega, and NIF. It has two components, fast and slow, and does not explicitly account for the contribution of scattered neutrons. In this work, a new function based on Murphy's function has been employed to unfold nToF signals. The contribution of scattered neutrons is easily included through the convolution of a Gaussian response function with an exponential decay. The ion temperature is measured by nToF with the new function, and good agreement with the ion temperature determined by the deconvolution method has been achieved.
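The convolution of a Gaussian response with a one-sided exponential decay has a standard closed form (the exponentially modified Gaussian), sketched below with placeholder parameters rather than any detector's calibration.

import numpy as np
from scipy.special import erfc

def emg(t, t0, sigma, tau):
    """Gaussian (width sigma, center t0) convolved with (1/tau)*exp(-t/tau)."""
    arg = (sigma**2 - 2.0 * tau * (t - t0)) / (2.0 * tau**2)
    z = (sigma**2 - tau * (t - t0)) / (np.sqrt(2.0) * sigma * tau)
    return (1.0 / (2.0 * tau)) * np.exp(arg) * erfc(z)

t = np.linspace(0.0, 50.0, 500)   # ns, illustrative time axis
signal = emg(t, t0=10.0, sigma=1.5, tau=5.0)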
Yang, Shuailing; Liu, Xuye; Jin, Yan; Li, Xingfang; Chen, Feng; Zhang, Mingdi; Lin, Songyi
2016-03-16
Water absorbed into the bulk amorphous structure of peptides can have profound effects on their properties. Here, we elucidated water dynamics in Asp-His-Thr-Lys-Glu (DHTKE), an antioxidant peptide derived from egg white ovalbumin, using dynamic vapor sorption (DVS) and low-field nuclear magnetic resonance (LF-NMR). The DVS results indicated that a parallel exponential kinetics model fitted the sorption kinetics data of DHTKE well. Four proton fractions with different mobilities were identified based on the degree of interaction between peptide and water. Water could significantly change the proton distribution and structure of the sample, and the different phases of moisture absorption were reflected in the T2 parameters. In addition, the combined water content was dominant in the hygroscopicity of DHTKE. This study provides an effective real-time monitoring method for water mobility and distribution in synthetic peptides, and this method may have applications in promoting peptide quality assurance.
Calibration of the Minolta SPAD-502 leaf chlorophyll meter.
Markwell, J; Osterman, J C; Mitchell, J L
1995-01-01
Use of leaf meters to provide an instantaneous assessment of leaf chlorophyll has become common, but calibration of meter output into direct units of leaf chlorophyll concentration has been difficult and an understanding of the relationship between these two parameters has remained elusive. We examined the correlation of soybean (Glycine max) and maize (Zea mays L.) leaf chlorophyll concentration, as measured by organic extraction and spectrophotometric analysis, with output (M) of the Minolta SPAD-502 leaf chlorophyll meter. The relationship is non-linear and can be described by the equation chlorophyll (μmol m^-2) = 10^(M^0.265), with r^2 = 0.94. Use of such an exponential equation is theoretically justified and forces a more appropriate fit to a limited data set than polynomial equations. The exact relationship will vary from meter to meter, but will be similar and can be readily determined by empirical methods. The ability to rapidly determine leaf chlorophyll concentrations by use of the calibration method reported herein should be useful in studies on photosynthesis and crop physiology.
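A worked example of the calibration equation, using an arbitrary example reading:

# Convert a SPAD-502 reading M into areal chlorophyll concentration
M = 40.0                     # example meter reading
chl = 10.0 ** (M ** 0.265)   # chlorophyll in umol m^-2
print(f"{chl:.0f} umol m^-2") # 40**0.265 ~ 2.66, so ~455 umol m^-2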
Half-life determination for {sup 108}Ag and {sup 110}Ag
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zahn, Guilherme S.; Genezini, Frederico A.
2014-11-11
In this work, the half-lives of the short-lived silver radionuclides {sup 108}Ag and {sup 110}Ag were measured by following the activity of samples after they were irradiated in the IEA-R1 reactor. The results were fitted using a non-paralyzable dead-time correction to the regular exponential decay, and the individual half-life values obtained were analyzed using both the Normalized Residuals and the Rajeval techniques, in order to reach the most exact and precise final values. To check the validity of the dead-time correction, a second correction method was also employed, in which a long-lived {sup 60}Co radioactive source was counted together with the samples as a live-time chronometer. The final half-life values obtained using the two dead-time correction methods were in good agreement, showing that the correction was properly assessed. The results obtained are partially compatible with the literature values, but with lower uncertainty, and allow a discussion of the values in the latest ENSDF compilations.
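A hedged sketch of this kind of fit, in which a non-paralyzable dead-time correction m = n/(1 + n*tau) is applied to an underlying exponential decay; the dead time, count rates, and noise model are illustrative assumptions, with only the approximate 108Ag half-life taken from the literature.

import numpy as np
from scipy.optimize import curve_fit

TAU = 5.0e-6  # detector dead time (s), assumed known

def observed_rate(t, A0, lam):
    n = A0 * np.exp(-lam * t)      # true exponential decay rate
    return n / (1.0 + n * TAU)     # non-paralyzable dead-time loss

t = np.linspace(0, 600, 61)                       # s
rng = np.random.default_rng(2)
m = observed_rate(t, 5.0e4, np.log(2) / 142.9)    # 108Ag t1/2 ~ 2.38 min
m = rng.normal(m, np.sqrt(m))                     # Poisson-like counting noise

(A0, lam), _ = curve_fit(observed_rate, t, m, p0=(4.0e4, 5e-3))
print(f"fitted t1/2 = {np.log(2)/lam:.1f} s")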
Mathematical modeling of drying of pretreated and untreated pumpkin.
Tunde-Akintunde, T Y; Ogunlakin, G O
2013-08-01
In this study, the drying characteristics of pretreated and untreated pumpkin were examined in a hot-air dryer at air temperatures in the range of 40-80 °C and a constant air velocity of 1.5 m/s. Drying was observed to occur in the falling-rate period, so liquid diffusion is the main mechanism of moisture movement from the internal regions to the product surface. The experimental drying data for the pumpkin fruits were used to fit the Exponential, General exponential, Logarithmic, Page, Midilli-Kucuk and Parabolic models, and the statistical validity of the tested models was determined by non-linear regression analysis. The Parabolic model had the highest R^2 and the lowest χ^2 and RMSE values, indicating that it is the most appropriate model for describing the dehydration behavior of pumpkin.
Tang, Hong; Ruan, Chengjie; Qiu, Tianshuang; Park, Yongwan; Xiao, Shouzhong
2013-08-01
The relationships between the amplitude of the first heart sound (S1) and the rising rate of left ventricular pressure (LVP) reported in previous studies were not consistent. Some researchers believed the relationship was positively linear; others stated it was only positively correlated. To further investigate this relationship, this study simultaneously sampled the external phonocardiogram, electrocardiogram, and intracardiac pressure in the left ventricle in three anesthetized dogs, while invoking wide hemodynamic changes using various doses of epinephrine. The relationship between the maximum amplitude of S1 and the maximum rising rate of LVP, and the relationship between the amplitude of dominant peaks/valleys and the corresponding rising rate of LVP, were examined with linear, quadratic, cubic, and exponential models. The results showed that the relationships are best fit by nonlinear exponential models.
Rainbow net analysis of VAXcluster system availability
NASA Technical Reports Server (NTRS)
Johnson, Allen M., Jr.; Schoenfelder, Michael A.
1991-01-01
A system modeling technique, Rainbow Nets, is used to evaluate the availability and mean-time-to-interrupt of the VAXcluster. These results are compared to the exact analytic results showing that reasonable accuracy is achieved through simulation. The complexity of the Rainbow Net does not increase as the number of processors increases, but remains constant, unlike a Markov model which expands exponentially. The constancy is achieved by using tokens with identity attributes (items) that can have additional attributes associated with them (features) which can exist in multiple states. The time to perform the simulation increases, but this is a polynomial increase rather than exponential. There is no restriction on distributions used for transition firing times, allowing real situations to be modeled more accurately by choosing the distribution which best fits the system performance and eliminating the need for simplifying assumptions.
The Analysis of Fluorescence Decay by a Method of Moments
Isenberg, Irvin; Dyson, Robert D.
1969-01-01
The fluorescence decay of the excited state of most biopolymers, and biopolymer conjugates and complexes, is not, in general, a simple exponential. The method of moments is used to establish a means of analyzing such multi-exponential decays. The method is tested by the use of computer simulated data, assuming that the limiting error is determined by noise generated by a pseudorandom number generator. Multi-exponential systems with relatively closely spaced decay constants may be successfully analyzed. The analyses show the requirements, in terms of precision, that data must meet. The results may be used both as an aid in the design of equipment and in the analysis of data subsequently obtained. PMID:5353139
NASA Astrophysics Data System (ADS)
Mu, G. Y.; Mi, X. Z.; Wang, F.
2018-01-01
High-temperature low-cycle fatigue tests of TC4 and TC11 titanium alloys were carried out under strain control. The cyclic stress-life and strain-life relationships are analyzed. A high-temperature low-cycle fatigue life prediction model for the two titanium alloys is first established using the Manson-Coffin method. The relationship between the number of reversals to failure and the plastic strain range is nonlinear in double-logarithmic coordinates, whereas the Manson-Coffin method assumes a linear relation; the Manson-Coffin method therefore necessarily introduces some prediction error. To address this problem, a new method based on an exponential function is proposed. The results show that the fatigue life of the two titanium alloys can be predicted accurately and effectively by both methods, with prediction accuracy within a ±1.83-times scatter band. The new exponential-function method proves more effective and accurate than the Manson-Coffin method for both alloys, giving life predictions with a smaller standard deviation and a narrower scatter band. The life prediction results of both methods are better for the TC4 titanium alloy than for the TC11 titanium alloy.
NASA Astrophysics Data System (ADS)
Korkiakoski, Mika; Tuovinen, Juha-Pekka; Aurela, Mika; Koskinen, Markku; Minkkinen, Kari; Ojanen, Paavo; Penttilä, Timo; Rainne, Juuso; Laurila, Tuomas; Lohila, Annalea
2017-04-01
We measured methane (CH4) exchange rates with automatic chambers at the forest floor of a nutrient-rich drained peatland in 2011-2013. The fen, located in southern Finland, was drained for forestry in 1969 and the tree stand is now a mixture of Scots pine, Norway spruce, and pubescent birch. Our measurement system consisted of six transparent chambers and stainless steel frames, positioned on a number of different field and moss layer compositions. Gas concentrations were measured with an online cavity ring-down spectroscopy gas analyzer. Fluxes were calculated with both linear and exponential regression. The use of linear regression resulted in systematically smaller CH4 fluxes by 10-45 % as compared to exponential regression. However, the use of exponential regression with small fluxes ( < 2.5 µg CH4 m-2 h-1) typically resulted in anomalously large absolute fluxes and high hour-to-hour deviations. Therefore, we recommend that fluxes are initially calculated with linear regression to determine the threshold for low fluxes and that higher fluxes are then recalculated using exponential regression. The exponential flux was clearly affected by the length of the fitting period when this period was < 190 s, but stabilized with longer periods. Thus, we also recommend the use of a fitting period of several minutes to stabilize the results and decrease the flux detection limit. There were clear seasonal dynamics in the CH4 flux: the forest floor acted as a CH4 sink particularly from early summer until the end of the year, while in late winter the flux was very small and fluctuated around zero. However, the magnitude of fluxes was relatively small throughout the year, ranging mainly from -130 to +100 µg CH4 m-2 h-1. CH4 emission peaks were observed occasionally, mostly in summer during heavy rainfall events. Diurnal variation, showing a lower CH4 uptake rate during the daytime, was observed in all of the chambers, mainly in the summer and late spring, particularly in dry conditions. It was attributed more to changes in wind speed than air or soil temperature, which suggests that physical rather than biological phenomena are responsible for the observed variation. The annual net CH4 exchange varied from -104 ± 30 to -505 ± 39 mg CH4 m-2 yr-1 among the six chambers, with an average of -219 mg CH4 m-2 yr-1 over the 2-year measurement period.
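A minimal sketch of the two flux-calculation options compared in this study, applied to a synthetic chamber concentration time series; the chamber data, the asymptotic exponential form, and the noise level are illustrative, and the scaling from concentration slope to areal flux (chamber volume, area, molar density) is omitted.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def exp_model(t, c_inf, c0, k):
    """Concentration relaxing exponentially toward an asymptote c_inf."""
    return c_inf + (c0 - c_inf) * np.exp(-k * t)

t = np.arange(0, 300, 2.0)               # s since chamber closure
c = exp_model(t, 1.85, 1.95, 0.004)      # ppm, synthetic CH4 uptake
c = c + np.random.default_rng(3).normal(0, 0.002, t.size)

slope_lin = linregress(t, c).slope                     # linear estimate, ppm/s
(c_inf, c0, k), _ = curve_fit(exp_model, t, c, p0=(1.8, 2.0, 0.01))
slope_exp = -k * (c0 - c_inf)                          # dc/dt at closure, ppm/s

print(f"linear: {slope_lin:.2e} ppm/s, exponential: {slope_exp:.2e} ppm/s")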
Dao, Hoang Lan; Aljunid, Syed Abdullah; Maslennikov, Gleb; Kurtsiefer, Christian
2012-08-01
We report on a simple method to prepare optical pulses with an exponentially rising envelope on the time scale of a few ns. The scheme is based on the exponential transfer function of a fast transistor, which generates an exponentially rising envelope that is transferred first onto a radio-frequency carrier and then onto a coherent cw laser beam with an electro-optical phase modulator. The temporally shaped sideband is then extracted with an optical resonator and can be used to efficiently excite a single (87)Rb atom.
Compact exponential product formulas and operator functional derivative
NASA Astrophysics Data System (ADS)
Suzuki, Masuo
1997-02-01
A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians.
NASA Astrophysics Data System (ADS)
Al Mashwood, Abdullah; Predoi-Cross, Adriana; Devi, V. Malathy; Rozario, Hoimonti; Billinghurst, Brant
2018-06-01
Pure CO2 spectra recorded at room temperature and different pressures (0.2-140 Torr) have been analyzed with the help of a fitting routine that takes into account asymmetries arising in the spectral lines due to pressure-induced effects such as line mixing. The fitting procedure used in this study allows one to adjust the ro-vibrational constants for the band rather than fitting individual line parameters. These constrained parameters greatly reduce the measurement uncertainties and allow us to observe the behavior of the weak lines corresponding to high J quantum numbers. We have also calculated line mixing parameters using approximations based on the exponential nature of the energy difference between the ground and upper vibrational states involved in the ro-vibrational band transitions. The calculated results show good agreement when compared with the experimentally determined parameters.
Semenov, Mikhail A; Terkel, Dmitri A
2003-01-01
This paper analyses the convergence of evolutionary algorithms using a technique based on a stochastic Lyapunov function, developed within martingale theory. This technique is used to investigate the convergence of a simple evolutionary algorithm with self-adaptation, which contains two types of parameters: fitness parameters, belonging to the domain of the objective function, and control parameters, responsible for the variation of the fitness parameters. Although both types of parameters mutate randomly and independently, they converge to the "optimum" due to the direct (for fitness parameters) and indirect (for control parameters) selection. We show that the convergence velocity of the evolutionary algorithm with self-adaptation is asymptotically exponential, similar to the velocity of the optimal deterministic algorithm on the class of unimodal functions. Although some martingale inequalities have not been proved analytically, they have been numerically validated with 0.999 confidence using Monte Carlo simulations.
Extraction of t slopes from experimental γp → K+Λ and γp → K+Σ0 cross section data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freese, Adam; Puentes, Daniel; Adhikari, Shankar
We analyze recent K+ meson photoproduction data from the CLAS collaboration for the reactions γp → K+Λ and γp → K+Σ0, fitting measured forward-angle differential cross sections to the form Ae^(Bt). We develop a quantitative scheme for determining the kinematic region where the fit is to be done and, from the extracted t-slope B, determine whether single-Reggeon exchange can explain the production mechanism. We find that, in the region 5 < s < 8.1 GeV^2, production of the K+Λ channel can be explained by single K+ Reggeon exchange, but the K+Σ0 production channel cannot. We verify these conclusions by fitting the data to a differential cross section produced by the interfering sum of two exponential amplitudes.
Extraction of t slopes from experimental γp → K+Λ and γp → K+Σ0 cross section data
NASA Astrophysics Data System (ADS)
Freese, Adam; Puentes, Daniel; Adhikari, Shankar; Badui, Rafael; Guo, Lei; Raue, Brian
2017-10-01
We analyze recent K+ meson photoproduction data from the CLAS collaboration for the reactions γp → K+Λ and γp → K+Σ0, fitting measured forward-angle differential cross sections to the form Ae^(Bt). We develop a quantitative scheme for determining the kinematic region where the fit is to be done and, from the extracted t-slope B, determine whether single-Reggeon exchange can explain the production mechanism. We find that, in the region 5 < s < 8.1 GeV^2, production of the K+Λ channel can be explained by single K+ Reggeon exchange, but the K+Σ0 production channel cannot. We verify these conclusions by fitting the data to a differential cross section produced by the interfering sum of two exponential amplitudes.
NASA Astrophysics Data System (ADS)
Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui
2016-07-01
Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented, based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises collection of drill core data, karst cave stochastic model generation, SLIDE simulation, and bisection-method optimization. Borehole investigations are performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution, whereas the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and the carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
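The two samplers named above are standard; a minimal sketch under assumed parameters follows, where the mean cave length and the empirical carbonatite density f_emp are placeholders, not values from the borehole data.

import numpy as np

rng = np.random.default_rng(4)

def sample_cave_lengths(mean_len, size):
    # Inverse transform: if U ~ Uniform(0,1), then -mean*ln(1-U) ~ Exp(mean)
    u = rng.random(size)
    return -mean_len * np.log(1.0 - u)

def sample_carbonatite(f_emp, x_max, f_max, size):
    # Acceptance-rejection with a uniform proposal on [0, x_max];
    # f_max must bound f_emp from above on that interval.
    out = []
    while len(out) < size:
        x = rng.uniform(0.0, x_max)
        if rng.uniform(0.0, f_max) < f_emp(x):
            out.append(x)
    return np.array(out)

caves = sample_cave_lengths(mean_len=3.2, size=1000)   # m, illustrative
carb = sample_carbonatite(lambda x: np.interp(x, [0, 2, 10], [0.3, 0.15, 0.0]),
                          x_max=10.0, f_max=0.3, size=1000)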
Unfolding of Ubiquitin Studied by Picosecond Time-Resolved Fluorescence of the Tyrosine Residue
Noronha, Melinda; Lima, João C.; Bastos, Margarida; Santos, Helena; Maçanita, António L.
2004-01-01
The photophysics of the single tyrosine in bovine ubiquitin (UBQ) was studied by picosecond time-resolved fluorescence spectroscopy, as a function of pH and along thermal and chemical unfolding, with the following results: First, at room temperature (25°C) and below pH 1.5, native UBQ shows single-exponential decays. From pH 2 to 7, triple-exponential decays were observed, and the three decay times were attributed to the presence of tyrosine, a tyrosine-carboxylate hydrogen-bonded complex, and excited-state tyrosinate. Second, at pH 1.5, the water-exposed tyrosine of either thermally or chemically unfolded UBQ decays as a sum of two exponentials. The double-exponential decays were interpreted and analyzed in terms of excited-state intramolecular electron transfer from the phenol to the amide moiety, occurring in one of the three rotamers of tyrosine in UBQ. The values of the rate constants indicate the presence of different unfolded states and an increase in the mobility of the tyrosine residue during unfolding. Finally, from the pre-exponential coefficients of the fluorescence decays, the unfolding equilibrium constants (KU) were calculated as a function of temperature or denaturant concentration. Despite the presence of different unfolded states, both thermal and chemical unfolding data of UBQ could be fitted to a two-state model. The thermodynamic parameters Tm = 54.6°C, ΔHTm = 56.5 kcal/mol, and ΔCp = 890 cal/(mol K) were determined from the unfolding equilibrium constants calculated accordingly, and compared to values obtained by differential scanning calorimetry, also under the assumption of a two-state transition: Tm = 57.0°C, ΔHm = 51.4 kcal/mol, and ΔCp = 730 cal/(mol K). PMID:15454455
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, H; BC Cancer Agency, Surrey, B.C.; BC Cancer Agency, Vancouver, B.C.
Purpose: The Quantitative Analyses of Normal Tissue Effects in the Clinic (QUANTEC 2010) survey of radiation dose-volume effects on salivary gland function has called for improved understanding of intragland dose sensitivity and the effectiveness of partial sparing in salivary glands. Regional dose susceptibility of sagittally- and coronally-sub-segmented parotid gland has been studied. Specifically, we examine whether individual consideration of sub-segments leads to improved prediction of xerostomia compared with whole-parotid mean dose. Methods: Data from 102 patients treated for head-and-neck cancers at the BC Cancer Agency were used in this study. Whole-mouth stimulated saliva was collected before (baseline), three months, and one year after cessation of radiotherapy. Organ volumes were contoured using treatment planning CT images and sub-segmented into regional portions. Both non-parametric (local regression) and parametric (mean-dose exponential fitting) methods were employed. A bootstrap technique was used for reliability estimation and cross-comparison. Results: Salivary loss is described well by both non-parametric and mean-dose models. Parametric fits suggest a significant distinction in dose response between medial-lateral and anterior-posterior aspects of the parotid (p<0.01). Least-squares and least-median-squares estimates differ significantly (p<0.00001), indicating fits may be skewed by noise or outliers. Salivary recovery exhibits a weakly arched dose response: the highest recovery is seen at intermediate doses. Conclusions: Salivary function loss is strongly dose dependent. In contrast, no useful dose dependence was observed for function recovery. Regional dose dependence was observed, but may have resulted from a bias in dose distributions.
Scheerans, Christian; Derendorf, Hartmut; Kloft, Charlotte
2008-04-01
The area under the plasma concentration-time curve from time zero to infinity (AUC(0-inf)) is generally considered to be the most appropriate measure of total drug exposure for bioavailability/bioequivalence studies of orally administered drugs. However, the lack of a standardised method for identifying the mono-exponential terminal phase of the concentration-time curve causes variability in the estimated AUC(0-inf). The present investigation introduces a simple method, called the two times t(max) method (TTT method), to reliably identify the mono-exponential terminal phase in the case of oral administration. The new method was tested by Monte Carlo simulation in Excel and compared with the adjusted r-squared algorithm (ARS algorithm) frequently used in pharmacokinetic software programs. Statistical diagnostics of three different scenarios, each with 10,000 hypothetical patients, showed that the new method provided unbiased average AUC(0-inf) estimates for orally administered drugs with a monophasic concentration-time curve post maximum concentration. In addition, the TTT method generally provided more precise estimates of AUC(0-inf) than the ARS algorithm. It was concluded that the TTT method is a reasonable tool to be used as a standardised method in pharmacokinetic analysis, especially bioequivalence studies, to reliably identify the mono-exponential terminal phase for orally administered drugs showing a monophasic concentration-time profile.
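A sketch of the TTT rule as described: only concentrations measured at or after two times t_max enter the terminal log-linear regression, and AUC(0-inf) is then the usual AUC(0-tlast) plus the standard extrapolated tail. The concentration-time data are invented for illustration.

import numpy as np
from scipy.stats import linregress

t = np.array([0.5, 1, 2, 3, 4, 6, 8, 12, 24.0])                 # h
c = np.array([1.2, 2.1, 2.6, 2.3, 1.9, 1.3, 0.9, 0.45, 0.06])   # mg/L

t_max = t[np.argmax(c)]
terminal = t >= 2.0 * t_max            # the "two times t_max" window
lam_z = -linregress(t[terminal], np.log(c[terminal])).slope

auc_0_t = np.trapz(c, t)               # trapezoidal AUC(0-tlast)
auc_inf = auc_0_t + c[-1] / lam_z      # tail extrapolation: Clast / lambda_z
print(f"lambda_z = {lam_z:.3f} 1/h, AUC(0-inf) = {auc_inf:.2f} mg*h/L")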
Maji, Kaushik; Kouri, Donald J
2011-03-28
We have developed a new method for solving quantum dynamical scattering problems, using the time-independent Schrödinger equation (TISE), based on a novel method to generalize a "one-way" quantum mechanical wave equation, impose correct boundary conditions, and eliminate exponentially growing closed channel solutions. The approach is readily parallelized to achieve approximate N^2 scaling, where N is the number of coupled equations. The full two-way nature of the TISE is included while propagating the wave function in the scattering variable and the full S-matrix is obtained. The new algorithm is based on a "Modified Cayley" operator splitting approach, generalizing earlier work where the method was applied to the time-dependent Schrödinger equation. All scattering variable propagation approaches to solving the TISE involve solving a Helmholtz-type equation, and for more than one degree of freedom, these are notoriously ill-behaved, due to the unavoidable presence of exponentially growing contributions to the numerical solution. Traditionally, the method used to eliminate exponential growth has posed a major obstacle to the full parallelization of such propagation algorithms. We stabilize by using the Feshbach projection operator technique to remove all the nonphysical exponentially growing closed channels, while retaining all of the propagating open channel components, as well as exponentially decaying closed channel components.
Zheng, Li; Silliman, Stephen E.
2000-01-01
A modification of previously published solutions regarding the spatial variation of hydraulic heads is discussed whereby the semivariogram of increments of head residuals (termed head residual increments HRIs) are related to the variance and integral scale of the transmissivity field. A first‐order solution is developed for the case of a transmissivity field which is isotropic and whose second‐order behavior can be characterized by an exponential covariance structure. The estimates of the variance σY2 and the integral scale λ of the log transmissivity field are then obtained via fitting a theoretical semivariogram for the HRI to its sample semivariogram. This approach is applied to head data sampled from a series of two‐dimensional, simulated aquifers with isotropic, exponential covariance structures and varying degrees of heterogeneity (σY2 = 0.25, 0.5, 1.0, 2.0, and 5.0). The results show that this method provided reliable estimates for both λ and σY2 in aquifers with the value of σY2 up to 2.0, but the errors in those estimates were higher for σY2 equal to 5.0. It is also demonstrated through numerical experiments and theoretical arguments that the head residual increments will provide a sample semivariogram with a lower variance than will the use of the head residuals without calculation of increments.
Broadband Spectral Modeling of the Extreme Gigahertz-peaked Spectrum Radio Source PKS B0008-421
NASA Astrophysics Data System (ADS)
Callingham, J. R.; Gaensler, B. M.; Ekers, R. D.; Tingay, S. J.; Wayth, R. B.; Morgan, J.; Bernardi, G.; Bell, M. E.; Bhat, R.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Deshpande, A. A.; Ewall-Wice, A.; Feng, L.; Greenhill, L. J.; Hazelton, B. J.; Hindson, L.; Hurley-Walker, N.; Jacobs, D. C.; Johnston-Hollitt, M.; Kaplan, D. L.; Kudrayvtseva, N.; Lenc, E.; Lonsdale, C. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Oberoi, D.; Offringa, A. R.; Ord, S. M.; Pindor, B.; Prabu, T.; Procopio, P.; Riding, J.; Srivani, K. S.; Subrahmanyan, R.; Udaya Shankar, N.; Webster, R. L.; Williams, A.; Williams, C. L.
2015-08-01
We present broadband observations and spectral modeling of PKS B0008-421 and identify it as an extreme gigahertz-peaked spectrum (GPS) source. PKS B0008-421 is characterized by the steepest known spectral slope below the turnover, close to the theoretical limit of synchrotron self-absorption, and the smallest known spectral width of any GPS source. Spectral coverage of the source spans from 0.118 to 22 GHz, which includes data from the Murchison Widefield Array and the wide bandpass receivers on the Australia Telescope Compact Array. We have implemented a Bayesian inference model fitting routine to fit the data with internal free-free absorption (FFA), single- and double-component FFA in an external homogeneous medium, FFA in an external inhomogeneous medium, or single- and double-component synchrotron self-absorption models, all with and without a high-frequency exponential break. We find that without the inclusion of a high-frequency break these models cannot accurately fit the data, with significant deviations above and below the peak in the radio spectrum. The addition of a high-frequency break provides acceptable spectral fits for the inhomogeneous FFA and double-component synchrotron self-absorption models, with the inhomogeneous FFA model statistically favored. The requirement of a high-frequency spectral break implies that the source has ceased injecting fresh particles. Additional support for the inhomogeneous FFA model as being responsible for the turnover in the spectrum is given by the consistency between the physical parameters derived from the model fit and the implications of the exponential spectral break, such as the necessity of the source being surrounded by a dense ambient medium to maintain the peak frequency near the gigahertz region. This implies that PKS B0008-421 should display an internal H I column density greater than 10^20 cm^-2. The discovery of PKS B0008-421 suggests that the next generation of low radio frequency surveys could reveal a large population of GPS sources that have ceased activity, and that a portion of the ultra-steep-spectrum source population could be composed of these GPS sources in a relic phase.
Rate laws of the self-induced aggregation kinetics of Brownian particles
NASA Astrophysics Data System (ADS)
Mondal, Shrabani; Sen, Monoj Kumar; Baura, Alendu; Bag, Bidhan Chandra
2016-03-01
In this paper we have studied the self-induced aggregation kinetics of Brownian particles in the presence of both multiplicative and additive noises. In addition to the drift due to the self-aggregation process, the environment may induce a drift term in the presence of a multiplicative noise, and there is then an interplay between the two drift terms. This interplay may qualitatively account for the appearance of the different laws of the aggregation process. At low strength of white multiplicative noise, the cluster number decreases as a Gaussian function of time. If the noise strength becomes appreciably large, the variation of cluster number with time is fitted well by a mono-exponentially decaying function of time. For the additive-noise-driven case, the decrease of cluster number can be described by a power law, while for the colored-multiplicative-noise-driven process the cluster number decays multi-exponentially. We have also explored how the rate constant (in the mono-exponential decay case) depends on the strength of interference of the noises and their intensity, and how the structure factor at long times depends on the strength of the cross correlation (CC) between the additive and the multiplicative noises.
A Multi-spacecraft View of a Giant Filament Eruption during 2009 September 26/27
NASA Astrophysics Data System (ADS)
Gosain, Sanjay; Schmieder, Brigitte; Artzner, Guy; Bogachev, Sergei; Török, Tibor
2012-12-01
We analyze multi-spacecraft observations of a giant filament eruption that occurred during 2009 September 26 and 27. The filament eruption was associated with a relatively slow coronal mass ejection. The filament consisted of a large and a small part, and both parts erupted nearly simultaneously. Here we focus on the eruption associated with the larger part of the filament. The STEREO satellites were separated by about 117° during this event, so we additionally used SoHO/EIT and CORONAS/TESIS observations as a third eye (Earth view) to aid our measurements. We measure the plane-of-sky trajectory of the filament as seen from STEREO-A and TESIS viewpoints. Using a simple trigonometric relation, we then use these measurements to estimate the true direction of propagation of the filament which allows us to derive the true R/R ⊙-time profile of the filament apex. Furthermore, we develop a new tomographic method that can potentially provide a more robust three-dimensional (3D) reconstruction by exploiting multiple simultaneous views. We apply this method also to investigate the 3D evolution of the top part of filament. We expect this method to be useful when SDO and STEREO observations are combined. We then analyze the kinematics of the eruptive filament during its rapid acceleration phase by fitting different functional forms to the height-time data derived from the two methods. We find that for both methods an exponential function fits the rise profile of the filament slightly better than parabolic or cubic functions. Finally, we confront these results with the predictions of theoretical eruption models.
OMFIT Tokamak Profile Data Fitting and Physics Analysis
Logan, N. C.; Grierson, B. A.; Haskey, S. R.; ...
2018-01-22
Here, One Modeling Framework for Integrated Tasks (OMFIT) has been used to develop a consistent tool for interfacing with, mapping, visualizing, and fitting tokamak profile measurements. OMFIT is used to integrate the many diverse diagnostics on multiple tokamak devices into a regular data structure, consistently applying spatial and temporal treatments to each channel of data. Tokamak data are fundamentally time dependent and are treated so from the start, with front-loaded and logic-based manipulations such as filtering based on the identification of edge-localized modes (ELMs) that commonly scatter data. Fitting is general in its approach, and tailorable in its application in order to address physics constraints and handle the multiple spatial and temporal scales involved. Although community-standard one-dimensional fitting is supported, including scale-length fitting and fitting polynomial-exponential blends to capture the H-mode pedestal, OMFITprofiles includes two-dimensional (2-D) fitting using bivariate splines or radial basis functions. These 2-D fits produce regular evolutions in time, removing jitter that has historically been smoothed ad hoc in transport applications. Profiles interface directly with a wide variety of models within the OMFIT framework, providing the inputs for TRANSP, kinetic-EFIT 2-D equilibrium, and GPEC three-dimensional equilibrium calculations. The OMFITprofiles tool's rapid and comprehensive analysis of dynamic plasma profiles thus provides the critical link between raw tokamak data and the simulations necessary for physics understanding.
Determination of time of death in forensic science via a 3-D whole body heat transfer model.
Bartgis, Catherine; LeBrun, Alexander M; Ma, Ronghui; Zhu, Liang
2016-12-01
This study is focused on developing a whole body heat transfer model to accurately simulate temperature decay in a body postmortem. The initial steady-state temperature field is simulated first, and the calculated weighted average body temperature is used to determine the overall heat transfer coefficient at the skin surface, based on thermal equilibrium before death. The transient temperature field postmortem is then simulated using the same boundary condition, and the temperature decay curves at several body locations are generated for a time frame of 24 h. For practical purposes, curve fitting techniques are used to replace the simulations with a proposed exponential formula with an initial time delay. It is shown that the obtained temperature field in the human body agrees very well with that in the literature. The proposed exponential formula provides an excellent fit, with an R² value larger than 0.998. For the brain and internal organ sites, the initial time delay varies from 1.6 to 2.9 h, during which the temperature at the measuring site does not change significantly from its original value. The curve-fitted time constant provides a measurement window after death of between 8 h and 31 h if the brain site is used, while it increases by 60-95% at the internal organ site. The time constant is larger when the body is exposed to colder air, since a person usually wears more clothing when it is cold outside to keep the body warm and comfortable. We conclude that a one-size-fits-all approach would lead to incorrect estimation of the time of death and that it is crucial to generate a database of cooling curves taking into consideration all the important factors, such as body size and shape, environmental conditions, etc., leading to accurate determination of the time of death.
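The following is a minimal sketch, under assumed parameter names, of the kind of delayed-exponential formula described above: temperature held at its initial value until a delay t_d, followed by exponential decay toward the ambient value.

```python
# Sketch of an exponential cooling model with an initial time delay.
# All parameter values are illustrative, not forensic reference data.
import numpy as np
from scipy.optimize import curve_fit

def delayed_exponential(t, T0, T_env, t_d, tau):
    """Site temperature: flat at T0 until t_d, then exponential decay."""
    t = np.asarray(t, dtype=float)
    decay = T_env + (T0 - T_env) * np.exp(-(t - t_d) / tau)
    return np.where(t <= t_d, T0, decay)

t = np.linspace(0.0, 24.0, 49)                   # hours postmortem
T = delayed_exponential(t, 37.0, 20.0, 2.0, 8.0)
T += np.random.default_rng(1).normal(0.0, 0.05, t.size)

popt, _ = curve_fit(delayed_exponential, t, T, p0=(37.0, 20.0, 1.0, 6.0))
print("fitted (T0, T_env, t_d, tau):", np.round(popt, 2))
```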
Theoretical analysis of exponential transversal method of lines for the diffusion equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salazar, A.; Raydan, M.; Campo, A.
1996-12-31
Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. This new method is inspired by the Method of Lines (MOL), with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method presents a very reduced truncation error in time, and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.
A Sub-Sampling Approach for Data Acquisition in Gamma Ray Emission Tomography
NASA Astrophysics Data System (ADS)
Fysikopoulos, Eleftherios; Kopsinis, Yannis; Georgiou, Maria; Loudos, George
2016-06-01
State-of-the-art data acquisition systems for small animal imaging gamma ray detectors often rely on free-running Analog to Digital Converters (ADCs) and high density Field Programmable Gate Array (FPGA) devices for digital signal processing. In this work, a sub-sampling acquisition approach is proposed which exploits a priori information regarding the shape of the obtained detector pulses. The output pulse shape depends on the response of the scintillation crystal, the photodetector's properties, and the amplifier/shaper operation. Using these known characteristics of the detector pulses prior to digitization, one can model the voltage pulse derived from the shaper (a low-pass filter, last in the front-end electronics chain) in order to reduce the required ADC sampling rate. Pulse shape estimation is then feasible by fitting with a small number of measurements. In particular, the proposed sub-sampling acquisition approach relies on a bi-exponential modeling of the pulse shape. We show that the properties of the pulse that are relevant for Single Photon Emission Computed Tomography (SPECT) event detection (i.e., position and energy) can be calculated by collecting just a small fraction of the number of samples usually collected in data acquisition systems used so far. Compared to the standard digitization process, the proposed sub-sampling approach allows the use of free-running ADCs with the sampling rate reduced by a factor of 5. Two small detectors consisting of Cerium-doped Gadolinium Aluminum Gallium Garnet (Gd3Al2Ga3O12:Ce or GAGG:Ce) pixelated arrays (array elements: 2 × 2 × 5 mm^3 and 1 × 1 × 10 mm^3, respectively) coupled to a Position Sensitive Photomultiplier Tube (PSPMT) were used for experimental evaluation. The two detectors were used to obtain raw images and energy histograms under 140 keV and 661.7 keV irradiation, respectively. The sub-sampling acquisition technique (10 MHz sampling rate) was compared with a standard acquisition method (52 MHz sampling rate) in terms of energy resolution and image signal-to-noise ratio for both gamma ray energies. The Levenberg-Marquardt (LM) non-linear least-squares algorithm was used in post-processing to fit the acquired data with the proposed model. The results showed that the analog pulses prior to digitization are estimated with high accuracy after fitting with the bi-exponential model.
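A hedged sketch of the core idea, with made-up amplitudes and time constants rather than detector-calibrated values: fit a bi-exponential pulse model to a sparsely sampled waveform with the Levenberg-Marquardt algorithm and take the pulse integral as an energy proxy.

```python
# Sketch: bi-exponential pulse model fitted by Levenberg-Marquardt from
# sub-sampled points. Times are in microseconds; values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def biexp_pulse(t, A, tau_rise, tau_decay, t0):
    """Shaper output modeled as a difference of exponentials after t0."""
    s = np.clip(t - t0, 0.0, None)
    return A * (np.exp(-s / tau_decay) - np.exp(-s / tau_rise))

dt = 0.1                                         # 10 MHz sampling
t = np.arange(0.0, 5.0, dt)
v = biexp_pulse(t, 1.0, 0.05, 0.8, 0.5)
v += np.random.default_rng(2).normal(0.0, 0.005, t.size)

popt, _ = curve_fit(biexp_pulse, t, v, p0=(1.0, 0.1, 1.0, 0.4),
                    method="lm")                 # Levenberg-Marquardt
energy_proxy = np.sum(biexp_pulse(t, *popt)) * dt   # pulse integral
print("fitted parameters:", np.round(popt, 3),
      "integral:", round(energy_proxy, 3))
```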
On the origin of non-exponential fluorescence decays in enzyme-ligand complex
NASA Astrophysics Data System (ADS)
Wlodarczyk, Jakub; Kierdaszuk, Borys
2004-05-01
Complex fluorescence decays have usually been analyzed with the aid of a multi-exponential model, but the interpretation of the individual exponential terms has not been adequately characterized. In such cases the intensity decays have also been analyzed in terms of a continuous lifetime distribution, as a consequence of the interaction of the fluorophore with its environment, conformational heterogeneity, or their dynamical nature. We show that the non-exponential fluorescence decay of enzyme-ligand complexes may result from time-dependent energy transport. The latter, in our opinion, may be accounted for by electron transport from the protein tyrosines to their neighboring residues. We introduce a time-dependent hopping rate of the form v(t) ∝ (a + bt)^(-1). This in turn leads to a luminescence decay function of the form I(t) = I0 exp(-t/τ1)(1 + t/(γτ2))^(-γ). Such a decay function provides good fits to highly complex fluorescence decays. The power-like tail implies a time hierarchy in the energy migration process due to the hierarchical energy-level structure. Moreover, such a power-like term is a manifestation of the so-called Tsallis nonextensive statistics and is suitable for the description of systems with long-range interactions, memory effects, and fluctuations of the characteristic fluorescence lifetime. The proposed decay function was applied in the analysis of fluorescence decays of a tyrosine protein, i.e., the enzyme purine nucleoside phosphorylase from E. coli in a complex with formycin A (an inhibitor) and orthophosphate (a co-substrate).
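For illustration only, the quoted decay function is easy to evaluate numerically; the parameter values below are arbitrary and simply show the crossover from exponential to power-like behavior.

```python
# Numerical illustration of
# I(t) = I0 * exp(-t/tau1) * (1 + t/(gamma*tau2))**(-gamma).
import numpy as np

def decay(t, I0, tau1, tau2, gamma):
    return I0 * np.exp(-t / tau1) * (1.0 + t / (gamma * tau2)) ** (-gamma)

t = np.logspace(-2, 2, 9)                        # time in ns, log-spaced
print(np.round(decay(t, 1.0, 50.0, 0.5, 1.5), 5))
```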
Compact exponential product formulas and operator functional derivative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki, M.
1997-02-01
A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin–Specht–Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians. © 1997 American Institute of Physics.
How bootstrap can help in forecasting time series with more than one seasonal pattern
NASA Astrophysics Data System (ADS)
Cordeiro, Clara; Neves, M. Manuela
2012-09-01
The search for the future is an appealing challenge in time series analysis. The diversity of forecasting methodologies is inevitable and still expanding. Exponential smoothing methods are the launch platform for modelling and forecasting in time series analysis. Recently this methodology has been combined with bootstrapping, revealing good performance. The Boot.EXPOS algorithm, which combines exponential smoothing and bootstrap methodologies, has shown promising results for forecasting time series with one seasonal pattern. For the case of more than one seasonal pattern, double seasonal Holt-Winters methods and exponential smoothing methods were developed. A new challenge was to combine these seasonal methods with the bootstrap and carry over a resampling scheme similar to that used in the Boot.EXPOS procedure. The performance of this partnership is illustrated for some well-known data sets available in software.
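The sketch below is an assumption-laden reconstruction of the Boot.EXPOS idea (not the authors' implementation): fit an exponential smoothing model, bootstrap its residuals, refit on each resampled series, and aggregate the forecasts.

```python
# Sketch: exponential smoothing combined with a residual bootstrap.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(3)
t = np.arange(120)
y = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)

fit = ExponentialSmoothing(y, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
resid = y - fit.fittedvalues

H, B = 12, 100
forecasts = np.empty((B, H))
for b in range(B):
    # Resample residuals onto the fitted series, refit, and forecast.
    y_star = fit.fittedvalues + rng.choice(resid, size=resid.size, replace=True)
    fit_b = ExponentialSmoothing(y_star, trend="add", seasonal="add",
                                 seasonal_periods=12).fit()
    forecasts[b] = fit_b.forecast(H)

print("bootstrap mean forecast:", np.round(forecasts.mean(axis=0), 2))
```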
Emitting electron spectra and acceleration processes in the jet of PKS 0447-439
NASA Astrophysics Data System (ADS)
Zhou, Yao; Yan, Dahai; Dai, Benzhong; Zhang, Li
2014-02-01
We investigate the electron energy distributions (EEDs) and the corresponding acceleration processes in the jet of PKS 0447-439, and estimate its redshift, through modeling its observed spectral energy distribution (SED) in the frame of a one-zone synchrotron self-Compton (SSC) model. Three EEDs formed in different acceleration scenarios are assumed: the power law with exponential cut-off (PLC) EED (shock-acceleration scenario, or the case of the EED approaching equilibrium in the stochastic-acceleration scenario), the log-parabolic (LP) EED (stochastic-acceleration scenario with acceleration dominating), and the broken power-law (BPL) EED (no-acceleration scenario). The corresponding fluxes of both synchrotron and SSC emission are then calculated. The model is applied to PKS 0447-439, and the modeled SEDs are compared to the observed SED of this object by using the Markov Chain Monte Carlo method. The results show that the PLC model fails to fit the observed SED well, while the LP and BPL models give comparably good fits for the observed SED. The results indicate that it is possible that a stochastic acceleration process acts in the emitting region of PKS 0447-439 and the EED is far from equilibrium (acceleration dominating), or that no acceleration process works in the emitting region. The redshift of PKS 0447-439 is also estimated in our fitting: z = 0.16 ± 0.05 for the LP case and z = 0.17 ± 0.04 for the BPL case.
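For reference, the three assumed EED shapes can be written down directly; the notation and parameter names below are illustrative (N0 normalization, gamma the electron Lorentz factor), not the paper's exact parameterization.

```python
# Illustrative definitions of the three electron energy distributions.
import numpy as np

def plc(gamma, N0, p, gamma_c):
    """Power law with exponential cut-off."""
    return N0 * gamma ** (-p) * np.exp(-gamma / gamma_c)

def log_parabola(gamma, N0, gamma0, a, b):
    """Log-parabolic EED with curvature b around reference energy gamma0."""
    return N0 * (gamma / gamma0) ** (-(a + b * np.log10(gamma / gamma0)))

def bpl(gamma, N0, p1, p2, gamma_b):
    """Broken power law with break at gamma_b."""
    return np.where(gamma < gamma_b,
                    N0 * gamma ** (-p1),
                    N0 * gamma_b ** (p2 - p1) * gamma ** (-p2))

gamma = np.logspace(2, 6, 5)
print(plc(gamma, 1.0, 2.2, 1.0e5))
```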
NASA Astrophysics Data System (ADS)
Kapanen, Mika; Tenhunen, Mikko; Hämäläinen, Tuomo; Sipilä, Petri; Parkkinen, Ritva; Järvinen, Hannu
2006-07-01
Quality control (QC) data of radiotherapy linear accelerators, collected by Helsinki University Central Hospital between the years 2000 and 2004, were analysed. The goal was to provide information for the evaluation and elaboration of QC of accelerator outputs and to propose a method for QC data analysis. Short- and long-term drifts in outputs were quantified by fitting empirical mathematical models to the QC measurements. Normally, long-term drifts were well (≤1%) modelled by either a straight line or a single-exponential function. A drift of 2% occurred in 18 ± 12 months. The shortest drift times of only 2-3 months were observed for some new accelerators just after commissioning, but they stabilized during the first 2-3 years. The short-term reproducibility and the long-term stability of local constancy checks, carried out with a sealed plane-parallel ion chamber, were also estimated by fitting empirical models to the QC measurements. The reproducibility was 0.2-0.5%, depending on the positioning practice of a device. Long-term instabilities of about 0.3%/month were observed for some checking devices. The reproducibility of local absorbed dose measurements was estimated to be about 0.5%. The proposed empirical model fitting of QC data facilitates the recognition of erroneous QC measurements and abnormal output behaviour caused by malfunctions, offering a tool to improve dose control.
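A minimal sketch of the drift analysis, with synthetic output data: fit both a straight line and a single-exponential model and read off the time to reach a 2% drift.

```python
# Sketch: linear versus single-exponential drift models for accelerator output.
import numpy as np
from scipy.optimize import curve_fit

def linear(t, a, b):
    return a + b * t

def single_exp(t, a, b, tau):
    return a + b * (1.0 - np.exp(-t / tau))

t = np.arange(0.0, 36.0)                         # months
out = single_exp(t, 100.0, 3.0, 20.0)            # output in percent
out += np.random.default_rng(4).normal(0.0, 0.2, t.size)

for model, p0 in [(linear, (100.0, 0.1)), (single_exp, (100.0, 2.0, 10.0))]:
    popt, _ = curve_fit(model, t, out, p0=p0, maxfev=10000)
    drift = model(t, *popt) - model(0.0, *popt)
    hit = np.nonzero(drift >= 2.0)[0]
    when = t[hit[0]] if hit.size else None
    print(f"{model.__name__}: time to 2% drift = {when} months")
```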
NASA Astrophysics Data System (ADS)
Vogelsang, R.; Hoheisel, C.
1987-02-01
Molecular-dynamics (MD) calculations are reported for three thermodynamic states of a Lennard-Jones fluid. Systems of 2048 particles and 10^5 integration steps were used. The transverse current autocorrelation function, Ct(k,t), has been determined for wave vectors in the range 0.5 < ||k||σ < 1.5. Ct(k,t) was fitted by hydrodynamic-type functions. The fits returned k-dependent decay times and shear viscosities which showed a systematic behavior as a function of k. Extrapolation to the hydrodynamic region at k=0 gave shear viscosity coefficients in good agreement with direct Green-Kubo results obtained in previous work. The two-exponential model fit for the memory function proposed by other authors does not provide a reasonable description of the MD results, as the fit parameters show no systematic wave-vector dependence, although the Ct(k,t) functions are somewhat better fitted. Similarly, the semiempirical interpolation formula for the decay time based on the viscoelastic concept proposed by Akcasu and Daniels fails to reproduce the correct k dependence for the wavelength range investigated herein.
Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook
2015-01-01
Discrete survival data are routinely encountered in many fields of study including behavior science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model by treating discrete survival times as continuous survival times, and of the model comparison criteria, AIC and BIC, in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in detail to further demonstrate the proposed methodology. PMID:25772374
Identical superdeformed bands in yrast 152Dy: a systematic description
NASA Astrophysics Data System (ADS)
Dadwal, Anshul; Mittal, H. M.
2018-06-01
The nuclear softness (NS) formula, the semiclassical particle rotor model (PRM) and the modified exponential model with pairing attenuation are used for a systematic study of the identical superdeformed bands in the A ∼ 150 mass region. These formulae/models are employed to study the superdeformed bands identical to the yrast SD band 152Dy(1): {152Dy(1), 151Tb(2)}, {152Dy(1), 151Dy(4)} (midpoint), {152Dy(1), 153Dy(2)} (quarter point), and {152Dy(1), 153Dy(3)} (three-quarter point). The parameters, namely the baseline moment of inertia (I0), alignment (i) and effective pairing parameter (Δ0), are calculated using least-squares fitting of the γ-ray transition energies in the NS formula, semiclassical PRM and modified exponential model with pairing attenuation, respectively. The calculated parameters are found to depend sensitively on the proposed baseline spin (I0).
Harms, Floor A; de Boon, Wadim M I; Balestra, Gianmarco M; Bodmer, Sander I A; Johannes, Tanja; Stolker, Robert J; Mik, Egbert G
2011-10-01
Mitochondrial oxygen tension can be measured in vivo by means of oxygen-dependent quenching of delayed fluorescence of protoporphyrin IX (PpIX). Here we demonstrate that delayed fluorescence is readily observed from skin in rat and man after topical application of the PpIX precursor 5-aminolevulinic acid (ALA). Delayed fluorescence lifetimes respond to changes in inspired oxygen fraction and blood supply. The signals contain lifetime distributions, and the fitting of rectangular distributions to the data appears more adequate than mono-exponential fitting. The use of topically applied ALA for delayed fluorescence lifetime measurements might pave the way for clinical use of this technique.
libprofit: Image creation from luminosity profiles
NASA Astrophysics Data System (ADS)
Robotham, A. S. G.; Taranu, D.; Tobar, R.
2016-12-01
libprofit is a C++ library for image creation based on different luminosity profiles. It offers fast and accurate two-dimensional integration for a useful number of profiles, including Sersic, Core-Sersic, broken-exponential, Ferrer, Moffat, empirical King, point-source and sky, with a simple mechanism for adding new profiles. libprofit provides a utility to read the model and profile parameters from the command-line and generate the corresponding image. It can output the resulting image as text values, a binary stream, or as a simple FITS file. It also provides a shared library exposing an API that can be used by any third-party application. R and Python interfaces are available: ProFit (ascl:1612.004) and PyProfit (ascl:1612.005).
Exponential propagators for the Schrödinger equation with a time-dependent potential.
Bader, Philipp; Blanes, Sergio; Kopylov, Nikita
2018-06-28
We consider the numerical integration of the Schrödinger equation with a time-dependent Hamiltonian given as the sum of the kinetic energy and a time-dependent potential. Commutator-free (CF) propagators are exponential propagators that have been shown to be highly efficient for general time-dependent Hamiltonians. We propose new CF propagators that are tailored for Hamiltonians of the said structure, showing considerably improved performance. We obtain new fourth- and sixth-order CF propagators as well as a novel sixth-order propagator that incorporates a double commutator depending only on coordinates, so this term can be considered cost-free. The algorithms require the computation of the action of exponentials on a vector, similar to the well-known exponential midpoint propagator, and this is carried out using the Lanczos method. We illustrate the performance of the new methods on several numerical examples.
Exponential convergence through linear finite element discretization of stratified subdomains
NASA Astrophysics Data System (ADS)
Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali
2016-10-01
Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.
A Field Study of Pixel-Scale Variability of Raindrop Size Distribution in the MidAtlantic Region
NASA Technical Reports Server (NTRS)
Tokay, Ali; D'adderio, Leo Pio; Wolff, David P.; Petersen, Walter A.
2016-01-01
The spatial variability of parameters of the raindrop size distribution and its derivatives is investigated through a field study where collocated Particle Size and Velocity (Parsivel2) and two-dimensional video disdrometers were operated at six sites at Wallops Flight Facility, Virginia, from December 2013 to March 2014. The three-parameter exponential function was employed to determine the spatial variability across the study domain, where the maximum separation distance was 2.3 km. The nugget parameter of the exponential function was set to 0.99, and the correlation distance d0 and shape parameter s0 were retrieved by minimizing the root-mean-square error after fitting to the correlations of physical parameters. Fits were very good for almost all 15 physical parameters. The retrieved d0 and s0 were about 4.5 km and 1.1, respectively, for rain rate (RR) when all 12 disdrometers were reporting rainfall with a rain-rate threshold of 0.1 mm h^-1 for 1-min averages. The d0 decreased noticeably when one or more disdrometers were required to report rain. The d0 was considerably different for a number of parameters (e.g., mass-weighted diameter) but was about the same for the other parameters (e.g., RR) when the rainfall threshold was reset to 12 and 18 dBZ for Ka- and Ku-band reflectivity, respectively, following the expected Global Precipitation Measurement mission's spaceborne radar minimum detectable signals. The reduction of the database through elimination of a site did not alter d0 as long as the fit was adequate. The correlations of 5-min rain accumulations were lower when disdrometer observations were simulated for a rain gauge at different bucket sizes.
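A sketch of the fitting step, with invented correlation values: the three-parameter exponential model rho(d) = n0 * exp(-(d/d0)**s0), nugget n0 fixed at 0.99, and (d0, s0) found by minimizing the RMS error.

```python
# Sketch: fit the three-parameter exponential correlation model.
import numpy as np
from scipy.optimize import minimize

def model(d, d0, s0, n0=0.99):
    return n0 * np.exp(-(d / d0) ** s0)

d = np.array([0.1, 0.4, 0.8, 1.2, 1.6, 2.0, 2.3])     # separation, km
rho = model(d, 4.5, 1.1) + np.random.default_rng(5).normal(0.0, 0.01, d.size)

def rmse(params):
    d0, s0 = params
    return np.sqrt(np.mean((rho - model(d, d0, s0)) ** 2))

res = minimize(rmse, x0=(2.0, 1.0), method="Nelder-Mead")
print("fitted d0 = %.2f km, s0 = %.2f" % tuple(res.x))
```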
NASA Astrophysics Data System (ADS)
Segall, P.
2017-12-01
Distinguishing magma chamber pressurization from relaxation of a viscoelastic aureole surrounding the chamber based on geodetic measurements has remained challenging. Elastic models with mass inflow proportional to the pressure difference between the chamber and a deep reservoir predict exponentially decaying flux. For a spherical chamber surrounded by a Maxwell viscoelastic shell with pressure-dependent recharge, the surface deformation is the sum of two exponentials (Segall, 2016). GPS displacements following eruptions of Grímsvötn, Iceland in 2004 and 2011 exhibit rapid post-eruptive inflation (time scale of 0.1 yr), followed by inflation with a much longer time constant. Markov Chain Monte Carlo inversion with the viscoelastic model shows the GPS time series can be fit with a viscosity of 2×10^16 Pa s and a relatively incompressible magma, B = β_c/(β_m + β_c) > 0.6, where β_m and β_c are chamber and magma compressibility. The latter appears to conflict with the ratio of erupted volume to geodetically inferred source volume change, r_v ≈ 10, obtained for the best-fitting spherical (Mogi) source (Hreinsdóttir, 2014). Since r_v = 1/B, this implies a relatively compressible melt, B ≈ 0.1. Reexamination of the co-eruptive GPS and tilt data with the more general ellipsoidal model of Cervelli (2013) reveals that the best-fitting sources are oblate (b/a ≈ 3), deeper, and with larger volume changes, r_v ≈ 3, relative to spherical models. Oblate magma chambers are consistent with seismic tomography. FEM calculations including free surface effects lead to even larger co-eruptive volume changes, smaller r_v and hence larger B. I conclude that the data are consistent with rapid post-eruptive inflation driven by viscoelastic relaxation with a relatively incompressible magma, although other interpretations will be discussed.
Temperature Responses of Soil Organic Matter Components With Varying Recalcitrance
NASA Astrophysics Data System (ADS)
Simpson, M. J.; Feng, X.
2007-12-01
The response of soil organic matter (SOM) to global warming remains unclear, partly due to the chemical heterogeneity of SOM composition. In this study, the decomposition of SOM from two grassland soils was investigated in a one-year laboratory incubation at six different temperatures. SOM was separated into solvent-extractable compounds, suberin- and cutin-derived compounds, and lignin monomers by solvent extraction, base hydrolysis, and CuO oxidation, respectively. These SOM components had distinct chemical structures and recalcitrance, and their decomposition was fitted by a two-pool exponential decay model. The stability of SOM components was assessed using geochemical parameters and kinetic parameters derived from the model fitting. Lignin monomers exhibited much lower decay rates than solvent-extractable compounds, and a relatively low percentage of lignin monomers partitioned into the labile SOM pool, which confirmed the generally accepted recalcitrance of lignin compounds. Suberin- and cutin-derived compounds were poorly fitted by the exponential decay model, and their recalcitrance was shown by the geochemical degradation parameter, which stabilized during the incubation. The aliphatic components of suberin degraded faster than cutin-derived compounds, suggesting that cutin-derived compounds in the soil may be at a higher stage of degradation than suberin-derived compounds. The temperature sensitivity of decomposition, expressed as Q10, was derived from the relationship between temperature and SOM decay rates. SOM components exhibited varying temperature responses, and the decomposition of the recalcitrant lignin monomers had much higher Q10 values than soil respiration or the decomposition of solvent-extractable compounds. Our study shows that the decomposition of recalcitrant SOM is highly sensitive to temperature, more so than bulk soil mineralization. This observation suggests a potential acceleration in the degradation of the recalcitrant SOM pool with global warming.
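A hedged sketch of the two-pool decay model and a Q10 estimate; pool fractions, rate constants, and temperatures are synthetic placeholders, not the study's measured values.

```python
# Sketch: two-pool exponential decay, C(t) = f*exp(-k1*t) + (1-f)*exp(-k2*t),
# with Q10 from labile-pool rates fitted at two incubation temperatures.
import numpy as np
from scipy.optimize import curve_fit

def two_pool(t, f, k1, k2):
    return f * np.exp(-k1 * t) + (1.0 - f) * np.exp(-k2 * t)

t = np.linspace(0.0, 365.0, 25)                  # incubation days
rng = np.random.default_rng(6)

def fit_labile_rate(k1_true):
    frac = two_pool(t, 0.15, k1_true, 2e-4) + rng.normal(0.0, 0.005, t.size)
    popt, _ = curve_fit(two_pool, t, frac, p0=(0.1, 0.01, 1e-4),
                        bounds=([0, 0, 0], [1, 1, 1]))
    return popt[1]

k_15C = fit_labile_rate(0.020)                   # rate at 15 C (synthetic)
k_25C = fit_labile_rate(0.035)                   # rate at 25 C (synthetic)
q10 = (k_25C / k_15C) ** (10.0 / (25.0 - 15.0))
print("Q10 estimate:", round(q10, 2))
```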
Demers, Hendrix; Ramachandra, Ranjan; Drouin, Dominique; de Jonge, Niels
2012-01-01
Lateral profiles of the electron probe of scanning transmission electron microscopy (STEM) were simulated at different vertical positions in a micrometers-thick carbon sample. The simulations were carried out using the Monte Carlo method in the CASINO software. A model was developed to fit the probe profiles. The model consisted of the sum of a Gaussian function describing the central peak of the profile and two exponential decay functions describing the tail of the profile. Calculations were performed to investigate the fraction of unscattered electrons as a function of the vertical position of the probe in the sample. Line scans were also simulated over gold nanoparticles at the bottom of a carbon film to calculate the achievable resolution as a function of the sample thickness and the number of electrons. The resolution was shown to be noise limited for film thicknesses less than 1 μm. Probe broadening limited the resolution for thicker films. The validity of the simulation method was verified by comparing simulated data with experimental data. The simulation method can be used as a quantitative method to predict STEM performance or to interpret STEM images of thick specimens. PMID:22564444
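A minimal sketch of the composite profile model named above (Gaussian core plus two exponential tails); widths and amplitudes are placeholders, not CASINO output.

```python
# Sketch: Gaussian central peak plus two exponential-decay tails.
import numpy as np
from scipy.optimize import curve_fit

def probe_profile(r, A, sigma, B1, L1, B2, L2):
    r = np.abs(r)
    return (A * np.exp(-r**2 / (2.0 * sigma**2))
            + B1 * np.exp(-r / L1)
            + B2 * np.exp(-r / L2))

r = np.linspace(-50.0, 50.0, 201)                # radial position, nm
y = probe_profile(r, 1.0, 2.0, 0.05, 8.0, 0.01, 25.0)
y += np.random.default_rng(7).normal(0.0, 5e-4, r.size)

popt, _ = curve_fit(probe_profile, r, y, p0=(1.0, 1.0, 0.1, 5.0, 0.01, 20.0))
print("fitted (A, sigma, B1, L1, B2, L2):", np.round(popt, 3))
```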
Complexity analysis based on generalized deviation for financial markets
NASA Astrophysics Data System (ADS)
Li, Chao; Shang, Pengjian
2018-03-01
In this paper, a new modified method is proposed as a measure to investigate the correlation between past price and future volatility for financial time series, known as complexity analysis based on generalized deviation. In comparison with the former retarded volatility model, the new approach is both simple and computationally efficient. The method based on the generalized deviation function provides an exhaustive way of quantifying the rules of the financial market. The robustness of this method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series, but also provides a comprehensive way of distinguishing the different characteristics of stock indices and individual stocks. Exponential functions can be used to successfully fit the volatility curves and quantify the changes of complexity for stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile periods and calm periods. After analyzing the data from the experimental model, we found that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.
A flexible, interactive software tool for fitting the parameters of neuronal models.
Friedrich, Péter; Vella, Michael; Gulyás, Attila I; Freund, Tamás F; Káli, Szabolcs
2014-01-01
The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.
LaCroix, Ryan A; Sandberg, Troy E; O'Brien, Edward J; Utrilla, Jose; Ebrahim, Ali; Guzman, Gabriela I; Szubin, Richard; Palsson, Bernhard O; Feist, Adam M
2015-01-01
Adaptive laboratory evolution (ALE) has emerged as an effective tool for scientific discovery and addressing biotechnological needs. Much of ALE's utility is derived from reproducibly obtained fitness increases. Identifying causal genetic changes and their combinatorial effects is challenging and time-consuming. Understanding how these genetic changes enable increased fitness can be difficult. A series of approaches that address these challenges was developed and demonstrated using Escherichia coli K-12 MG1655 on glucose minimal media at 37°C. By keeping E. coli in constant substrate excess and exponential growth, fitness increases up to 1.6-fold were obtained compared to the wild type. These increases are comparable to previously reported maximum growth rates in similar conditions but were obtained over a shorter time frame. Across the eight replicate ALE experiments performed, causal mutations were identified using three approaches: identifying mutations in the same gene/region across replicate experiments, sequencing strains before and after computationally determined fitness jumps, and allelic replacement coupled with targeted ALE of reconstructed strains. Three genetic regions were most often mutated: the global transcription gene rpoB, an 82-bp deletion between the metabolic pyrE gene and rph, and an IS element between the DNA structural gene hns and tdk. Model-derived classification of gene expression revealed a number of processes important for increased growth that were missed using a gene classification system alone. The methods described here represent a powerful combination of technologies to increase the speed and efficiency of ALE studies. The identified mutations can be examined as genetic parts for increasing growth rate in a desired strain and for understanding rapid growth phenotypes.
Thermal desorption of formamide and methylamine from graphite and amorphous water ice surfaces
NASA Astrophysics Data System (ADS)
Chaabouni, H.; Diana, S.; Nguyen, T.; Dulieu, F.
2018-04-01
Context. Formamide (NH2CHO) and methylamine (CH3NH2) are known to be the most abundant amine-containing molecules in many astrophysical environments. The presence of these molecules in the gas phase may result from thermal desorption of interstellar ices. Aims: The aim of this work is to determine the values of the desorption energies of formamide and methylamine from analogues of interstellar dust grain surfaces and to understand their interaction with water ice. Methods: Temperature programmed desorption (TPD) experiments of formamide and methylamine ices were performed in the sub-monolayer and monolayer regimes on graphite (HOPG) and non-porous amorphous solid water (np-ASW) ice surfaces at temperatures of 40-240 K. The desorption energy distributions of these two molecules were calculated from the TPD measurements using a set of independent Polanyi-Wigner equations. Results: The maximum of the desorption of formamide from both the graphite and ASW ice surfaces occurs at 176 K, after the desorption of H2O molecules, whereas the desorption profile of methylamine depends strongly on the substrate. Solid methylamine starts to desorb below 100 K from the graphite surface. Its desorption from the water ice surface occurs after 120 K and stops during the water ice sublimation around 150 K. It continues to desorb from the graphite surface at temperatures higher than 160 K. Conclusions: More than 95% of solid NH2CHO diffuses through the np-ASW ice surface towards the graphitic substrate and is released into the gas phase with a desorption energy distribution Edes = 7460-9380 K, measured with the best-fit pre-exponential factor A = 10^18 s^-1. However, the desorption energy distribution of methylamine from the np-ASW ice surface (Edes = 3850-8420 K) is measured with the best-fit pre-exponential factor A = 10^12 s^-1. A fraction of the solid methylamine monolayer, roughly 0.15, diffuses through the water ice surface towards the HOPG substrate. This small amount of methylamine desorbs later with higher binding energies (5050-8420 K) that exceed that of crystalline water ice (Edes = 4930 K), calculated with the same pre-exponential factor A = 10^12 s^-1. The better wetting ability of methylamine compared to H2O makes CH3NH2 molecules a refractory species at low coverage. Binding energies of other astrophysically relevant molecules are gathered and compared, but we could not link the chemical functional groups (amino, methyl, hydroxyl, and carbonyl) with the binding energy properties. Implications of these high binding energies are discussed.
Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater
Shabbir, Javid; M. AbdEl-Salam, Nasser; Hussain, Tajammal
2016-01-01
Sodium is an integral part of water, and its excessive amount in drinking water causes high blood pressure and hypertension. In the present paper, the spatial distribution of sodium concentration in drinking water is modeled, and optimized sampling designs for selecting sampling locations are calculated for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (i.e., maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram model (i.e., exponential, Gaussian, spherical and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides minimum mean universal kriging variance for both adding and deleting locations during sampling design. PMID:27683016
Bader, E; Hrudey, S; Froese, K
2004-01-01
Methods: A longitudinal pilot exposure/intervention study measured the elimination half-life of TCAA in urine. Beverage consumption was limited to a public water supply and bottled water of known TCAA concentration, and ingestion volume was managed. The five participants limited fluid consumption to only the water provided. Consumption journals were kept by each participant, and their daily first morning urine (FMU) samples were analysed for TCAA and creatinine. TCAA elimination half-life curves were generated from a two-week washout period using TCAA-free bottled water. Results: Individual elimination half-lives ranged from 2.1 to 6.3 days for single-compartment exponential decay, the model which fit the data. Conclusion: Urinary TCAA is persistent enough to be viable as a biomarker of medium-term (days) exposure to drinking water TCAA ingestion within a range of realistic concentrations. PMID:15258281
Solar tri-diurnal variation of cosmic rays in a wide range of rigidity
NASA Technical Reports Server (NTRS)
Mori, S.; Ueno, H.; Fujii, Z.; Morishita, I.; Nagashima, K.
1985-01-01
Solar tri-diurnal variations of cosmic rays have been analyzed in a wide range of rigidity, using data from neutron monitors and from surface and underground muon telescopes for the period 1978-1983. The rigidity spectrum of the anisotropy in space is assumed to be of power-exponential type, (P/(γP0))^γ exp(γ - P/P0). By means of the best-fit method between the observed and the expected variations, it is found that the spectrum has a peak at P (= γP0) ≈ 90 GV, where γ ≈ 3.0 and P0 ≈ 30 GV. The phase in space of the tri-diurnal variation is also obtained as 7.0 hr (15 hr and 23 hr LT), which is quite different from the phase of approximately 1 hr arising from the axisymmetric distribution of cosmic rays with respect to the IMF.
An analytic method to account for drag in the Vinti Satellite theory
NASA Technical Reports Server (NTRS)
Watson, J. S.; Mistretta, G. D.; Bonavito, N. L.
1974-01-01
To retain separability in the Vinti theory of earth satellite motion when a nonconservative force such as air drag is considered, a set of variational equations for the orbital elements is introduced and expressed as functions of the transverse, radial, and normal components of the nonconservative forces acting on the system. In this approach, the Hamiltonian is preserved in form and remains the total energy, but the initial or boundary conditions, and hence the Jacobi constants of the motion, advance with time through the variational equations. In particular, the atmospheric density profile is written as a fitted exponential function of the eccentric anomaly, which adheres to tabular data at all altitudes and simultaneously reduces the variational equations to indefinite integrals with closed-form evaluations. The values of the limits for any arbitrary time interval are obtained from the Vinti program.
Viability estimation of pepper seeds using time-resolved photothermal signal characterization
NASA Astrophysics Data System (ADS)
Kim, Ghiseok; Kim, Geon-Hee; Lohumi, Santosh; Kang, Jum-Soon; Cho, Byoung-Kwan
2014-11-01
We used an infrared thermal signal measurement system together with photothermal signal and image reconstruction techniques for viability estimation of pepper seeds. Photothermal signals from healthy and aged seeds were measured for seven periods (24, 48, 72, 96, 120, 144, and 168 h) using an infrared camera and analyzed by a regression method. The photothermal signals were regressed using a two-term exponential decay curve with two amplitudes and two time variables (lifetimes) as regression coefficients. The regression coefficients of the fitted curve showed significant differences for each seed group, depending on the aging times. In addition, the viability of a single seed was estimated by imaging of its regression coefficient, which was reconstructed from the measured photothermal signals. The time-resolved photothermal characteristics, along with the regression coefficient images, can be used to discriminate aged or dead pepper seeds from healthy seeds.
Self-Elongation with Sequential Folding of a Filament of Bacterial Cells
NASA Astrophysics Data System (ADS)
Honda, Ryojiro; Wakita, Jun-ichi; Katori, Makoto
2015-11-01
Under hard-agar and nutrient-rich conditions, a cell of Bacillus subtilis grows as a single filament owing to the failure of cell separation after each growth and division cycle. The self-elongating filament of cells shows sequential folding processes, and multifold structures extend over an agar plate. We report that the growth process from the exponential phase to the stationary phase is well described by the time evolution of fractal dimensions of the filament configuration. We propose a method of characterizing filament configurations using a set of lengths of multifold parts of a filament. Systems of differential equations are introduced to describe the folding processes that create multifold structures in the early stage of the growth process. We show that the fitting of experimental data to the solutions of equations is excellent, and the parameters involved in our model systems are determined.
NASA Astrophysics Data System (ADS)
Valença, J. V. B.; Silveira, I. S.; Silva, A. C. A.; Dantas, N. O.; Antonio, P. L.; Caldas, L. V. E.; d'Errico, F.; Souza, S. O.
2017-11-01
The OSL characteristics of three different borate glass matrices containing magnesia (LMB), quicklime (LCB) or potassium carbonate (LKB) were examined. Five different formulations for each composition were produced using a melt-quenching method and analyzed in terms of both dose-response curves and OSL shape decay. The samples were irradiated using a 90Sr/90Y beta source with doses up to 30 Gy. Dose-response curves were plotted using the initial OSL intensity as the chosen parameter. The OSL analysis showed that LKB glasses are the most sensitive to beta irradiation. For the most sensitive LKB composition, the irradiation process was also done using a 60Co gamma source in a dose range from 200 to 800 Gy. In all cases, no saturation was observed. A fitting process using a three-term exponential function was performed for the most sensitive formulations of each composition, which suggested a similar behavior in the OSL decay.
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
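For concreteness, the conditional expectation at the heart of this background correction has a closed form; the sketch below implements it with mu, sigma, and alpha assumed known (in RMA they are estimated from the data), so treat it as an illustration of the convolution model rather than the package's exact code.

```python
# Sketch: E[signal | observed] under the exponential (signal) plus
# normal (background) convolution model used for background correction.
import numpy as np
from scipy.stats import norm

def background_adjust(o, mu, sigma, alpha):
    """Conditional mean of the exponential signal given observed intensity o."""
    a = o - mu - sigma**2 * alpha
    b = sigma
    num = norm.pdf(a / b) - norm.pdf((o - a) / b)
    den = norm.cdf(a / b) + norm.cdf((o - a) / b) - 1.0
    return a + b * num / den

pm = np.array([80.0, 120.0, 500.0, 3000.0])      # observed PM intensities
print(np.round(background_adjust(pm, mu=100.0, sigma=30.0, alpha=0.005), 2))
```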
1996-09-16
The approaches are: • Adaptive filtering • Single exponential smoothing (Brown, 1963) • Linear exponential smoothing: Holt's two-parameter approach (Holt et al., 1960) • Winters' three-parameter method (Winters, 1960) • The Box-Jenkins methodology (ARIMA modeling) (Box and Jenkins, 1976). However, there are two crucial disadvantages: the most important point in ARIMA modeling is model identification. As shown in
Chen, Weifeng; Wu, Weijing; Zhou, Lei; Xu, Miao; Wang, Lei; Peng, Junbiao
2018-01-01
A semi-analytical extraction method for the interface and bulk density of states (DOS) is proposed, using the low-frequency capacitance–voltage characteristics and current–voltage characteristics of indium zinc oxide thin-film transistors (IZO TFTs). In this work, an exponential potential distribution along the depth direction of the active layer is assumed and confirmed by numerical solution of Poisson's equation followed by device simulation. The interface DOS is obtained as a superposition of constant deep states and exponential tail states. Moreover, it is shown that the bulk DOS may be represented by the superposition of exponential deep states and exponential tail states. The extracted values of bulk DOS and interface DOS are further verified by comparing the measured transfer and output characteristics of IZO TFTs with the simulation results of the 2D device simulator ATLAS (Silvaco). As a result, the proposed extraction method may be useful for diagnosing and characterising metal oxide TFTs, since it extracts the interface and bulk DOS quickly and simultaneously. PMID:29534492
Particle yields from numerical simulations
NASA Astrophysics Data System (ADS)
Homor, Marietta M.; Jakovác, Antal
2018-04-01
In this paper we use numerical field theoretical simulations to calculate particle yields. We demonstrate that in the model of local particle creation the deviation from the pure exponential distribution is natural even in equilibrium, and an approximate Tsallis-Pareto-like distribution function can be well fitted to the calculated yields, in accordance with the experimental observations. We present numerical simulations in the classical Φ4 model as well as in the SU(3) quantum Yang-Mills theory to clarify this issue.
Zhang, Guangwen; Wang, Shuangshuang; Wen, Didi; Zhang, Jing; Wei, Xiaocheng; Ma, Wanling; Zhao, Weiwei; Wang, Mian; Wu, Guosheng; Zhang, Jinsong
2016-12-09
Water molecular diffusion in tissue in vivo is much more complicated than a Gaussian model assumes. We aimed to compare non-Gaussian diffusion models of diffusion-weighted imaging (DWI), including intra-voxel incoherent motion (IVIM) and the stretched-exponential model (SEM), with the Gaussian diffusion model at 3.0 T MRI in patients with rectal cancer, and to determine the optimal model for investigating the water diffusion properties and characterization of rectal carcinoma. Fifty-nine consecutive patients with pathologically confirmed rectal adenocarcinoma underwent DWI with 16 b-values on a 3.0 T MRI system. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models (IVIM-mono, IVIM-bi and SEM) for the primary tumor and adjacent normal rectal tissue. The parameters standard apparent diffusion coefficient (ADC), slow- and fast-ADC, fraction of fast ADC (f), α value and distributed diffusion coefficient (DDC) were generated and compared between the tumor and normal tissues. The SEM exhibited the best fit to the actual DWI signal in rectal cancer and the normal rectal wall (R² = 0.998 and 0.999, respectively). The DDC achieved a relatively high area under the curve (AUC = 0.980) in differentiating tumor from the normal rectal wall. Non-Gaussian diffusion models could assess tissue properties more accurately than the ADC derived from the Gaussian diffusion model. The SEM may be used as a potential optimal model for the characterization of rectal cancer.
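As an illustration of the SEM form, S(b) = S0 * exp(-(b*DDC)**alpha), here is a small synthetic fit over 16 b-values; the signal values and noise level are invented.

```python
# Sketch: stretched-exponential model (SEM) fit to synthetic DWI signals.
import numpy as np
from scipy.optimize import curve_fit

def sem(b, S0, ddc, alpha):
    return S0 * np.exp(-(b * ddc) ** alpha)

b = np.array([0, 25, 50, 75, 100, 150, 200, 400, 600, 800,
              1000, 1200, 1500, 1800, 2100, 2400], dtype=float)  # s/mm^2
sig = sem(b, 1.0, 1.0e-3, 0.75)
sig += np.random.default_rng(8).normal(0.0, 0.005, b.size)

popt, _ = curve_fit(sem, b, sig, p0=(1.0, 1e-3, 0.9),
                    bounds=([0.0, 1e-6, 0.1], [2.0, 1e-1, 1.0]))
print("S0 = %.3f, DDC = %.2e mm^2/s, alpha = %.2f" % tuple(popt))
```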
Lee, Peter N; Fry, John S; Hamling, Jan S
2012-10-01
No previous review has formally modelled the decline in IHD risk following quitting smoking. From PubMed searches and other sources we identified 15 prospective and eight case-control studies that compared IHD risk in current smokers, never smokers, and quitters by time period of quitting, some studies providing separate blocks of results by sex, age or amount smoked. For each of 41 independent blocks, we estimated, using the negative exponential model, the time, H, at which the excess risk falls to half that caused by smoking. Goodness-of-fit to the model was adequate for 35 blocks, the others showing a non-monotonic pattern of decline following quitting, with a variable pattern of misfit. After omitting one block with a current smoker RR of 1.0, the combined H estimate was 4.40 (95% CI 3.26-5.95) years. There was considerable heterogeneity, H being <2 years for 10 blocks and >10 years for 12. H increased (p<0.001) with mean age at study start, but not clearly with other factors. Sensitivity analyses allowing for reverse causation, or varying the assumed midpoint times for the final open-ended quitting period, little affected the goodness-of-fit of the combined estimate. The US Surgeon-General's view that excess risk approximately halves after a year's abstinence seems over-optimistic.
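In the negative exponential model the excess relative risk decays as exp(-λt) with half-life H = ln(2)/λ; a tiny sketch using the review's combined H and an illustrative current-smoker RR of 2.0 (the RR value is an assumption, not from the review):

```python
# Sketch: decline of excess IHD risk after quitting under the negative
# exponential model, RR(t) = 1 + (RR0 - 1) * exp(-ln(2) * t / H).
import numpy as np

def relative_risk(t, rr0, half_life):
    lam = np.log(2.0) / half_life
    return 1.0 + (rr0 - 1.0) * np.exp(-lam * t)

H = 4.40                                         # combined estimate, years
for t in (0.0, 1.0, 4.4, 10.0, 20.0):            # years since quitting
    print(f"t = {t:4.1f} y  RR = {relative_risk(t, 2.0, H):.2f}")
```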
Distribution of fixed beneficial mutations and the rate of adaptation in asexual populations
Good, Benjamin H.; Rouzine, Igor M.; Balick, Daniel J.; Hallatschek, Oskar; Desai, Michael M.
2012-01-01
When large asexual populations adapt, competition between simultaneously segregating mutations slows the rate of adaptation and restricts the set of mutations that eventually fix. This phenomenon of interference arises from competition between mutations of different strengths as well as competition between mutations that arise on different fitness backgrounds. Previous work has explored each of these effects in isolation, but the way they combine to influence the dynamics of adaptation remains largely unknown. Here, we describe a theoretical model to treat both aspects of interference in large populations. We calculate the rate of adaptation and the distribution of fixed mutational effects accumulated by the population. We focus particular attention on the case when the effects of beneficial mutations are exponentially distributed, as well as on a more general class of exponential-like distributions. In both cases, we show that the rate of adaptation and the influence of genetic background on the fixation of new mutants is equivalent to an effective model with a single selection coefficient and rescaled mutation rate, and we explicitly calculate these effective parameters. We find that the effective selection coefficient exactly coincides with the most common fixed mutational effect. This equivalence leads to an intuitive picture of the relative importance of different types of interference effects, which can shift dramatically as a function of the population size, mutation rate, and the underlying distribution of fitness effects. PMID:22371564
Ciambella, J; Paolone, A; Vidoli, S
2014-09-01
We report on the experimental identification of viscoelastic constitutive models for frequencies ranging within 0-10 Hz. Dynamic moduli data are fitted for several materials of interest in medical applications: liver tissue (Chatelin et al., 2011), bioadhesive gel (Andrews et al., 2005), spleen tissue (Nicolle et al., 2012) and a synthetic elastomer (Osanaiye, 1996). These materials represent a rather wide class of soft viscoelastic materials which are usually subjected to low-frequency deformations. We also provide prescriptions for the correct extrapolation of the material behavior at higher frequencies. Indeed, while experimental tests are more easily carried out at low frequency, the identified viscoelastic models are often used outside the frequency range of the actual test. We consider two different classes of models according to their relaxation function: Debye models, whose kernel decays exponentially fast, and fractional models, including Cole-Cole, Davidson-Cole, Nutting and Havriliak-Negami, characterized by a slower decay rate of the material memory. Candidate constitutive models are hence rated according to the accuracy of the identification and to their robustness under extrapolation. It is shown that all kernels whose decay rate is too fast lead to poor fitting and high errors when the material behavior is extrapolated to broader frequency ranges.
Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-01-01
An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment present in conventional approaches. A curved-surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. Comparison with the conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and that the measurement accuracy is improved by the novel 3D magnet array. The results could be used for real-time motion control of PM spherical actuators. PMID:25342000
NASA Astrophysics Data System (ADS)
Thomas, Marlon Sheldon
Bacterial infections continue to be one of the major health risks in the United States. The common occurrence of such infections is one of the major contributors to the high cost of health care and significant patient mortality. The work presented in this thesis describes spectroscopic studies that will contribute to the development of a fluorescent assay that may allow the rapid identification of bacterial species. Herein, the optical interactions between six bacterial species and a series of thiacyanine dyes are investigated. The interactions between the dyes and the bacterial species are hypothesized to be species-specific. For this thesis, two Gram-negative strains, Escherichia coli (E. coli) TOP10 and Enterobacter aerogenes; two Gram-positive bacterial strains, Bacillus sphaericus and Bacillus subtilis; and two Bacillus endospores, B. globigii and B. thuringiensis, were used to test the proposed hypothesis. A series of three thiacyanine dyes, 3,3'-diethylthiacyanine iodide (THIA), 3,3'-diethylthiacarbocyanine iodide (THC) and thiazole orange (THO), were used as fluorescent probes. The basis of our spectroscopic study was to explore the bacterium-induced interactions of the bacterial cells with the individual thiacyanine dyes or with a mixture of the three dyes. Steady-state absorption spectroscopy revealed that the different bacterial species altered the absorption properties of the dyes. Mixed-dye solutions gave unique absorption patterns for each bacterial species tested, with competitive binding observed between the bacteria and the spectrophotometric probes (thiacyanine dyes). Emission spectroscopy recorded changes in the emission spectra of THIA following the introduction of bacterial cells. Experimental results revealed that the emission enhancement of the dyes resulted from increases in the emission quantum yield of the thiacyanine dyes upon binding to bacterial cellular components. The recorded emission enhancement data were fitted to an exponential (mono-exponential or bi-exponential) function, and time constants were extracted by regression on the experimental data. The addition of TWEEN surfactants decreased the rate at which the dyes interacted with the bacterial cells, which typically resulted in larger time constants derived from the exponential fit. ANOVA of the time constants confirmed that their values clustered in a narrow range and were independent of dye concentration and only weakly dependent on cell density.
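The mono- versus bi-exponential comparison described above can be done with standard least-squares tools. A sketch assuming simple saturating-rise forms and a synthetic trace; all function names, numbers, and the AIC-based comparison are illustrative assumptions, not the thesis procedure.

    import numpy as np
    from scipy.optimize import curve_fit

    # Assumed emission-enhancement rise models (mono- and bi-exponential).
    def mono_rise(t, A, tau, c):
        return c + A * (1 - np.exp(-t / tau))

    def bi_rise(t, A1, tau1, A2, tau2, c):
        return c + A1 * (1 - np.exp(-t / tau1)) + A2 * (1 - np.exp(-t / tau2))

    # Illustrative time trace of dye emission after adding cells (not real data).
    t = np.linspace(0, 300, 150)                      # seconds
    rng = np.random.default_rng(1)
    y = bi_rise(t, 0.6, 15.0, 0.4, 120.0, 0.05) + rng.normal(0, 0.01, t.size)

    p1, _ = curve_fit(mono_rise, t, y, p0=(1.0, 50.0, 0.0))
    p2, _ = curve_fit(bi_rise, t, y, p0=(0.5, 10.0, 0.5, 100.0, 0.0))

    # Compare the fits via AIC computed from residual sums of squares.
    for name, model, p in [("mono", mono_rise, p1), ("bi", bi_rise, p2)]:
        rss = np.sum((y - model(t, *p)) ** 2)
        aic = t.size * np.log(rss / t.size) + 2 * len(p)
        print(f"{name}-exponential: AIC = {aic:.1f}, params = {np.round(p, 3)}")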
Sleep, John; Irving, Malcolm; Burton, Kevin
2005-03-15
The time course of isometric force development following photolytic release of ATP in the presence of Ca2+ was characterized in single skinned fibres from rabbit psoas muscle. Pre-photolysis force was minimized using apyrase to remove contaminating ATP and ADP. After the initial force rise induced by ATP release, a rapid shortening ramp terminated by a step stretch to the original length was imposed, and the time course of the subsequent force redevelopment was again characterized. Force development after ATP release was accurately described by a lag phase followed by one or two exponential components. At 20°C, the lag was 5.6 ± 0.4 ms (s.e.m., n = 11), and the force rise was well fitted by a single exponential with rate constant 71 ± 4 s-1. Force redevelopment after shortening-restretch began from about half the plateau force level, and its single-exponential rate constant was 68 ± 3 s-1, very similar to that following ATP release. When fibres were activated by the addition of Ca2+ in ATP-containing solution, force developed more slowly, and the rate constant for force redevelopment following shortening-restretch reached a maximum value of 38 ± 4 s-1 (n = 6) after about 6 s of activation. This lower value may be associated with progressive sarcomere disorder at elevated temperature. Force development following ATP release was much slower at 5°C than at 20°C. The rate constant of a single-exponential fit to the force rise was 4.3 ± 0.4 s-1 (n = 22), and this was again similar to that after shortening-restretch in the same activation at this temperature, 3.8 ± 0.2 s-1. We conclude that force development after ATP release and after shortening-restretch are controlled by the same steps in the actin-myosin ATPase cycle. The present results and much previous work on mechanical-chemical coupling in muscle can be explained by a kinetic scheme in which force is generated by a rapid conformational change bracketed by two biochemical steps with similar rate constants, ATP hydrolysis and the release of inorganic phosphate, both of which combine to control the rate of force development.
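A sketch of the "lag phase plus single exponential" description used above. The functional form follows the abstract; the simulated transient and parameter values are illustrative assumptions only.

    import numpy as np
    from scipy.optimize import curve_fit

    # Lag phase followed by a single-exponential force rise:
    # F(t) = 0 for t < t_lag, else F_max * (1 - exp(-k * (t - t_lag)))
    def lag_exponential(t, t_lag, k, F_max):
        dt = np.clip(t - t_lag, 0.0, None)   # keep the exp argument <= 0
        return np.where(t < t_lag, 0.0, F_max * (1.0 - np.exp(-k * dt)))

    # Hypothetical force transient after ATP photolysis (illustrative numbers).
    t = np.linspace(0, 0.1, 400)                      # s
    rng = np.random.default_rng(2)
    F = lag_exponential(t, 0.0056, 71.0, 1.0) + rng.normal(0, 0.01, t.size)

    (t_lag, k, F_max), _ = curve_fit(lag_exponential, t, F, p0=(0.004, 50.0, 0.9))
    print(f"lag = {t_lag*1e3:.1f} ms, rate constant = {k:.0f} s^-1")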
NASA Astrophysics Data System (ADS)
Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego
2017-04-01
In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.
The CHESS Survey of the L1157-B1 Shock Region: CO Spectral Signatures of Jet-driven Bow Shocks
NASA Astrophysics Data System (ADS)
Lefloch, B.; Cabrit, S.; Busquet, G.; Codella, C.; Ceccarelli, C.; Cernicharo, J.; Pardo, J. R.; Benedettini, M.; Lis, D. C.; Nisini, B.
2012-10-01
The unprecedented sensitivity of Herschel coupled with the high resolution of the HIFI spectrometer permits studies of the intensity-velocity relationship I(v) in molecular outflows over a higher excitation range than possible up to now. Over the course of the CHESS Key Program, we have observed toward the bright bow shock region L1157-B1 the CO rotational transitions between J = 5-4 and J = 16-15 with HIFI, and the J = 1-0, 2-1, and 3-2 transitions with the IRAM 30 m and the Caltech Submillimeter Observatory telescopes. We find that all the line profiles I_CO(v) are well fit by a linear combination of three exponential laws I(v) ∝ exp(-|v/v0|) with v0 = 12.5, 4.4, and 2.5 km s-1. The first component dominates the CO emission at J ≥ 13, as well as the high-excitation lines of SiO and H2O. The second component dominates for 3 ≤ Jup ≤ 10 and the third one for Jup ≤ 2. We show that these exponentials are the signature of quasi-isothermal shocked gas components: the impact of the jet against the L1157-B1 bow shock (Tk ≈ 210 K), the walls of the outflow cavity associated with B1 (Tk ≈ 64 K), and the older cavity L1157-B2 (Tk ≈ 23 K), respectively. Analysis of the CO line flux in the large-velocity gradient approximation further shows that the emission arises from dense gas (n(H2) ≥ 10^5-10^6 cm-3) close to LTE up to J = 20. We find that the CO J = 2-1 intensity-velocity relation observed in various other molecular outflows is satisfactorily fit by similar exponential laws, which may hold an important clue to their entrainment process.
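The three-component exponential decomposition can be reproduced with a simple amplitude fit once the characteristic velocities are fixed. A sketch holding the published v0 values fixed; the synthetic profile and amplitudes are illustrative assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    # Linear combination of exponentials in velocity, I(v) = sum_i a_i exp(-|v/v0_i|),
    # with the three published characteristic velocities held fixed.
    V0 = (12.5, 4.4, 2.5)   # km/s

    def three_exp(v, a1, a2, a3):
        return (a1 * np.exp(-np.abs(v / V0[0])) +
                a2 * np.exp(-np.abs(v / V0[1])) +
                a3 * np.exp(-np.abs(v / V0[2])))

    # Hypothetical blueshifted CO line wing (km/s, K); amplitudes illustrative.
    v = np.linspace(-40, 0, 200)
    rng = np.random.default_rng(3)
    I = three_exp(v, 0.5, 2.0, 5.0) + rng.normal(0, 0.05, v.size)

    (a1, a2, a3), _ = curve_fit(three_exp, v, I, p0=(1, 1, 1))
    print(f"component amplitudes: {a1:.2f}, {a2:.2f}, {a3:.2f}")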
Channel response to sediment release: insights from a paired analysis of dam removal
Collins, Mathias J.; Snyder, Noah P.; Boardman, Graham; Banks, William S.; Andrews, Mary; Baker, Matthew E.; Conlon, Maricate; Gellis, Allen; McClain, Serena; Miller, Andrew; Wilcock, Peter
2017-01-01
Dam removals with unmanaged sediment releases are good opportunities to learn about channel response to abruptly increased bed material supply. Understanding these events is important because they affect aquatic habitats and human uses of floodplains. A longstanding paradigm in geomorphology holds that response rates to landscape disturbance exponentially decay through time. However, a previous study of the Merrimack Village Dam (MVD) removal on the Souhegan River in New Hampshire, USA, showed that an exponential function poorly described the early geomorphic response. Erosion of impounded sediments there was two-phased. We had an opportunity to quantitatively test the two-phase response model proposed for MVD by extending the record there and comparing it with data from the Simkins Dam removal on the Patapsco River in Maryland, USA. The watershed sizes are the same order of magnitude (102 km2), and at both sites low-head dams were removed (~3–4 m) and ~65 000 m3 of sand-sized sediments were discharged to low-gradient reaches. Analyzing four years of repeat morphometry and sediment surveys at the Simkins site, as well as continuous discharge and turbidity data, we observed the two-phase erosion response described for MVD. In the early phase, approximately 50% of the impounded sediment at Simkins was eroded rapidly during modest flows. After incision to base level and widening, a second phase began when further erosion depended on floods large enough to go over bank and access impounded sediments more distant from the newly-formed channel. Fitting functional forms to the data for both sites, we found that two-phase exponential models with changing decay constants fit the erosion data better than single-phase models. Valley width influences the two-phase erosion responses upstream, but downstream responses appear more closely related to local gradient, sediment re-supply from the upstream impoundments, and base flows.
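A sketch of the model class fit here: a two-phase exponential whose decay constant changes at a break time, kept continuous at the break. The continuity constraint, sediment volumes, and all parameter values are illustrative assumptions, not the study's data.

    import numpy as np
    from scipy.optimize import curve_fit

    # Two-phase exponential with a change in decay constant at break time t_b.
    def two_phase_exp(t, V0, k1, k2, t_b):
        V_b = V0 * np.exp(-k1 * t_b)              # volume remaining at the break
        return np.where(t <= t_b,
                        V0 * np.exp(-k1 * t),
                        V_b * np.exp(-k2 * (t - t_b)))

    # Hypothetical impounded-sediment volumes vs time since removal (years, 10^3 m^3).
    t = np.array([0.0, 0.1, 0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
    V = np.array([65.0, 45.0, 36.0, 33.0, 30.0, 27.5, 26.0, 23.0, 20.5])

    p, _ = curve_fit(two_phase_exp, t, V, p0=(65.0, 3.0, 0.05, 0.5),
                     bounds=([0, 0, 0, 0.05], [200, 50, 5, 3]))
    print(dict(zip(["V0", "k1", "k2", "t_b"], np.round(p, 3))))

A single-phase model is recovered as the special case k1 = k2, so comparing the two fits (e.g. by residuals or an information criterion) tests the two-phase hypothesis directly.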
Jurgens, Bryant; Böhlke, John Karl; Kauffman, Leon J.; Belitz, Kenneth; Esser, Bradley K.
2016-01-01
A partial exponential lumped parameter model (PEM) was derived to determine age distributions and nitrate trends in long-screened production wells. The PEM can simulate age distributions for wells screened over any finite interval of an aquifer that has an exponential distribution of age with depth. The PEM has three parameters, the ratios of the depths of the top and bottom of the screen to the saturated thickness and the mean age, but these can be reduced to one parameter (mean age) by using well construction information and estimates of the saturated thickness. The PEM was tested with data from 30 production wells in a heterogeneous alluvial fan aquifer in California, USA. Well construction data were used to guide parameterization of a PEM for each well, and mean age was calibrated to measured environmental tracer data (3H, 3He, CFC-113, and 14C). Results were compared to age distributions generated for individual wells using advective particle tracking models (PTMs). Age distributions from PTMs were more complex than PEM distributions, but PEMs provided better fits to the tracer data, partly because the PTMs did not simulate 14C accurately in wells that captured varying amounts of old groundwater recharged at lower rates prior to groundwater development and irrigation. Nitrate trends were simulated independently of the calibration process, and the PEM provided good fits for at least 11 of 24 wells. This work shows that the PEM, and lumped parameter models (LPMs) in general, can often identify critical features of the age distributions in wells that are needed to explain observed tracer data and nonpoint-source contaminant trends, even in systems where aquifer heterogeneity and water use complicate the distribution of age. While accurate PTMs are preferable for understanding and predicting aquifer-scale responses to water use and contaminant transport, LPMs can be sensitive to local conditions near individual wells that may be inaccurately represented or missing in an aquifer-scale flow model.
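One plausible reading of the PEM is the classical exponential (Vogel-type) age-depth profile truncated to the range of ages intersected by the screen and renormalized. The sketch below encodes that assumption; all symbols and numbers are illustrative, not the authors' code.

    import numpy as np

    def pem_pdf(t, tau, Z, z_top, z_bot):
        """Assumed PEM age density. tau: mean age of the full exponential
        profile; Z: saturated thickness; z_top, z_bot: screen top/bottom
        depths below the water table (z_bot < Z)."""
        t1 = tau * np.log(Z / (Z - z_top))    # age at the top of the screen
        t2 = tau * np.log(Z / (Z - z_bot))    # age at the bottom of the screen
        norm = np.exp(-t1 / tau) - np.exp(-t2 / tau)
        pdf = np.exp(-t / tau) / (tau * norm)
        return np.where((t >= t1) & (t <= t2), pdf, 0.0)

    # Example: 100 m saturated thickness, screen from 20 to 60 m depth.
    t = np.linspace(0, 300, 1000)
    g = pem_pdf(t, tau=50.0, Z=100.0, z_top=20.0, z_bot=60.0)
    mean_age = np.trapz(t * g, t)
    print(f"mean age of the screened interval ~ {mean_age:.1f} years")

In calibration, mean_age (through tau) would be adjusted until the convolution of this density with tracer input histories matches the measured 3H, 3He, CFC-113, and 14C values.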
Roccato, Anna; Uyttendaele, Mieke; Membré, Jeanne-Marie
2017-06-01
In the framework of food safety, when mimicking the consumer phase, the storage time and temperature used are mainly considered as single-point estimates instead of probability distributions. This single-point approach does not take into account the variability within a population and could lead to an overestimation of the parameters. Therefore, the aim of this study was to analyse data on domestic refrigerator temperatures and storage times of chilled food in European countries in order to draw general rules which could be used either in shelf-life testing or risk assessment. In relation to domestic refrigerator temperatures, 15 studies provided pertinent data. Twelve studies presented normal distributions, either reported by the authors or obtained by fitting the data to distributions. Analysis of the temperature distributions revealed that the countries separated into two groups: northern European countries and southern European countries. The overall variability of European domestic refrigerators is described by a normal distribution: N(7.0, 2.7)°C for southern countries and N(6.1, 2.8)°C for northern countries. Concerning storage times, seven papers were pertinent. Analysis indicated that the storage time was likely to end in the first days or weeks (depending on the product use-by date) after purchase. Data fitting showed the exponential distribution was the most appropriate distribution to describe the time that food spent at the consumer's home. The storage time was described by an exponential distribution with mean equal to the use-by-date period divided by 4. In conclusion, knowing that collecting data is time- and money-consuming, in the absence of data, and at least for the European market and for refrigerated products, building a domestic refrigerator temperature distribution using a Normal law and a time-to-consumption distribution using an Exponential law would be appropriate. Copyright © 2017 Elsevier Ltd. All rights reserved.
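These two distributional rules translate directly into a Monte Carlo consumer phase. A minimal sketch using the distributions quoted above, assuming a hypothetical 10-day use-by period for the product.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Domestic refrigerator temperature: Normal law from the abstract.
    temp_south = rng.normal(7.0, 2.7, n)    # southern European countries, deg C
    temp_north = rng.normal(6.1, 2.8, n)    # northern European countries, deg C

    # Storage time: Exponential law with mean = use-by-date period / 4.
    use_by_days = 10.0                      # hypothetical use-by period
    storage_days = rng.exponential(use_by_days / 4.0, n)

    # Example summaries usable as shelf-life or risk-assessment inputs.
    print(f"P(T > 8 degC, south) = {(temp_south > 8).mean():.2f}")
    print(f"P(T > 8 degC, north) = {(temp_north > 8).mean():.2f}")
    print(f"median storage time  = {np.median(storage_days):.1f} days")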
[Dynamics of Cry1Ab protein content in the rhizosphere soil and straw debris of transgenic Bt corn].
Li, Fan; Wang, Min; Sun, Hong-Wei; Yang, Shu-Ke; Lu, Xing-Bo
2013-07-01
By using ELISA test kits, a field investigation was conducted on the degradation dynamics of Cry1Ab protein in the rhizosphere soil of Bt corn MON810 at its different growth stages and in the MON810 straws returned to the field after harvest. Three models (shift-log model, exponential model, and bi-exponential model) were used to fit the degradation dynamics of the Cry1Ab protein from the straw debris, and the DT50 and DT90 values were estimated. There existed great differences in the Cry1Ab protein content in the rhizosphere soil of MON810 at its different growth stages, but overall, the Cry1Ab protein content decreased remarkably with the growth of MON810. The degradation of Cry1Ab protein from the straws covered on the soil surface and buried in soil showed the same two-stage pattern, i.e., more rapid in the early stage and slow and stable in the later period. Within the first week after straw return, the degradation rate of the Cry1Ab protein from the straws covered on the soil surface was significantly higher than that from the straws buried in soil. At 10 d, the degradation rate of the Cry1Ab protein from the straws covered on the soil surface and buried in soil was basically the same, being 88.8% and 88.6%, respectively. After 20 days, the degradation of Cry1Ab protein entered the slow-stable stage. Even at 180 d, a small amount of Cry1Ab protein could still be detected in the straw debris. All three models used in this study could fit the decay pattern of the Cry1Ab protein from the straw debris in the field. By comparing the correlation coefficients (r) and the consistency between the measured and calculated DT90, the bi-exponential model was considered the best.
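A sketch of estimating DT50 and DT90 from a bi-exponential degradation fit. The parameterization C(t) = C0 (f exp(-k1 t) + (1 - f) exp(-k2 t)) and the residue values below are assumptions for illustration, not the study's measurements.

    import numpy as np
    from scipy.optimize import curve_fit, brentq

    # Assumed bi-exponential degradation model.
    def bi_exp(t, C0, f, k1, k2):
        return C0 * (f * np.exp(-k1 * t) + (1 - f) * np.exp(-k2 * t))

    # Hypothetical residue data (days, % of initial Cry1Ab protein).
    t = np.array([0, 1, 3, 7, 10, 20, 60, 180])
    C = np.array([100, 55, 30, 15, 11.3, 6, 2.5, 1.2])

    p, _ = curve_fit(bi_exp, t, C, p0=(100, 0.8, 0.5, 0.01),
                     bounds=([0, 0, 0, 0], [200, 1, 10, 1]))

    def dt_x(x):
        """Time for x% degradation, solved numerically on the fitted curve."""
        target = p[0] * (1 - x / 100.0)
        return brentq(lambda tt: bi_exp(tt, *p) - target, 1e-6, 1e4)

    print(f"DT50 = {dt_x(50):.1f} d, DT90 = {dt_x(90):.1f} d")

Unlike the single-exponential case, the bi-exponential DT90 is not simply 3.32 times DT50, which is why the numerical root-finding step is needed.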
Assessing groundwater quality for irrigation using indicator kriging method
NASA Astrophysics Data System (ADS)
Delbari, Masoomeh; Amiri, Meysam; Motlagh, Masoud Bahraini
2016-11-01
One of the key parameters influencing sprinkler irrigation performance is water quality. In this study, the spatial variability of groundwater quality parameters (EC, SAR, Na+, Cl-, HCO3 - and pH) was investigated by geostatistical methods and the most suitable areas for implementation of sprinkler irrigation systems in terms of water quality were determined. The study was performed in Fasa county of Fars province using 91 water samples. Results indicated that all parameters are moderately to strongly spatially correlated over the study area. The spatial distribution of pH and HCO3 - was mapped using ordinary kriging. The probability of concentrations of EC, SAR, Na+ and Cl- exceeding a threshold limit in groundwater was obtained using indicator kriging (IK). The experimental indicator semivariograms were fitted well by a spherical model for SAR, EC, Na+ and Cl-. For HCO3 - and pH, an exponential model was fitted to the experimental semivariograms. Probability maps showed that the risk of EC, SAR, Na+ and Cl- exceeding the given critical threshold is higher in the lower half of the study area. The most suitable agricultural lands for sprinkler irrigation implementation were identified by evaluating all probability maps. The suitable area for sprinkler irrigation design was determined to be 25,240 hectares, about 34 percent of the total agricultural land, located in the northern and eastern parts. Overall, the results of this study showed that IK is an appropriate approach for risk assessment of groundwater pollution, which is useful for proper groundwater resources management.
Mäkelä, Valtteri; Wahlström, Ronny; Holopainen-Mantila, Ulla; Kilpeläinen, Ilkka; King, Alistair W T
2018-05-14
Herein, we describe a new method of assessing the kinetics of dissolution of single fibers under limited dissolving conditions, following the dissolution by optical microscopy. Videos of the dissolution were processed in ImageJ to yield dissolution kinetics, based on the disappearance of pixels associated with intact fibers. Data processing was performed in the Python language, utilizing available scientific libraries. The processing includes clustering of the single fiber data, identifying clusters associated with different fiber types, producing average dissolution traces, and extracting practical parameters, such as the time taken to dissolve 25, 50, 75, 95, and 99.5% of the clustered fibers. In addition to these simple parameters, exponential fitting was performed, yielding rate constants for fiber dissolution. Fits for sample and cluster averages were variable, although demonstrating first-order kinetics for dissolution overall. To illustrate the process, two reference pulps (a bleached softwood kraft pulp and a bleached hardwood pre-hydrolysis kraft pulp) and their cellulase-treated versions were analyzed. As expected, differences in the kinetics and dissolution mechanisms between these samples were observed. Our initial interpretations are presented, based on the combined mechanistic observations and single fiber dissolution kinetics for these different samples. While the dissolution mechanisms observed were similar to those published previously, the more direct link of mechanistic information with the kinetics improves our understanding of cell wall structure and pre-treatments, toward improved processability.
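Since the authors processed their traces in Python, here is a compact sketch of the first-order fit and the tXX extraction they describe. The synthetic pixel trace and all names are hypothetical, not the published pipeline.

    import numpy as np
    from scipy.optimize import curve_fit

    # First-order (exponential) decay of the normalized intact-fiber pixel count.
    def first_order(t, N0, k):
        return N0 * np.exp(-k * t)

    # Hypothetical clustered-average trace: fraction of pixels still intact.
    t = np.linspace(0, 600, 60)                   # seconds of video
    rng = np.random.default_rng(4)
    N = first_order(t, 1.0, 0.01) + rng.normal(0, 0.01, t.size)

    (N0, k), _ = curve_fit(first_order, t, N, p0=(1.0, 0.005))

    # Time to dissolve a given fraction follows directly from the rate constant:
    # N(t)/N0 = 1 - frac  =>  t = -ln(1 - frac) / k
    for frac in (0.25, 0.50, 0.75, 0.95, 0.995):
        print(f"t{frac*100:g} = {-np.log(1 - frac) / k:.0f} s")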
Elimination kinetics of metals after an accidental exposure to welding fumes.
Schaller, Karl H; Csanady, György; Filser, Johannes; Jüngert, Barbara; Drexler, Hans
2007-07-01
We had the opportunity to study the kinetics of metals in blood and urine samples of a flame-sprayer after an accidental high workplace exposure. We measured the nickel, aluminium, and chromium concentrations in blood and urine specimens over 1 year after the exposure. On this basis, we evaluated the corresponding half-lives. Blood and urine sampling were carried out five times over the period of 1 year after the accidental exposure. The metals were analysed by graphite furnace atomic absorption spectrometry with Zeeman background compensation, using reliable methods. Either a mono-exponential or a bi-exponential function was fitted to the concentration-time courses of the selected metals using weighted least squares non-linear regression analysis. The amount excreted in urine was calculated by integrating the urinary decay curve and multiplying by the daily creatinine excretion. The first examination was carried out 15 days after exposure. The mean aluminium concentration in plasma was 8.2 microg/l and in urine, 58.4 microg/g creatinine. The mean nickel concentration in blood was 59.6 microg/l and the excretion in urine 700 microg/g creatinine. The mean chromium level was 1.4 microg/l in blood and 7.4 microg/g creatinine in urine. For the three elements, the metal concentrations in blood and urine exceeded the reference values at least in the initial phase. For nickel, the German biological threshold limit values (EKA) were exceeded. Aluminium showed a mono-exponential decay, whereas the elimination of chromium and nickel was biphasic in the biological fluids of the accidentally exposed welder. The half-lives were as follows: for aluminium, 140 days (urine) and 160 days (plasma); for chromium, 40 and 730 days (urine); for nickel, 25 and 610 days (urine) as well as 30 and 240 days (blood). The renal clearance of aluminium and nickel was about 2 l/h estimated for the last monitoring day.
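A sketch of recovering phase half-lives from a bi-exponential elimination fit, in the spirit of the weighted non-linear regression described; the concentration series below is illustrative, not the case data.

    import numpy as np
    from scipy.optimize import curve_fit

    # Bi-exponential elimination: C(t) = A exp(-ka t) + B exp(-kb t);
    # the phase half-lives are ln(2)/ka and ln(2)/kb.
    def bi_elim(t, A, ka, B, kb):
        return A * np.exp(-ka * t) + B * np.exp(-kb * t)

    # Hypothetical nickel-in-urine series (days, microg/g creatinine).
    t = np.array([15, 40, 90, 180, 270, 365])
    C = np.array([662, 357, 141, 77, 65, 58])

    # Weighting with sigma proportional to C approximates a log-scale fit,
    # one common choice for decay data.
    p, _ = curve_fit(bi_elim, t, C, p0=(1000, 0.02, 100, 0.001), sigma=C,
                     maxfev=10000)
    A, ka, B, kb = p
    print(f"half-lives: {np.log(2)/ka:.0f} d (fast) and {np.log(2)/kb:.0f} d (slow)")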
NASA Astrophysics Data System (ADS)
Ismail, A.; Hassan, Noor I.
2013-09-01
Cancer is one of the principal causes of death in Malaysia. This study was performed to determine the pattern of the rate of cancer deaths at a public hospital in Malaysia over an 11-year period (2001 to 2011), to determine the best-fitting univariate model for forecasting the rate of cancer deaths, and to forecast the rates for the next two years (2012 to 2013). The medical records of patients with cancer who died at this hospital over the 11-year period were reviewed, with a total of 663 cases. The cancers were classified according to the 10th Revision of the International Classification of Diseases (ICD-10). Data collected included the socio-demographic background of patients, such as registration number, age, gender, ethnicity, ward and diagnosis. Data entry and analysis were accomplished using SPSS 19.0 and Minitab 16.0. The five univariate models used were the Naïve with Trend Model, the Average Percent Change Model (APCM), Single Exponential Smoothing, Double Exponential Smoothing and Holt's Method. Over the 11 years, Malay patients had the highest percentage of cancer deaths at this hospital (88.10%) compared to other ethnic groups, with males (51.30%) higher than females. Lung and breast cancer accounted for the most cancer deaths by gender. About 29.60% of the patients who died of cancer were aged 61 years and above. The best univariate model for forecasting the rate of cancer deaths was the Single Exponential Smoothing technique with an alpha of 0.10. The forecast for the rate of cancer deaths is flat (horizontal): the forecasted mortality rate remains at 6.84% from January 2012 to December 2013. Government and private sectors and non-governmental organizations need to highlight issues on cancer, especially lung and breast cancers, to the public through campaigns using mass media, electronic media, posters and pamphlets in an attempt to decrease the rate of cancer deaths in Malaysia.
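Single Exponential Smoothing with alpha = 0.10 produces exactly the flat forecast described: each new observation nudges the smoothed level, and the last level is projected unchanged into every future month. A minimal sketch with made-up monthly rates, not the hospital's series.

    # Single Exponential Smoothing (SES), a minimal sketch.
    def ses_forecast(series, alpha=0.10):
        level = series[0]
        for y in series[1:]:
            level = alpha * y + (1 - alpha) * level   # update the smoothed level
        return level   # SES projects this single value for all future periods

    monthly_rates = [7.1, 6.5, 7.0, 6.9, 6.6, 7.2, 6.8, 6.7, 6.9, 7.0]  # % (illustrative)
    forecast = ses_forecast(monthly_rates, alpha=0.10)
    print(f"forecast for every future month: {forecast:.2f}%")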
Synthesis and luminescence characterization of Pr3+ doped Sr1.5Ca0.5SiO4 phosphor
NASA Astrophysics Data System (ADS)
Vidyadharan, Viji; Mani, Kamal P.; Sajna, M. S.; Joseph, Cyriac; Unnikrishnan, N. V.; Biju, P. R.
2014-12-01
Luminescence properties of Pr3+-activated Sr1.5Ca0.5SiO4 phosphors synthesized by the solid state reaction method are reported in this work. Blue, orange-red and red emissions were observed in the Pr3+ doped sample under 444 nm excitation and are assigned to the 3P0 → 3H4, 3P0 → 3H6 and 3P0 → 3F4 transitions. The emission intensity shows a maximum at 0.5 wt% Pr3+. Decay analysis was done for the 0.05 and 0.5 wt% Pr3+ doped samples for the 3P0 → 3H6 transition. The lifetimes of the 0.05 and 0.5 wt% Pr3+ doped samples were obtained by fitting exponential and non-exponential decay curves, respectively, and are found to be 156 and 105 μs. The non-exponential behaviour arises from the statistical distribution of the distances between ground-state and excited-state Pr3+ ions, which causes an inhomogeneous energy transfer rate. The XRD spectrum confirmed the triclinic phase of the prepared phosphors. The compositions of the samples were determined by energy dispersive X-ray spectra. From the SEM images it is observed that the particles are agglomerated and irregularly shaped. IR absorption bands were assigned to different vibrational modes. The well resolved peaks shown in the absorption spectra are identical to those in the excitation spectra of the phosphor samples. Pr3+-activated Sr1.5Ca0.5SiO4 phosphors can be efficiently excited with 444 nm irradiation and emit multicolour visible emissions. From the CIE diagram it can be seen that the prepared phosphor samples give yellowish-green emission.
NASA Astrophysics Data System (ADS)
Schneider, Markus P. A.
This dissertation contributes to two areas in economics: the understanding of the distribution of earned income and Bayesian analysis of distributional data. Recently, physicists claimed that the distribution of earned income is exponential (see Yakovenko, 2009). The first chapter explores the perspective that the economy is a statistical mechanical system, and the implication for labor market outcomes is considered critically. The robustness of the empirical results that led to the physicists' claims, the significance of the exponential distribution in statistical mechanics, and the case for a conservation law in economics are discussed. The conclusion reached is that the physicists' conception of the economy is too narrow even within their chosen framework, but that their overall approach is insightful. The dual labor market theory of segmented labor markets is invoked to understand why the observed distribution may be a mixture of distributional components, corresponding to the different generating mechanisms described in Reich et al. (1973). The application of informational entropy in chapter II connects this work to Bayesian analysis and maximum entropy econometrics. The analysis follows E. T. Jaynes's treatment of Wolf's dice data, but is applied to the distribution of earned income based on CPS data. The results are calibrated to account for rounded survey responses using a simple simulation, and answer the graphical analyses by physicists. The results indicate that neither the income distribution of all respondents nor that of the subpopulation used by physicists appears to be exponential. The empirics do support the claim that a mixture with exponential and log-normal distributional components fits the data. In the final chapter, a log-linear model is used to fit the exponential to the earned income distribution. Separating the CPS data by gender and marital status reveals that the exponential is only an appropriate model for a limited number of subpopulations, namely the never married and women. The estimated parameter for never-married men's incomes is significantly different from the parameter estimated for never-married women, implying that either the combined distribution is not exponential or that the individual distributions are not exponential. However, it substantiates the existence of a persistent gender income gap among the never-married. References: Reich, M., D. M. Gordon, and R. C. Edwards (1973). A Theory of Labor Market Segmentation. Quarterly Journal of Economics 63, 359-365. Yakovenko, V. M. (2009). Econophysics, Statistical Mechanics Approach to. In R. A. Meyers (Ed.), Encyclopedia of Complexity and System Science. Springer.
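A quick way to test exponentiality in the spirit of a log-linear fit: for an exponential law the log complementary CDF is linear in income, ln P(X > x) = -x/T, so an ordinary least-squares slope estimates -1/T and systematic curvature signals a non-exponential distribution. A sketch on synthetic data; the scale value and names are assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    income = rng.exponential(40_000, 5_000)   # hypothetical earned incomes

    # Empirical log complementary CDF (the (n+1) denominator avoids log(0)).
    x = np.sort(income)
    log_ccdf = np.log(1.0 - np.arange(1, x.size + 1) / (x.size + 1))

    # OLS slope of log CCDF vs income estimates -1/T.
    slope, intercept = np.polyfit(x, log_ccdf, 1)
    print(f"estimated exponential scale T ~ {-1/slope:,.0f}")
    # Curvature in (x, log_ccdf), e.g. from a log-normal component in a
    # mixture, would show up as a poor linear fit here.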
CONTRIBUTIONS OF CHEMICAL EXCHANGE TO T1ρ DISPERSION IN A TISSUE MODEL
Cobb, Jared G.; Xie, Jingping; Gore, John C.
2015-01-01
Variations in T1ρ with locking-field strength (T1ρ dispersion) may be used to estimate proton exchange rates. We developed a novel approach utilizing the second derivative of the dispersion curve to measure exchange in a model system of cross-linked polyacrylamide gels. These gels were varied in relative composition of co-monomers, increasing stiffness, and in pH, modifying exchange rates. MR images were recorded with a spin-locking sequence as described by Sepponen et al. These measurements were fit to a mono-exponential decay function yielding values for T1ρ at each locking-field measured. These values were then fit to a model by Chopra et al. for estimating exchange rates. For low stiffness gels, the calculated exchange values increased by a factor of 4 as pH increased, consistent with chemical exchange being the dominant contributor to T1ρ dispersion. Interestingly, calculated chemical exchange rates also increased with stiffness, likely due to modified side-chain exchange kinetics as the composition varied. This paper demonstrates a new method to assess the structural and chemical effects on T1ρ relaxation dispersion with a suitable model. These phenomena may be exploited in an imaging context to emphasize the presence of nuclei of specific exchange rates, rather than chemical shifts. PMID:21590720
NASA Astrophysics Data System (ADS)
Fuente, David; Lizama, Carlos; Urchueguía, Javier F.; Conejero, J. Alberto
2018-01-01
Light attenuation within suspensions of photosynthetic microorganisms has been widely described by the Lambert-Beer equation. However, at depths where most of the light has been absorbed by the cells, light decay deviates from exponential behaviour and shows a lower attenuation than the corresponding purely exponential fall. This discrepancy can be modelled through the Mittag-Leffler function, extending the Lambert-Beer law via a tuning parameter α that takes the attenuation process into account. In this work, we describe a fractional Lambert-Beer law to estimate light attenuation within cultures of the model organism Synechocystis sp. PCC 6803. Specifically, we benchmark our in silico results against the light field measured inside cultures of two different Synechocystis strains, the wild type and the antenna mutant strain called Olive, at five different cell densities. The Mittag-Leffler hyper-parameter α that best fits the data is 0.995, close to the exponential case. One of the most striking results to emerge from this work is that, unlike prior literature on the subject, it provides experimental evidence of the validity of fractional calculus for determining the light field. We show that by applying the fractional Lambert-Beer law to describe light attenuation, we are able to properly model light decay in suspensions of photosynthetic microorganisms.
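A sketch of a fractional Lambert-Beer-type attenuation built from the one-parameter Mittag-Leffler function, evaluated here by its power series (adequate for the moderate arguments used; not a production evaluator). The form I(z) = I0 E_alpha(-(a z)^alpha), the depth grid, and the attenuation coefficient are assumptions; the law reduces to the classical exponential at alpha = 1.

    import numpy as np
    from scipy.special import gamma

    # One-parameter Mittag-Leffler function E_alpha(x) via its power series.
    def mittag_leffler(x, alpha, n_terms=100):
        k = np.arange(n_terms)
        return np.sum(np.power.outer(x, k) / gamma(alpha * k + 1), axis=-1)

    z = np.linspace(0, 0.05, 6)    # depth in the culture, m (illustrative)
    a = 120.0                      # attenuation coefficient, 1/m (assumed)
    for alpha in (1.0, 0.995):
        I = mittag_leffler(-(a * z) ** alpha, alpha)   # I/I0 at each depth
        print(f"alpha={alpha}: I/I0 =", np.round(I, 4))

At alpha = 1 the output matches exp(-a z) exactly; at alpha = 0.995 the tail sits slightly above the exponential curve, the "lower attenuation at depth" the abstract describes.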
Stellar Surface Brightness Profiles of Dwarf Galaxies
NASA Astrophysics Data System (ADS)
Herrmann, K. A.
2014-03-01
Radial stellar surface brightness profiles of spiral galaxies can be classified into three types: (I) single exponential, or the light falls off with one exponential out to a break radius and then falls off (II) more steeply (“truncated”), or (III) less steeply (“anti-truncated”). Why there are three different radial profile types is still a mystery, including why light falls off as an exponential at all. Profile breaks are also found in dwarf disks, but some dwarf Type IIs are flat or increasing (FI) out to a break before falling off. I have been re-examining the multi-wavelength stellar disk profiles of 141 dwarf galaxies, primarily from Hunter & Elmegreen (2004, 2006). Each dwarf has data in up to 11 wavelength bands: FUV and NUV from GALEX, UBVJHK and Hα from ground-based observations, and 3.6 and 4.5μm from Spitzer. Here I highlight some results from a semi-automatic fitting of this data set including: (1) statistics of break locations and other properties as a function of wavelength and profile type, (2) color trends and radial mass distribution as a function of profile type, and (3) the relationship of the break radius to the kinematics and density profiles of atomic hydrogen gas in the 40 dwarfs of the LITTLE THINGS subsample.
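In magnitude units an exponential disk is linear in radius, so Type II and Type III profiles can be fit as piecewise-linear profiles with a break. A sketch with a synthetic profile; all values, names, and the continuity constraint at the break are illustrative assumptions, not the semi-automatic pipeline described above.

    import numpy as np
    from scipy.optimize import curve_fit

    # Broken-exponential disk in surface brightness (mag/arcsec^2): two linear
    # segments joined continuously at the break radius R_br.
    def broken_exp_mag(R, mu0, s_in, s_out, R_br):
        mu_br = mu0 + s_in * R_br
        return np.where(R <= R_br, mu0 + s_in * R, mu_br + s_out * (R - R_br))

    # Hypothetical V-band profile of a dwarf (arcsec, mag/arcsec^2).
    R = np.linspace(0, 120, 40)
    rng = np.random.default_rng(6)
    mu = broken_exp_mag(R, 21.0, 0.03, 0.08, 60.0) + rng.normal(0, 0.05, R.size)

    p, _ = curve_fit(broken_exp_mag, R, mu, p0=(21.5, 0.02, 0.05, 50.0))
    # An exponential scale length h corresponds to a magnitude slope of
    # 1.086/h, so h = 1.086/slope for each segment.
    print(f"break radius ~ {p[3]:.0f} arcsec; scale lengths "
          f"{1.086/p[1]:.0f} (inner) and {1.086/p[2]:.0f} (outer) arcsec")

A steeper outer slope (s_out > s_in) corresponds to a Type II ("truncated") profile, a shallower one to Type III ("anti-truncated"), and s_out = s_in recovers the single-exponential Type I case.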