Sample records for estimates double exponential

  1. Dynamic modeling of sludge compaction and consolidation processes in wastewater secondary settling tanks.

    PubMed

    Abusam, A; Keesman, K J

    2009-01-01

    The double exponential settling model is the widely accepted model for wastewater secondary settling tanks. However, this model does not accurately estimate solids concentrations in the settler underflow stream, mainly because sludge compression and consolidation processes are not considered. In activated sludge systems, accurate estimation of the solids in the underflow stream will facilitate the calibration process and can lead to correct estimates of kinetic parameters, particularly those related to biomass growth. Using principles of compaction and consolidation, as in soil mechanics, a dynamic model of the sludge consolidation processes taking place in secondary settling tanks is developed and incorporated into the commonly used double exponential settling model. The modified double exponential model is calibrated and validated using data obtained from a full-scale wastewater treatment plant. Good agreement between predicted and measured data confirmed the validity of the modified model.
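
    The double exponential settling model referred to here is commonly identified with the Takács velocity function, in which settling velocity falls off as the difference of two exponentials in the solids concentration. A minimal sketch (parameter values are illustrative defaults, not taken from this paper):

    ```python
    import numpy as np

    def takacs_settling_velocity(X, v0=474.0, rh=5.76e-4, rp=2.86e-3,
                                 X_min=12.0, v_max=250.0):
        """Double exponential (Takacs) settling velocity [m/d] as a function
        of solids concentration X [g/m^3]; all parameters are illustrative."""
        Xs = np.maximum(X - X_min, 0.0)  # subtract non-settleable fraction
        v = v0 * (np.exp(-rh * Xs) - np.exp(-rp * Xs))
        return np.clip(v, 0.0, v_max)    # velocity bounded between 0 and v_max
    ```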

  2. Parameter estimation and order selection for an empirical model of VO2 on-kinetics.

    PubMed

    Alata, O; Bernard, O

    2007-04-27

    In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e., the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we provide, on simulated data, the performance of simulated annealing for estimating model parameters and the performance of information criteria for selecting the order. These simulated data are generated with both single-exponential and double-exponential models and corrupted by white Gaussian noise. The performance is given at various signal-to-noise ratios (SNR). Considering parameter estimation, results show that the confidence of the estimated parameters improves as the SNR of the response to be fitted increases. Considering model selection, results show that information criteria are appropriate statistical criteria for selecting the number of exponentials.
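
    The model described is an offset plus delayed exponential terms; a hedged sketch of fitting it at a fixed order and scoring the fit with a Gaussian-noise AIC (function names and initial guesses are illustrative, and the paper itself uses simulated annealing rather than least squares):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def vo2_model(t, params, order):
        """Offset A0 plus `order` delayed exponentials, each with amplitude
        Ai, time delay tdi and time constant taui."""
        y = np.full_like(t, params[0], dtype=float)
        for i in range(order):
            A, td, tau = params[1 + 3 * i: 4 + 3 * i]
            y += A * (1.0 - np.exp(-(t - td) / tau)) * (t >= td)
        return y

    def fit_and_score(t, y, order, x0):
        """Least-squares fit at fixed order; AIC = n*ln(RSS/n) + 2k."""
        res = least_squares(lambda p: vo2_model(t, p, order) - y, x0)
        n, k = len(y), len(x0)
        return res.x, n * np.log(np.sum(res.fun ** 2) / n) + 2 * k
    ```

    Comparing the AIC of the single- and double-exponential fits (lower is better) mirrors the order-selection step the abstract describes.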

  3. Nuclear counting filter based on a centered Skellam test and a double exponential smoothing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coulon, Romain; Kondrasovs, Vladimir; Dumazert, Jonathan

    2015-07-01

    Online nuclear counting represents a challenge due to the stochastic nature of radioactivity. The count data have to be filtered in order to provide a precise and accurate estimation of the count rate, with a response time compatible with the application in view. An innovative filter addressing this issue is presented in this paper. It is a nonlinear filter based on a Centered Skellam Test (CST) giving a local maximum likelihood estimation of the signal based on a Poisson distribution assumption. This nonlinear approach smooths the counting signal while maintaining a fast response when abrupt changes in activity occur. The filter has been improved by the implementation of Brown's double exponential smoothing (BES). The filter has been validated and compared to other state-of-the-art smoothing filters. The CST-BES filter shows a significant improvement compared to all tested smoothing filters.
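
    Brown's double exponential smoothing applies the same EWMA twice and combines the two smoothed series into level and trend estimates. A minimal sketch (the smoothing constant is an assumed value, not the one used in the filter described above):

    ```python
    import numpy as np

    def brown_des(x, alpha=0.3):
        """Brown's double exponential smoothing: cascaded EWMAs yield a
        level-plus-trend estimate at each step."""
        s1 = s2 = float(x[0])
        out = np.empty(len(x))
        for t, xt in enumerate(x):
            s1 = alpha * xt + (1 - alpha) * s1        # first smoothing
            s2 = alpha * s1 + (1 - alpha) * s2        # second smoothing
            level = 2 * s1 - s2
            trend = alpha / (1 - alpha) * (s1 - s2)
            out[t] = level + trend                    # one-step-ahead estimate
        return out
    ```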

  4. High-Resolution Free-Energy Landscape Analysis of α-Helical Protein Folding: HP35 and Its Double Mutant

    PubMed Central

    2013-01-01

    The free-energy landscape can provide a quantitative description of folding dynamics, if determined as a function of an optimally chosen reaction coordinate. Here, we construct the optimal coordinate and the associated free-energy profile for the all-helical protein HP35 and its norleucine (Nle/Nle) double mutant, based on realistic equilibrium folding simulations [Piana et al. Proc. Natl. Acad. Sci. U.S.A. 2012, 109, 17845]. From the obtained profiles, we directly determine such basic properties of folding dynamics as the configurations of the minima and transition states (TS), the formation of secondary structure and hydrophobic core during the folding process, the value of the pre-exponential factor and its relation to the transition path times, and the relation between the autocorrelation times in TS and minima. We also present an investigation of the accuracy of the pre-exponential factor estimation based on the transition-path times. Four different estimations of the pre-exponential factor for both proteins give k0^-1 values of approximately a few tens of nanoseconds. Our analysis gives detailed information about folding of the proteins and can serve as a rigorous common language for extensive comparison between experiment and simulation. PMID:24348206

  5. High-Resolution Free-Energy Landscape Analysis of α-Helical Protein Folding: HP35 and Its Double Mutant.

    PubMed

    Banushkina, Polina V; Krivov, Sergei V

    2013-12-10

    The free-energy landscape can provide a quantitative description of folding dynamics, if determined as a function of an optimally chosen reaction coordinate. Here, we construct the optimal coordinate and the associated free-energy profile for the all-helical protein HP35 and its norleucine (Nle/Nle) double mutant, based on realistic equilibrium folding simulations [Piana et al. Proc. Natl. Acad. Sci. U.S.A. 2012, 109, 17845]. From the obtained profiles, we directly determine such basic properties of folding dynamics as the configurations of the minima and transition states (TS), the formation of secondary structure and hydrophobic core during the folding process, the value of the pre-exponential factor and its relation to the transition path times, and the relation between the autocorrelation times in TS and minima. We also present an investigation of the accuracy of the pre-exponential factor estimation based on the transition-path times. Four different estimations of the pre-exponential factor for both proteins give k0^-1 values of approximately a few tens of nanoseconds. Our analysis gives detailed information about folding of the proteins and can serve as a rigorous common language for extensive comparison between experiment and simulation.
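
    As a point of reference for the profiles discussed above, a free-energy profile along a reaction coordinate can be estimated from sampled trajectory values via F(x) = -kT ln P(x). This sketch is generic post-processing, not the paper's optimal-coordinate construction:

    ```python
    import numpy as np

    def free_energy_profile(samples, bins=100, kT=1.0):
        """F(x) = -kT * ln P(x), up to an additive constant, from sampled
        reaction-coordinate values."""
        hist, edges = np.histogram(samples, bins=bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        mask = hist > 0                      # avoid log of empty bins
        F = -kT * np.log(hist[mask])
        return centers[mask], F - F.min()    # shift minimum to zero
    ```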

  6. Extracting volatility signal using maximum a posteriori estimation

    NASA Astrophysics Data System (ADS)

    Neto, David

    2016-11-01

    This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose, a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in the realizations and, hence, heavy-tailed marginal distributions of the log-returns. We consider two routes to choose the regularization, and we compare our MAP estimate to a realized volatility measure for three exchange rates.
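
    A double exponential (Laplace) prior contributes an L1 term to the negative log-posterior, which is what permits sharp jumps in the MAP path. A hedged sketch of such an objective (the observation model is a rough Gaussian approximation and `lam` is an assumed regularization weight, not the paper's choices):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def map_log_volatility(y, lam=5.0):
        """MAP estimate of a log-volatility path h from proxy observations y:
        Gaussian misfit plus an L1 penalty on increments, i.e. the negative
        log of a double exponential prior on jumps."""
        def neg_log_posterior(h):
            return 0.5 * np.sum((y - h) ** 2) + lam * np.sum(np.abs(np.diff(h)))
        return minimize(neg_log_posterior, x0=np.asarray(y, float),
                        method="Powell").x
    ```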

  7. Recursive least squares estimation and its application to shallow trench isolation

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Qin, S. Joe; Bode, Christopher A.; Purdy, Matthew A.

    2003-06-01

    In recent years, run-to-run (R2R) control technology has received tremendous interest in semiconductor manufacturing. One class of widely used run-to-run controllers is based on the exponentially weighted moving average (EWMA) statistic to estimate process deviations. Using an EWMA filter to smooth the control action on a linear process has been shown to provide good results in a number of applications. However, for a process with severe drifts, the EWMA controller is insufficient even when large weights are used. This problem becomes more severe when there is measurement delay, which is almost inevitable in the semiconductor industry. In order to control drifting processes, a predictor-corrector controller (PCC) and a double EWMA controller have been developed. Chen and Guo (2001) show that both PCC and the double-EWMA controller are in effect Integral-double-Integral (I-II) controllers, which are able to control drifting processes. However, since the offset is often within the noise of the process, the second integrator can actually cause jittering. In addition, tuning the second filter is not as intuitive as tuning a single EWMA filter. In this work, we look at an alternative method, recursive least squares (RLS), to estimate and control the drifting process. EWMA and double-EWMA are shown to be the least squares estimates for the locally constant mean model and the locally constant linear trend model, respectively. Recursive least squares with an exponential forgetting factor is then applied to a shallow trench isolation etch process to predict the future etch rate. The etch process, a critical step in flash memory manufacturing, is known to suffer from significant etch rate drift due to chamber seasoning. In order to handle the metrology delay, we propose a new time update scheme. RLS with the new time update method gives very good results: the estimate error variance is smaller than that from EWMA, and the mean square error decreases by more than 10% compared with EWMA.
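
    Recursive least squares with an exponential forgetting factor down-weights old observations geometrically, which is what lets it track a drifting process. A generic sketch (not the authors' metrology-delay time-update scheme):

    ```python
    import numpy as np

    class ForgettingRLS:
        """RLS estimate of theta in y_t = phi_t . theta + noise, with
        exponential forgetting factor lam in (0, 1]."""
        def __init__(self, n_params, lam=0.95, p0=1e3):
            self.theta = np.zeros(n_params)
            self.P = p0 * np.eye(n_params)
            self.lam = lam

        def update(self, phi, y):
            Pphi = self.P @ phi
            k = Pphi / (self.lam + phi @ Pphi)          # gain vector
            self.theta = self.theta + k * (y - phi @ self.theta)
            self.P = (self.P - np.outer(k, Pphi)) / self.lam
            return self.theta

    # For a drifting etch rate, phi = [1, run_index] tracks level and drift.
    ```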

  8. Ammonium Removal from Aqueous Solutions by Clinoptilolite: Determination of Isotherm and Thermodynamic Parameters and Comparison of Kinetics by the Double Exponential Model and Conventional Kinetic Models

    PubMed Central

    Tosun, İsmail

    2012-01-01

    The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Temkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data among the two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, thereby indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model, and each model resulted in a coefficient of determination (R2) of above 0.989 with an average relative error lower than 5%. The double exponential model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in the standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients. PMID:22690177

  9. Ammonium removal from aqueous solutions by clinoptilolite: determination of isotherm and thermodynamic parameters and comparison of kinetics by the double exponential model and conventional kinetic models.

    PubMed

    Tosun, Ismail

    2012-03-01

    The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Temkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data among the two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, thereby indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model, and each model resulted in a coefficient of determination (R2) of above 0.989 with an average relative error lower than 5%. The double exponential model (DEM) showed that the adsorption process develops in two stages, a rapid and a slow phase. Changes in the standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients.
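
    One common statement of the double exponential kinetic model is q(t) = q_e - a*exp(-k1*t) - b*exp(-k2*t), with a fast (k1) and a slow (k2) uptake stage. A curve-fitting sketch on synthetic data (parameter values and initial guesses are illustrative):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def dem(t, qe, a, k1, b, k2):
        """Double exponential model: rapid (k1) and slow (k2) phases."""
        return qe - a * np.exp(-k1 * t) - b * np.exp(-k2 * t)

    t = np.linspace(0, 120, 40)                          # contact time, min
    q = dem(t, 18.0, 10.0, 0.25, 8.0, 0.02)              # synthetic uptake
    q += np.random.normal(0.0, 0.1, t.size)              # measurement noise
    popt, _ = curve_fit(dem, t, q, p0=[20, 10, 0.1, 5, 0.01], maxfev=10000)
    ```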

  10. Determination of the direction to a source of antineutrinos via inverse beta decay in Double Chooz

    NASA Astrophysics Data System (ADS)

    Nikitenko, Ya.

    2016-11-01

    Determining the direction to a source of neutrinos (and antineutrinos) is an important problem for the physics of supernovae and of the Earth. The direction to a source of antineutrinos can be estimated through the reaction of inverse beta decay. We show that the reactor neutrino experiment Double Chooz has unique capabilities for studying the antineutrino signal from point-like sources. Contemporary experimental data on antineutrino directionality are presented. A rigorous mathematical approach for neutrino direction studies has been developed. Exact expressions for the precision of the simple mean estimator of the neutrino direction for normal and exponential distributions have been obtained, both for a finite sample and for the limiting case of many events.

  11. A study on industrial accident rate forecasting and program development of estimated zero accident time in Korea.

    PubMed

    Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won

    2011-01-01

    To begin a zero accident campaign for industry, the first step is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical changes in the business environment after the beginning of the zero accident campaign through quantitative time series analysis methods. These methods include the sum of squared errors (SSE), the regression analysis method (RAM), the exponential smoothing method (ESM), the double exponential smoothing method (DESM), the auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). A program was developed to estimate the accident rate, the zero accident time and its achievement probability for an efficient industrial environment. In this paper, the MFC (Microsoft Foundation Class) framework of Visual Studio 2008 was used to develop the zero accident program. The results of this paper will provide important information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
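
    Of the methods listed, the double exponential smoothing method (DESM) is the one that captures a trend; Holt's two-constant formulation is a common variant. A sketch with illustrative smoothing constants:

    ```python
    def holt_des(x, alpha=0.5, beta=0.3):
        """Holt's double exponential smoothing: separate level and trend
        smoothing constants; returns one-step-ahead forecasts."""
        level, trend = x[0], x[1] - x[0]
        forecasts = [level + trend]
        for xt in x[1:]:
            prev_level = level
            level = alpha * xt + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
            forecasts.append(level + trend)
        return forecasts
    ```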

  12. The mechanism of double-exponential growth in hyper-inflation

    NASA Astrophysics Data System (ADS)

    Mizuno, T.; Takayasu, M.; Takayasu, H.

    2002-05-01

    Analyzing historical data of price indices, we find an extraordinary growth phenomenon in several examples of hyper-inflation in which price changes are nicely approximated by double-exponential functions of time. In order to explain such behavior we apply a general coarse-graining technique from physics, the Monte Carlo renormalization group method, to the price dynamics. Starting from a microscopic stochastic equation describing dealers' actions in open markets, we obtain a macroscopic noiseless equation of price consistent with the observation. The effect of auto-catalytic shortening of the characteristic time caused by mob psychology is shown to be responsible for the double-exponential behavior.
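
    A quick way to see double-exponential behaviour in data: if p(t) = p0*exp(b*exp(c*t)), then ln(ln(p/p0)) is linear in t, so two logarithms turn hyper-inflation into a straight line. A synthetic illustration (all values assumed):

    ```python
    import numpy as np

    t = np.arange(24)                           # months
    p = 100.0 * np.exp(0.05 * np.exp(0.2 * t))  # assumed hyper-inflation path
    y = np.log(np.log(p / 100.0))
    c, log_b = np.polyfit(t, y, 1)              # slope ~0.2, intercept ~ln(0.05)
    ```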

  13. Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed

    NASA Astrophysics Data System (ADS)

    Walsh, Alex J.; Beier, Hope T.

    2016-03-01

    Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging using laser scanning microscopes. However, TCSPC is inherently slow and therefore ill suited to capturing rapid events: at most one photon is recorded per laser pulse, which imposes long acquisition times, and the emission rate must be kept low to avoid biasing measurements towards short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument response deconvolution and estimation of fluorescence lifetime exponential decays. Instrument response deconvolution and fluorescence exponential decay estimation can be performed in several ways, including iterative least squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques in estimating double exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low photon count data. Such a technique reduces the required number of photons for accurate component estimation if lifetime values are known, such as for commercial fluorescent dyes and FRET experiments, and can improve imaging speed 10-fold.
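
    The underlying fit is a sum of two exponential decays; a minimal sketch with Poisson (shot) noise, omitting the instrument-response deconvolution step described above (all parameter values are illustrative):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def biexp(t, a1, tau1, tau2, c):
        """Double exponential decay; a2 = 1 - a1 keeps the weights normalized."""
        return a1 * np.exp(-t / tau1) + (1 - a1) * np.exp(-t / tau2) + c

    t = np.linspace(0, 12.5, 256)                     # ns, one laser period
    ideal = 5000 * biexp(t, 0.6, 0.4, 2.5, 0.001)
    counts = np.random.poisson(ideal)                 # photon shot noise
    popt, _ = curve_fit(biexp, t, counts / counts.max(),
                        p0=[0.5, 0.5, 3.0, 0.0])      # assumed initial guess
    ```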

  14. Wide variation of prostate-specific antigen doubling time of untreated, clinically localized, low-to-intermediate grade, prostate carcinoma.

    PubMed

    Choo, Richard; Klotz, Laurence; Deboer, Gerrit; Danjoux, Cyril; Morton, Gerard C

    2004-08-01

    To assess the prostate specific antigen (PSA) doubling time of untreated, clinically localized, low-to-intermediate grade prostate carcinoma. A prospective single-arm cohort study has been in progress since November 1995 to assess the feasibility of a watchful-observation protocol with selective delayed intervention for clinically localized, low-to-intermediate grade prostate adenocarcinoma. The PSA doubling time was estimated from a linear regression of ln(PSA) against time, assuming a simple exponential growth model. As of March 2003, 231 patients had at least 6 months of follow-up (median 45) and at least three PSA measurements (median 8, range 3-21). The distribution of the doubling time was: < 2 years, 26 patients; 2-5 years, 65; 5-10 years, 42; 10-20 years, 26; 20-50 years, 16; >50 years, 56. The median doubling time was 7.0 years; 42% of men had a doubling time of >10 years. The doubling time of untreated clinically localized, low-to-intermediate grade prostate cancer varies widely.
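
    The doubling-time computation described (linear regression of ln(PSA) on time under exponential growth) reduces to DT = ln(2)/slope:

    ```python
    import numpy as np

    def psa_doubling_time(t_years, psa):
        """Doubling time from a linear fit of ln(PSA) against time."""
        slope, _ = np.polyfit(t_years, np.log(psa), 1)
        return np.log(2) / slope

    # Illustrative values: PSA 4.0 -> 5.7 ng/ml over 3 years gives DT ~ 5.9 y
    print(psa_doubling_time([0.0, 1.5, 3.0], [4.0, 4.8, 5.7]))
    ```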

  15. Modeling of single event transients with dual double-exponential current sources: Implications for logic cell characterization

    DOE PAGES

    Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...

    2015-08-07

    Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Moreover, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
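
    The conventional single source injects I(t) = I0*(exp(-t/tau_fall) - exp(-t/tau_rise)); the dual model simply sums two such terms, e.g. a fast prompt-charge pulse plus a slower diffusion tail. A sketch with purely illustrative parameter values:

    ```python
    import numpy as np

    def double_exp(t, i0, tau_rise, tau_fall):
        """Classic double-exponential SET current source."""
        return i0 * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

    def dual_double_exp(t):
        """Two double-exponential sources in parallel (illustrative values)."""
        prompt = double_exp(t, 1.2e-3, 5e-12, 50e-12)    # fast component
        tail = double_exp(t, 0.2e-3, 50e-12, 500e-12)    # slow component
        return prompt + tail

    t = np.linspace(0.0, 2e-9, 2001)
    i_inj = dual_double_exp(t)          # waveform for circuit simulation
    ```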

  16. Mortality table construction

    NASA Astrophysics Data System (ADS)

    Sutawanir

    2015-12-01

    Mortality tables play an important role in actuarial studies such as life annuities, premium determination, premium reserves, pension plan valuation, and pension funding. Some known mortality tables are the CSO mortality table, the Indonesian Mortality Table, the Bowers mortality table, and the Japan mortality table. For actuarial applications, tables are constructed for different settings such as single decrement, double decrement, and multiple decrement. There are two approaches to mortality table construction: a mathematical approach and a statistical approach. Distribution models and estimation theory are the statistical concepts used in mortality table construction. This article aims to discuss the statistical approach to mortality table construction. The distributional assumptions are the uniform death distribution (UDD) and constant force (exponential). Moment estimation and maximum likelihood are used to estimate the mortality parameter. Moment estimation methods are easier to manipulate than maximum likelihood estimation (MLE), but they do not use the complete mortality data; maximum likelihood exploits all available information in mortality estimation. Some MLE equations are complicated and must be solved using numerical methods. The article focuses on single decrement estimation using moment and maximum likelihood estimation. Some extensions to double decrement are also introduced. A simple dataset is used to illustrate mortality estimation and table construction.
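
    Under the constant-force (exponential) assumption, the maximum likelihood estimate of the force of mortality is deaths divided by central exposure, giving a closed-form death probability. A worked sketch (numbers are illustrative):

    ```python
    import math

    def q_constant_force(deaths, central_exposure_years):
        """MLE under constant force of mortality: mu_hat = D / exposure,
        one-year death probability q = 1 - exp(-mu_hat)."""
        mu_hat = deaths / central_exposure_years
        return 1.0 - math.exp(-mu_hat)

    print(q_constant_force(12, 1000.0))  # 12 deaths, 1000 person-years -> ~0.0119
    ```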

  17. The Secular Evolution Of Disc Galaxies And The Origin Of Exponential And Double Exponential Surface Density Profiles

    NASA Astrophysics Data System (ADS)

    Elmegreen, Bruce G.

    2016-10-01

    Exponential radial profiles are ubiquitous in spiral and dwarf irregular galaxies, but the origin of this structural form is not understood. This talk will review the observations of exponential and double exponential disks, considering both the light and the mass profiles, and the contributions from stars and gas. Several theories for this structure will also be reviewed, including primordial collapse, bar and spiral torques, clump torques, galaxy interactions, disk viscosity and other internal processes of angular momentum exchange, and stellar scattering off of clumpy structure. The only process currently known that can account for this structure in the most theoretically difficult case is stellar scattering off disk clumps. Stellar orbit models suggest that such scattering can produce exponentials even in isolated dwarf irregulars that have no bars or spirals, little shear or viscosity, and profiles that extend too far out for the classical Mestel case of primordial collapse with specific angular momentum conservation.

  18. On the modeling of breath-by-breath oxygen uptake kinetics at the onset of high-intensity exercises: simulated annealing vs. GRG2 method.

    PubMed

    Bernard, Olivier; Alata, Olivier; Francaux, Marc

    2006-03-01

    Modeling the non-steady-state O2 uptake on-kinetics of high-intensity exercise in the time domain with empirical models is commonly performed with gradient-descent-based methods. However, these procedures may impair the confidence of the parameter estimation when the modeling functions are not continuously differentiable and when the estimation corresponds to an ill-posed problem. To cope with these problems, an implementation of simulated annealing (SA) methods was compared with the GRG2 algorithm (a gradient-descent method known for its robustness). Forty simulated VO2 on-responses were generated to mimic the real time course for transitions from light- to high-intensity exercises, with a signal-to-noise ratio equal to 20 dB. They were modeled twice with a discontinuous double-exponential function using both estimation methods. GRG2 significantly biased two estimated kinetic parameters of the first exponential (the time delay td1 and the time constant tau1) and impaired the precision (i.e., standard deviation) of the baseline A0, td1, and tau1 compared with SA. SA significantly improved the precision of the three parameters of the second exponential (the asymptotic increment A2, the time delay td2, and the time constant tau2). Nevertheless, td2 was significantly biased by both procedures, and the large confidence intervals of the whole second component's parameters limit their interpretation. To compare both algorithms on experimental data, 26 subjects each performed two transitions from 80 W to 80% maximal O2 uptake on a cycle ergometer, and O2 uptake was measured breath by breath. More than 88% of the kinetic parameter estimations done with the SA algorithm produced the lowest residual sum of squares between the experimental data points and the model. Repeatability coefficients were better with GRG2 for A1 although better with SA for A2 and tau2. Our results demonstrate that the implementation of SA significantly improves the estimation of most of these kinetic parameters, but a large inaccuracy remains in estimating the parameter values of the second exponential.
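
    Simulated annealing needs no gradients, which is the point for a model that is not differentiable at the time delays. A sketch using SciPy's general-purpose annealer as a stand-in for the authors' implementation (bounds and synthetic data are assumptions):

    ```python
    import numpy as np
    from scipy.optimize import dual_annealing

    def model(t, p):
        """Discontinuous double-exponential on-response."""
        a0, a1, td1, tau1, a2, td2, tau2 = p
        return (a0
                + a1 * (1 - np.exp(-(t - td1) / tau1)) * (t >= td1)
                + a2 * (1 - np.exp(-(t - td2) / tau2)) * (t >= td2))

    t = np.arange(0.0, 360.0)
    y = model(t, [0.9, 1.8, 15, 25, 0.5, 120, 180])
    y += np.random.normal(0.0, 0.05, t.size)           # breath-by-breath noise

    bounds = [(0, 2), (0, 3), (0, 60), (5, 60), (0, 2), (60, 300), (30, 400)]
    result = dual_annealing(lambda p: np.sum((y - model(t, p)) ** 2), bounds)
    ```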

  19. Readout models for BaFBr0.85I0.15:Eu image plates

    NASA Astrophysics Data System (ADS)

    Stoeckl, M.; Solodov, A. A.

    2018-06-01

    The linearity of the photostimulated luminescence process makes repeated image-plate scanning a viable technique for extracting more dynamic range. In order to obtain a response estimate, two semi-empirical models for the readout fading of an image plate are introduced; they relate the depth distribution of activated photostimulated luminescence centers within an image plate to the recorded signal. Model parameters are estimated from image-plate scan series with BAS-MS image plates and the Typhoon FLA 7000 scanner for the hard x-ray image-plate diagnostic, over a collection of experiments providing x-ray energy spectra whose approximate shape is a double exponential.

  20. Inferring metabolic networks using the Bayesian adaptive graphical lasso with informative priors.

    PubMed

    Peterson, Christine; Vannucci, Marina; Karakas, Cemal; Choi, William; Ma, Lihua; Maletić-Savatić, Mirjana

    2013-10-01

    Metabolic processes are essential for cellular function and survival. We are interested in inferring a metabolic network in activated microglia, a major neuroimmune cell in the brain responsible for the neuroinflammation associated with neurological diseases, based on a set of quantified metabolites. To achieve this, we apply the Bayesian adaptive graphical lasso with informative priors that incorporate known relationships between covariates. To encourage sparsity, the Bayesian graphical lasso places double exponential priors on the off-diagonal entries of the precision matrix. The Bayesian adaptive graphical lasso allows each double exponential prior to have a unique shrinkage parameter. These shrinkage parameters share a common gamma hyperprior. We extend this model to create an informative prior structure by formulating tailored hyperpriors on the shrinkage parameters. By choosing parameter values for each hyperprior that shift probability mass toward zero for nodes that are close together in a reference network, we encourage edges between covariates with known relationships. This approach can improve the reliability of network inference when the sample size is small relative to the number of parameters to be estimated. When applied to the data on activated microglia, the inferred network includes both known relationships and associations of potential interest for further investigation.

  1. Inferring metabolic networks using the Bayesian adaptive graphical lasso with informative priors

    PubMed Central

    PETERSON, CHRISTINE; VANNUCCI, MARINA; KARAKAS, CEMAL; CHOI, WILLIAM; MA, LIHUA; MALETIĆ-SAVATIĆ, MIRJANA

    2014-01-01

    Metabolic processes are essential for cellular function and survival. We are interested in inferring a metabolic network in activated microglia, a major neuroimmune cell in the brain responsible for the neuroinflammation associated with neurological diseases, based on a set of quantified metabolites. To achieve this, we apply the Bayesian adaptive graphical lasso with informative priors that incorporate known relationships between covariates. To encourage sparsity, the Bayesian graphical lasso places double exponential priors on the off-diagonal entries of the precision matrix. The Bayesian adaptive graphical lasso allows each double exponential prior to have a unique shrinkage parameter. These shrinkage parameters share a common gamma hyperprior. We extend this model to create an informative prior structure by formulating tailored hyperpriors on the shrinkage parameters. By choosing parameter values for each hyperprior that shift probability mass toward zero for nodes that are close together in a reference network, we encourage edges between covariates with known relationships. This approach can improve the reliability of network inference when the sample size is small relative to the number of parameters to be estimated. When applied to the data on activated microglia, the inferred network includes both known relationships and associations of potential interest for further investigation. PMID:24533172
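
    The double exponential (Laplace) prior that drives the shrinkage has density p(x | lam) = (lam/2) * exp(-lam * |x|); larger lam concentrates mass near zero. A one-function sketch:

    ```python
    import numpy as np

    def laplace_logpdf(x, lam):
        """Log-density of the double exponential (Laplace) shrinkage prior."""
        return np.log(lam / 2.0) - lam * np.abs(x)
    ```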

  2. A new look at atmospheric carbon dioxide

    NASA Astrophysics Data System (ADS)

    Hofmann, David J.; Butler, James H.; Tans, Pieter P.

    Carbon dioxide is increasing in the atmosphere and is of considerable concern in global climate change because of its greenhouse gas warming potential. The rate of increase has accelerated since measurements began at Mauna Loa Observatory in 1958, where carbon dioxide increased from less than 1 part per million per year (ppm yr^-1) prior to 1970 to more than 2 ppm yr^-1 in recent years. Here we show that the anthropogenic component (atmospheric value reduced by the pre-industrial value of 280 ppm) of atmospheric carbon dioxide has been increasing exponentially with a doubling time of about 30 years since the beginning of the industrial revolution (~1800). Even during the 1970s, when fossil fuel emissions dropped sharply in response to the "oil crisis" of 1973, the anthropogenic atmospheric carbon dioxide level continued increasing exponentially at Mauna Loa Observatory. Since the growth rate (time derivative) of an exponential has the same characteristic lifetime as the function itself, the carbon dioxide growth rate is also doubling at the same rate. This explains the observation that the linear growth rate of carbon dioxide has more than doubled in the past 40 years. The accelerating growth rate is simply the outcome of exponential growth in carbon dioxide with a nearly constant doubling time of about 30 years (about 2%/yr) and appears to have tracked human population since the pre-industrial era.
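
    The arithmetic behind the quoted figures: exponential growth N(t) = N0*exp(r*t) doubles when exp(r*T) = 2, so T = ln(2)/r, and a 30-year doubling time corresponds to a growth rate of about 2.3%/yr:

    ```python
    import math

    r = math.log(2) / 30.0         # growth rate implied by a 30-yr doubling time
    print(f"{100 * r:.2f} %/yr")   # -> 2.31 %/yr, i.e. "about 2%/yr"
    ```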

  3. First off-time treatment prostate-specific antigen kinetics predicts survival in intermittent androgen deprivation for prostate cancer.

    PubMed

    Sanchez-Salas, Rafael; Olivier, Fabien; Prapotnich, Dominique; Dancausa, José; Fhima, Mehdi; David, Stéphane; Secin, Fernando P; Ingels, Alexandre; Barret, Eric; Galiano, Marc; Rozet, François; Cathelineau, Xavier

    2016-01-01

    Prostate-specific antigen (PSA) doubling time relies on an exponential kinetic pattern. This pattern has never been validated in the setting of intermittent androgen deprivation (IAD). The objective is to analyze the prognostic significance of recurrent PSA kinetic patterns in patients with prostate cancer (PCa) undergoing IAD. A retrospective study was conducted on 377 patients treated with IAD. The on-treatment period (ONTP) consisted of gonadotropin-releasing hormone agonist injections combined with an oral androgen receptor antagonist. The off-treatment period (OFTP) began when PSA was lower than 4 ng/ml. The ONTP resumed when PSA was higher than 20 ng/ml. PSA values of each OFTP were fitted with three basic patterns: exponential (PSA(t) = λ·e^(αt)), linear (PSA(t) = a·t), and power law (PSA(t) = a·t^c). Univariate and multivariate Cox regression models were used to analyze predictive factors for oncologic outcomes. Only 45% of the analyzed OFTPs were exponential. Linear and power law PSA kinetics represented 7.5% and 7.7%, respectively. The remaining fraction of analyzed OFTPs (40%) exhibited complex kinetics. Exponential PSA kinetics during the first OFTP was significantly associated with worse oncologic outcome. The estimated 10-year cancer-specific survival (CSS) was 46% for exponential versus 80% for nonexponential PSA kinetic patterns. The corresponding 10-year probability of castration-resistant prostate cancer (CRPC) was 69% and 31% for the two patterns, respectively. Limitations include the retrospective design and mixed indications for IAD. PSA kinetics fitted an exponential pattern in approximately half of the OFTPs. First-OFTP exponential PSA kinetics were associated with a shorter time to CRPC and worse CSS.

  4. Rectal temperature-based death time estimation in infants.

    PubMed

    Igari, Yui; Hosokai, Yoshiyuki; Funayama, Masato

    2016-03-01

    In determining the time of death in infants based on rectal temperature, the same methods used in adults are generally applied. However, whether the methods for adults are suitable for infants is unclear. In this study, we examined the following 3 methods in 20 infant death cases: computer simulation of rectal temperature based on the infinite cylinder model (Ohno's method), computer-based double exponential approximation based on Marshall and Hoare's double exponential model with Henssge's parameter determination (Henssge's method), and computer-based collinear approximation based on extrapolation of the rectal temperature curve (collinear approximation). The interval between the last time the infant was seen alive and the time that he/she was found dead was defined as the death time interval and compared with the estimated time of death. With Ohno's method, 7 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. The results of both Henssge's method and collinear approximation were clearly inferior to those of Ohno's method. The corrective factor was set within the range of 0.7-1.3 in Henssge's method, and a modified program was newly developed to make it possible to change the corrective factors. Modification A, in which the upper limit of the corrective factor range was set to the maximum value for each body weight, produced the best results: 8 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. The influence of thermal insulation on the actual infants may have been stronger than previously shown by Henssge. We conclude that Ohno's method and Modification A are useful for death time estimation in infants. However, it is important to accept the estimated time of death with a certain latitude, considering other circumstances.
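
    For reference, the Marshall and Hoare double exponential cooling model with Henssge's parameterization (valid for ambient temperatures up to about 23 °C) can be sketched as below; the constants shown are the standard adult-derived values, and infant cases require corrective factors as discussed above:

    ```python
    import math

    def henssge_q(t_hours, body_mass_kg, corrective=1.0):
        """Standardized temperature Q = (T_rectal - T_ambient)/(37.2 - T_ambient)
        under the Marshall-Hoare double exponential model with Henssge's
        parameter determination (adult-derived constants, ambient <= 23 C)."""
        B = -1.2815 * (corrective * body_mass_kg) ** -0.625 + 0.0284
        return 1.25 * math.exp(B * t_hours) - 0.25 * math.exp(5 * B * t_hours)
    ```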

  5. Impact of inhomogeneity on SH-type wave propagation in an initially stressed composite structure

    NASA Astrophysics Data System (ADS)

    Saha, S.; Chattopadhyay, A.; Singh, A. K.

    2018-02-01

    The present analysis examines the influence of distinct forms of inhomogeneity, in a composite structure comprised of double superficial layers lying over a half-space, on the phase velocity of an SH-type wave propagating through it. Propagation of the SH-type wave in the said structure has been examined for four distinct cases of inhomogeneity: when the inhomogeneity in the double superficial layers is due to exponential variation in density only (Case I); when it is due to exponential variation in rigidity only (Case II); when it is due to exponential variation in rigidity, density and initial stress (Case III); and when it is due to linear variation in rigidity, density and initial stress (Case IV). Closed-form expressions of the dispersion relation have been obtained for all four aforementioned cases through extensive application of Debye asymptotic analysis. The deduced dispersion relations for all the cases are found to be in good agreement with the classical Love-wave equation. Numerical computation has been carried out to graphically demonstrate the effect of the inhomogeneity parameters, the initial stress parameters and the width ratio associated with the double superficial layers on the dispersion curve for each of the four aforesaid cases. A meticulous examination of the distinct cases of inhomogeneity and initial stress in the context of the considered problem has been carried out in a comparative manner.

  6. Understanding Exponential Growth: As Simple as a Drop in a Bucket.

    ERIC Educational Resources Information Center

    Goldberg, Fred; Shuman, James

    1984-01-01

    Provides procedures for a simple laboratory activity on exponential growth and its characteristic doubling time. The equipment needed consists of a large plastic bucket, an eyedropper, a stopwatch, an assortment of containers and graduated cylinders, and a supply of water. (JN)

  7. Double slip effects of Magnetohydrodynamic (MHD) boundary layer flow over an exponentially stretching sheet with radiation, heat source and chemical reaction

    NASA Astrophysics Data System (ADS)

    Shaharuz Zaman, Azmanira; Aziz, Ahmad Sukri Abd; Ali, Zaileha Md

    2017-09-01

    The double slip effect on magnetohydrodynamic boundary layer flow over an exponentially stretching sheet with suction/blowing, radiation, chemical reaction and a heat source is presented in this analysis. By using a similarity transformation, the governing partial differential equations of momentum, energy and concentration are transformed into non-linear ordinary differential equations. These equations are solved using the Runge-Kutta-Fehlberg method with a shooting technique in the MAPLE software environment. The effects of the various parameters on the velocity, temperature and concentration profiles are graphically presented and discussed.
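
    The shooting technique converts the boundary-value problem into repeated initial-value solves plus a root find on the unknown initial slope. A sketch on the classical Blasius boundary layer equation (a simpler relative of the exponentially stretching sheet problem, used here only to illustrate the method):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def rhs(eta, y):
        """Blasius: f''' + 0.5*f*f'' = 0 written as a first-order system."""
        f, fp, fpp = y
        return [fp, fpp, -0.5 * f * fpp]

    def residual(s, eta_max=10.0):
        """Miss distance on the far-field condition f'(inf) = 1 when
        shooting with f''(0) = s."""
        sol = solve_ivp(rhs, [0.0, eta_max], [0.0, 0.0, s], rtol=1e-8)
        return sol.y[1, -1] - 1.0

    s_star = brentq(residual, 0.1, 1.0)   # converges to ~0.332, the known value
    ```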

  8. Cross-Conjugated Nanoarchitectures

    DTIC Science & Technology

    2013-08-23

    Compounds were further evaluated by Lippert–Mataga analysis of the fluorescence solvatochromism and measurement of quantum yields and fluorescence lifetimes. [The remainder of the abstract is a fragmentary photophysical data table for A(mP)2A (Cy), D(Th)2D (Cy), and A(Th)2A (Tol); its surviving footnotes read: "Calculated from Lippert–Mataga plots", "Double exponential fit: τ1 = 21.5 ns (73%) and τ2 = 3.7 ns (27%)", and "Double exponential fit: τ1 = 0.85 ns".]

  9. Ill-posed problem and regularization in reconstruction of radiobiological parameters from serial tumor imaging data

    NASA Astrophysics Data System (ADS)

    Chvetsov, Alexei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh

    2015-11-01

    The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth, and cell loss. Accurate assessment of treatment response requires separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters from imaging data can be considered an inverse ill-posed problem, because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of the parameter reconstruction presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for surviving fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.
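
    The stabilization idea is standard Tikhonov-style variational regularization: add a penalty so the inversion of a sum of exponentials stops amplifying noise. A linearized sketch (the penalty weight is an assumed value):

    ```python
    import numpy as np

    def tikhonov_lsq(A, b, alpha=1e-2):
        """Minimize ||A x - b||^2 + alpha * ||x||^2; the normal equations
        (A^T A + alpha I) x = A^T b are well conditioned for alpha > 0."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
    ```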

  10. Cyberinfrastructure for the NSF Ocean Observatories Initiative

    NASA Astrophysics Data System (ADS)

    Orcutt, J. A.; Vernon, F. L.; Arrott, M.; Chave, A.; Krueger, I.; Schofield, O.; Glenn, S.; Peach, C.; Nayak, A.

    2007-12-01

    The Internet today is vastly different than the Internet that we knew even five years ago, and the changes that will be evident five years from now, when the NSF Ocean Observatories Initiative (OOI) prototype has been installed, are nearly unpredictable. Much of this progress is based on the exponential growth in the capabilities of consumer electronics and information technology; the reality of this exponential behavior is rarely appreciated. For example, the number of transistors on a square cm of silicon will continue to double every 18 months, the density of disk storage will double every year, and network bandwidth will double every eight months. Today's desktop 2 TB RAID will be 64 TB, and the 10 Gbps Regional Scale Network fiber optic connection will be running at 1.8 Tbps. The same exponential behavior characterizes the future of genome sequencing. The first two sequences of composites of individuals' genes cost tens of millions of dollars in 2001. Dr. Craig Venter just published a more accurate complete human genome (his own) at a cost on the order of $100,000. The J. Craig Venter Institute has provided support for the X Prize for Genomics, offering $10 million to the first successful sequencing of a human genome for $1,000. It is anticipated that the prize will be won within five years. Major advances in technology that are broadly viewed as disruptive or revolutionary, rather than evolutionary, will often depend upon the exploitation of exponential expansions in capability. Applications of these ideas to the OOI will be discussed. Specifically, the agile ability to scale cyberinfrastructure commensurate with the exponential growth of sensors, networks and computational capability and demand will be described.

  11. Characteristic length of the knotting probability revisited

    NASA Astrophysics Data System (ADS)

    Uehara, Erica; Deguchi, Tetsuo

    2015-09-01

    We present a self-avoiding polygon (SAP) model for circular DNA in which the radius of the impermeable cylindrical segments corresponds to the screening length of double-stranded DNA surrounded by counter ions. For the model, we evaluate through simulation the probability that a generated SAP with N segments has a given knot K. We call it the knotting probability of a knot K with N segments for the SAP model. We show that when N is large the most significant factor in the knotting probability is given by the exponentially decaying part exp(-N/N_K), where the estimates of the parameter N_K are consistent with the same value for all the different knots we investigated. We thus call it the characteristic length of the knotting probability. We give formulae expressing the characteristic length as a function of the cylindrical radius r_ex, i.e. the screening length of double-stranded DNA.

  12. Exponentially convergent state estimation for delayed switched recurrent neural networks.

    PubMed

    Ahn, Choon Ki

    2011-11-01

    This paper deals with the delay-dependent exponentially convergent state estimation problem for delayed switched neural networks. A set of delay-dependent criteria is derived under which the resulting estimation error system is exponentially stable. It is shown that the gain matrix of the proposed state estimator is characterised in terms of the solution to a set of linear matrix inequalities (LMIs), which can be checked readily by using some standard numerical packages. An illustrative example is given to demonstrate the effectiveness of the proposed state estimator.

  13. Exhaustive Versus Randomized Searchers for Nonlinear Optimization in 21st Century Computing: Solar Application

    NASA Technical Reports Server (NTRS)

    Sen, Syamal K.; Ali Shaykhian, Gholam

    2010-01-01

    We present a simple multi-dimensional exhaustive search method to obtain, in a reasonable time, the optimal solution of a nonlinear programming problem. It is more relevant in the present-day non-mainframe computing scenario, where an estimated 95% of computing resources remain unutilized and computing speed touches petaflops. While processor speed is doubling every 18 months, bandwidth is doubling every 12 months, and hard disk space is doubling every 9 months. A randomized search algorithm or, equivalently, an evolutionary search method is often used instead of an exhaustive search algorithm. The reason is that a randomized approach is usually polynomial-time, i.e., fast, while an exhaustive search method is exponential-time, i.e., slow. We discuss the increasing importance of exhaustive search in optimization, with the steady increase of computing power, for solving many real-world problems of reasonable size. We also discuss the computational error and complexity of the search algorithm, focusing on the fact that no measuring device can usually measure a quantity with an accuracy greater than 0.005%. We stress that the quality of solution of the exhaustive search, a deterministic method, is better than that of randomized search. In the 21st-century computing environment, exhaustive search cannot be set aside as untouchable, and it is not always exponential. We also describe a possible application of these algorithms in improving the efficiency of solar cells, a real hot topic in the current energy crisis. These algorithms could be excellent tools in the hands of experimentalists and could save not only a large amount of the time needed for experiments but could also validate theory against experimental results quickly.
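
    A multi-dimensional exhaustive search in its simplest form is a uniform grid scan; a sketch (grid resolution is an arbitrary choice, and cost grows exponentially with dimension, which is the trade-off discussed above):

    ```python
    import itertools

    def exhaustive_min(f, box, steps=100):
        """Evaluate f on a uniform grid over the hyper-box `box`
        (list of (lo, hi) pairs) and return the best grid point."""
        axes = [[lo + (hi - lo) * i / steps for i in range(steps + 1)]
                for lo, hi in box]
        return min(itertools.product(*axes), key=f)

    best = exhaustive_min(lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.5) ** 2,
                          [(-1.0, 1.0), (-1.0, 1.0)])
    ```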

  14. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    NASA Astrophysics Data System (ADS)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    The earthquake recurrence interval is one of the important ingredients of probabilistic seismic hazard assessment (PSHA) for any location. The exponential, gamma, Weibull and lognormal distributions are well-established probability models for recurrence interval estimation. However, they have certain shortcomings, so it is worth searching for alternative, more sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade, for an elapsed time of 17 years (i.e., 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
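
    The generalized (exponentiated) exponential distribution has CDF F(x) = (1 - exp(-lam*(x - mu)))**alpha for x > mu. A maximum likelihood sketch with the location fixed at zero for simplicity (starting values are assumptions):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def ge_logpdf(x, lam, alpha):
        """log f(x) for the generalized exponential with mu = 0:
        f(x) = alpha*lam*exp(-lam*x)*(1 - exp(-lam*x))**(alpha - 1)."""
        z = lam * np.asarray(x, float)
        return (np.log(alpha) + np.log(lam) - z
                + (alpha - 1) * np.log1p(-np.exp(-z)))

    def fit_mle(intervals):
        nll = lambda p: -np.sum(ge_logpdf(intervals, p[0], p[1]))
        start = [1.0 / np.mean(intervals), 1.0]
        return minimize(nll, start, bounds=[(1e-6, None), (1e-6, None)]).x
    ```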

  15. Characteristics of Double Exponentially Tapered Slot Antenna (DETSA) Conformed in the Longitudinal Direction Around a Cylindrical Structure

    NASA Technical Reports Server (NTRS)

    Ponchak, George E.; Jordan, Jennifer L.; Chevalier, Christine T.

    2006-01-01

    The characteristics of a double exponentially tapered slot antenna (DETSA) as a function of the radius that the DETSA is conformed to in the longitudinal direction is presented. It is shown through measurements and simulations that the radiation pattern of the conformed antenna rotates in the direction through which the antenna is curved, and that diffraction affects the radiation pattern if the radius of curvature is too small or the frequency too high. The gain of the antenna degrades by only 1 dB if the radius of curvature is large and more than 2 dB for smaller radii. The main effect due to curving the antenna is an increased cross-polarization in the E-plane.

  16. Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model

    NASA Astrophysics Data System (ADS)

    Al Sobhi, Mashail M.

    2015-02-01

    Bayesian estimation for the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Also, Bayesian prediction bounds for future DGOS from exponentiated Weibull model are obtained. The symmetric and asymmetric loss functions are considered for Bayesian computations. The Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.

  17. A note on large gauge transformations in double field theory

    DOE PAGES

    Naseer, Usman

    2015-06-03

    Here, we give a detailed proof of the conjecture by Hohm and Zwiebach in double field theory. Our result implies that their proposal for large gauge transformations in terms of the Jacobian matrix for coordinate transformations is, as required, equivalent to the standard exponential map associated with the generalized Lie derivative along a suitable parameter.

  18. Double closed-loop control of integrated optical resonance gyroscope with mean-square exponential stability.

    PubMed

    Li, Hui; Liu, Liying; Lin, Zhili; Wang, Qiwei; Wang, Xiao; Feng, Lishuang

    2018-01-22

    A new double closed-loop control system with mean-square exponential stability is proposed to optimize the detection accuracy and dynamic response characteristics of the integrated optical resonance gyroscope (IORG). The influence mechanism of optical nonlinear effects on system detection sensitivity is investigated to optimize the demodulation gain, the maximum sensitivity and the linear working region of the gyro system. In particular, we analyze the effect of optical parameter fluctuation on the parameter uncertainty of the system, and investigate how laser locking-frequency noise influences the closed-loop detection accuracy of angular velocity. A stochastic disturbance model of the double closed-loop IORG is established that takes into consideration unfavorable factors such as optical nonlinearity, disturbances, optical parameter fluctuation and unavoidable system noise. A robust control algorithm is also designed to guarantee the mean-square exponential stability of the system with a prescribed H∞ performance, in order to improve the detection accuracy and dynamic performance of the IORG. The conducted experiments demonstrate that the IORG has a dynamic response time of less than 76 μs, a long-term bias stability of 7.04°/h with an integration time of 10 s over a one-hour test, and a corresponding bias stability of 1.841°/h based on Allan deviation, which validates the effectiveness and usefulness of the proposed detection scheme.

  19. Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.

    PubMed

    van Elburg, Ronald A J; van Ooyen, Arjen

    2009-07-01

    An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
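
    The double exponential time course in question has the standard form g(t) ∝ exp(-t/tau_decay) - exp(-t/tau_rise); a sketch normalized so the peak equals g_max (time constants are illustrative):

    ```python
    import numpy as np

    def syn_conductance(t, g_max=1.0, tau_rise=0.5, tau_decay=5.0):
        """Double exponential synaptic conductance (times in ms), scaled so
        its maximum equals g_max."""
        t_peak = (tau_rise * tau_decay / (tau_decay - tau_rise)
                  * np.log(tau_decay / tau_rise))
        norm = np.exp(-t_peak / tau_decay) - np.exp(-t_peak / tau_rise)
        return g_max * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise)) / norm
    ```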

  20. Numerical Calculation of the Spectrum of the Severe (1%) Lightning Current and Its First Derivative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C G; Ong, M M; Perkins, M P

    2010-02-12

    Recently, the direct-strike lightning environment for the stockpile-to-target sequence was updated [1]. In [1], the severe (1%) lightning current waveforms for first and subsequent return strokes are defined based on Heidler's waveform. This report presents numerical calculations of the spectra of those 1% lightning current waveforms and their first derivatives. First, the 1% lightning current models are repeated here for convenience. Then, the numerical method for calculating the spectra is presented and tested. The test uses a double-exponential waveform and its first derivative, which we fit to the previous 1% direct-strike lightning environment from [2]. Finally, the resulting spectra are given and are compared with those of the double-exponential waveform and its first derivative.
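
    Heidler's waveform and a discrete spectrum estimate can be sketched as follows; the parameter values are generic return-stroke numbers, not the report's 1% severe-threat values:

    ```python
    import numpy as np

    def heidler(t, i0=200e3, tau1=19e-6, tau2=485e-6, n=10):
        """Heidler return-stroke current i(t) = (I0/eta) * x/(1+x) * exp(-t/tau2)
        with x = (t/tau1)**n and eta the peak-correction factor."""
        x = (t / tau1) ** n
        eta = np.exp(-(tau1 / tau2) * (n * tau2 / tau1) ** (1.0 / n))
        return (i0 / eta) * x / (1.0 + x) * np.exp(-t / tau2)

    dt = 1e-8
    t = np.arange(0.0, 2e-3, dt)
    i = heidler(t)
    spectrum = np.abs(np.fft.rfft(i)) * dt        # amplitude spectrum
    freqs = np.fft.rfftfreq(t.size, dt)
    ```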

  1. Doubling Time for Nonexponential Families of Functions

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2010-01-01

    One special characteristic of any exponential growth or decay function f(t) = Ab[superscript t] is its unique doubling time or half-life, each of which depends only on the base "b". The half-life is used to characterize the rate of decay of any radioactive substance or the rate at which the level of a medication in the bloodstream decays as it is…
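
    The property being contrasted: for f(t) = A*b**t the doubling time solves b**T = 2 and is the constant T = ln(2)/ln(b), independent of A and of the starting time, which is exactly what fails for non-exponential families:

    ```python
    import math

    b = 1.05                          # 5 % growth per period (illustrative)
    T = math.log(2) / math.log(b)     # -> ~14.2 periods, same from any start
    ```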

  2. High-Performance Clock Synchronization Algorithms for Distributed Wireless Airborne Computer Networks with Applications to Localization and Tracking of Targets

    DTIC Science & Technology

    2010-06-01

    GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation, yielding more accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a mixture of distributions.

  3. Observational constraints on varying neutrino-mass cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, Chao-Qiang; Lee, Chung-Chi; Myrzakulov, R.

    We consider generic models of quintessence and investigate the influence of massive neutrino matter with field-dependent masses on the matter power spectrum. In the case of minimally coupled neutrino matter, we examine the effect in tracker models with inverse power-law and double exponential potentials. We present detailed investigations for the scaling field with a steep exponential potential, non-minimally coupled to massive neutrino matter, and we derive constraints on field-dependent neutrino masses from the observational data.

  4. Exponential Boundary Observers for Pressurized Water Pipe

    NASA Astrophysics Data System (ADS)

    Hermine Som, Idellette Judith; Cocquempot, Vincent; Aitouche, Abdel

    2015-11-01

    This paper deals with state estimation on a pressurized water pipe modeled by nonlinear coupled distributed hyperbolic equations for non-conservative laws with three known boundary measurements. Our objective is to estimate the fourth boundary variable, which will be useful for leakage detection. Two approaches are studied. First, the distributed hyperbolic equations are discretized through a finite-difference scheme. By using the Lipschitz property of the nonlinear term and a Lyapunov function, the exponential stability of the estimation error is proven by solving Linear Matrix Inequalities (LMIs). Second, the distributed hyperbolic system is preserved for state estimation. After state transformations, a Luenberger-like PDE boundary observer based on backstepping mathematical tools is proposed. An exponential Lyapunov function is used to prove the stability of the resulting estimation error. The performance of the two observers is shown on a simulated water pipe prototype.

  5. Exponential propagators for the Schrödinger equation with a time-dependent potential.

    PubMed

    Bader, Philipp; Blanes, Sergio; Kopylov, Nikita

    2018-06-28

    We consider the numerical integration of the Schrödinger equation with a time-dependent Hamiltonian given as the sum of the kinetic energy and a time-dependent potential. Commutator-free (CF) propagators are exponential propagators that have been shown to be highly efficient for general time-dependent Hamiltonians. We propose new CF propagators that are tailored for Hamiltonians of the said structure, showing a considerably improved performance. We obtain new fourth- and sixth-order CF propagators as well as a novel sixth-order propagator that incorporates a double commutator that only depends on coordinates, so this term can be considered cost-free. The algorithms require the computation of the action of exponentials on a vector, similar to the well-known exponential midpoint propagator, and this is carried out using the Lanczos method. We illustrate the performance of the new methods on several numerical examples.

  6. In vivo growth of 60 non-screening detected lung cancers: a computed tomography study.

    PubMed

    Mets, Onno M; Chung, Kaman; Zanen, Pieter; Scholten, Ernst T; Veldhuis, Wouter B; van Ginneken, Bram; Prokop, Mathias; Schaefer-Prokop, Cornelia M; de Jong, Pim A

    2018-04-01

    Current pulmonary nodule management guidelines are based on nodule volume doubling time, which assumes exponential growth behaviour. However, this assumption has never been validated in vivo in the routine-care target population. This study evaluates growth patterns of untreated solid and subsolid lung cancers of various histologies in a non-screening setting. Growth behaviour of pathology-proven lung cancers from two academic centres that were imaged at least three times before diagnosis (n=60) was analysed using dedicated software. Random-intercept random-slope mixed-models analysis was applied to test which growth pattern most accurately described lung cancer growth. Individual growth curves were plotted per pathology subgroup and nodule type. We confirmed that growth in both subsolid and solid lung cancers is best explained by an exponential model. However, subsolid lesions generally progress more slowly than solid ones. Baseline lesion volume was not related to growth, indicating that smaller lesions do not grow more slowly than larger ones. By showing that lung cancer conforms to exponential growth, we provide the first experimental basis in the routine-care setting for the assumption made in volume doubling time analysis. Copyright ©ERS 2018.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Šefl, Martin, E-mail: martin.sefl@gmail.com; Kyriakou, Ioanna; Emfietzoglou, Dimitris, E-mail: demfietz@cc.uoi.gr

    Purpose: To study theoretically the impact on cell survival of the radionuclide uptake rate inside tumor cells for a single administration of a radiopharmaceutical. Methods: The instantaneous-uptake model of O’Donoghue [“The impact of tumor cell proliferation in radioimmunotherapy,” Cancer 73, 974–980 (1994)] for a proliferating cell population irradiated by an exponentially decreasing dose-rate is here extended to allow for the monoexponential uptake of the radiopharmaceutical by the targeted cells. The time derivative of the survival curve is studied in detail, deducing an expression for the minimum of the surviving fraction and the biologically effective dose (BED). Results: Surviving fractions are calculated over a parameter range that is clinically relevant and broad enough to establish general trends. Specifically, results are presented for the therapy radionuclides Y-90, I-131, and P-32, assuming uptake half-times 1–24 h, extrapolated initial dose-rates 0.5–1 Gy h(-1), and a biological clearance half-life of seven days. Representative radiobiological parameters for radiosensitive and rapidly proliferating tumor cells are used, with cell doubling time equal to 2 days and α-coefficient equal to 0.3 and 0.5 Gy(-1). It is shown that neglecting the uptake phase of the radiopharmaceutical (i.e., assuming instantaneous uptake) results in a sizeable over-estimation of cell-kill (i.e., under-estimation of cell survival) even for uptake half-times of only a few hours. The differences between the exponential-uptake model and the instantaneous-uptake model become larger for high peak dose-rates, slow uptakes, and (slightly) for long-lived radionuclides. Moreover, the sensitivity of the survival curve on the uptake model was found to be higher for the tumor cells with the larger α-coefficient. Conclusions: The exponential-uptake rate of the radiopharmaceutical inside targeted cells appears to have a considerable effect on the survival of a proliferating cell population and might need to be considered in radiobiological models of tumor cell-kill in radionuclide therapy.

  8. Line transect estimation of population size: the exponential case with grouped data

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Crain, B.R.

    1979-01-01

    Gates, Marshall, and Olson (1968) investigated the line transect method of estimating grouse population densities in the case where sighting probabilities are exponential. This work is followed by a simulation study in Gates (1969). A general overview of line transect analysis is presented by Burnham and Anderson (1976). These articles all deal with the ungrouped data case. In the present article, an analysis of line transect data is formulated under the Gates framework of exponential sighting probabilities and in the context of grouped data.

  9. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix is derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
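
    A minimal Python sketch of the Taylor-linearized least-squares cycle described above, assuming a single-exponential decay model; the data and starting values are illustrative, not from the paper:

        import numpy as np

        def gauss_newton_exp(t, y, p0, n_iter=50, tol=1e-10):
            """Fit y ~ a * exp(-b * t) by repeated linearization:
            expand the model to first order around the current estimate,
            solve the linear least-squares problem for a correction,
            and iterate until the correction is negligible."""
            a, b = p0
            for _ in range(n_iter):
                e = np.exp(-b * t)
                f = a * e                              # model values
                J = np.column_stack([e, -t * f])       # [df/da, df/db]
                delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)
                a, b = a + delta[0], b + delta[1]
                if np.linalg.norm(delta) < tol:
                    break
            return a, b

        # Synthetic decay-type data
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 5.0, 100)
        y = 2.0 * np.exp(-1.3 * t) + 0.01 * rng.standard_normal(t.size)
        print(gauss_newton_exp(t, y, p0=(1.0, 1.0)))   # ~ (2.0, 1.3)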

  10. Parameter estimation for the exponential-normal convolution model for background correction of affymetrix GeneChip data.

    PubMed

    McGee, Monnie; Chen, Zhongxue

    2006-01-01

    There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
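
    The conditional expectation at the heart of this background correction has a closed form under the convolution model; a sketch with made-up parameter values (estimating mu, sigma and alpha well is precisely the subject of the paper):

        import numpy as np
        from scipy.stats import norm

        def background_correct(o, mu, sigma, alpha):
            """E[S | O = o] for O = S + B with S ~ Exponential(rate alpha)
            and B ~ Normal(mu, sigma^2): the signal estimate used to
            background-correct an observed PM intensity o."""
            a = o - mu - sigma**2 * alpha
            num = norm.pdf(a / sigma) - norm.pdf((o - a) / sigma)
            den = norm.cdf(a / sigma) + norm.cdf((o - a) / sigma) - 1.0
            return a + sigma * num / den

        pm = np.array([120.0, 300.0, 80.0])        # observed intensities
        print(background_correct(pm, mu=100.0, sigma=15.0, alpha=0.01))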

  11. Difference in Dwarf Galaxy Surface Brightness Profiles as a Function of Environment

    NASA Astrophysics Data System (ADS)

    Lee, Youngdae; Park, Hong Soo; Kim, Sang Chul; Moon, Dae-Sik; Lee, Jae-Joon; Kim, Dong-Jin; Cha, Sang-Mok

    2018-05-01

    We investigate surface brightness profiles (SBPs) of dwarf galaxies in field, group, and cluster environments. With deep BV I images from the Korea Microlensing Telescope Network Supernova Program, SBPs of 38 dwarfs in the NGC 2784 group are fitted by a single-exponential or double-exponential model. We find that 53% of the dwarfs are fitted with single-exponential profiles (“Type I”), while 47% show double-exponential profiles; 37% of all dwarfs have smaller sizes for the outer part than the inner part (“Type II”), while 10% have a larger outer than inner part (“Type III”). We compare these results with those in the field and in the Virgo cluster, where the SBP types of 102 field dwarfs are compiled from a previous study and the SBP types of 375 cluster dwarfs are measured using SDSS r-band images. The distributions of SBP types differ in the three environments: the most common types are Type II in the field, Types I and II in the NGC 2784 group, and Types I and III in the Virgo cluster. After comparing the sizes of dwarfs in different environments, we suggest that environmental effects change the sizes of some dwarfs, so that SBP types can be transformed, which explains why the distributions of SBP types differ in the three environments. We discuss possible environmental mechanisms for the transformation of SBP types. Based on data collected at KMTNet Telescopes and SDSS.

  12. NMR investigation of the short-chain ionic surfactant-water systems.

    PubMed

    Popova, M V; Tchernyshev, Y S; Michel, D

    2004-02-03

    The structure and dynamics of surfactant molecules [CH3(CH2)7COOK] in heavy water solutions were investigated by 1H and 2H NMR. A double-exponential attenuation of the spin-echo amplitude in a Carr-Purcell-Meiboom-Gill experiment was found, which we attribute to both bound and monomeric states. At high concentrations, the NMR self-diffusion measurements also showed a double-exponential decay of the spin-echo signal versus the square of the dc magnetic field gradient. The slow component of the diffusion process is caused by micellar aggregates, while the fast component is the result of the self-diffusion of the monomers through the micelles. The self-diffusion studies indicate that the form of the micelles changes with increasing total surfactant concentration. The critical temperature range for self-association is reflected in the 1H transverse relaxation.

  13. A U-shaped linear ultrasonic motor using longitudinal vibration transducers with double feet.

    PubMed

    Liu, Yingxiang; Liu, Junkao; Chen, Weishan; Shi, Shengjun

    2012-05-01

    A U-shaped linear ultrasonic motor using longitudinal vibration transducers with double feet was proposed in this paper. The proposed motor contains a horizontal transducer and two vertical transducers. The horizontal transducer includes two exponentially shaped horns located at the leading ends, and each vertical transducer contains one exponentially shaped horn. The horns of the horizontal transducer and the vertical transducers intersect at the tip ends, where the driving feet are located. Longitudinal vibrations are superimposed in the motor and generate elliptical motions at the driving feet. The two vibration modes of the motor are discussed, and the motion trajectories of the driving feet are deduced. By adjusting the structural parameters, the resonance frequencies of the two vibration modes were degenerated. A prototype motor was fabricated and measured. Typical output of the prototype is a no-load speed of 854 mm/s and a maximum thrust force of 40 N at a voltage of 200 V(rms).

  14. Using Differentials to Differentiate Trigonometric and Exponential Functions

    ERIC Educational Resources Information Center

    Dray, Tevian

    2013-01-01

    Starting from geometric definitions, we show how differentials can be used to differentiate trigonometric and exponential functions without limits, numerical estimates, solutions of differential equations, or integration.

  15. Design and implementation of the NaI(Tl)/CsI(Na) detectors output signal generator

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Cong-Zhan; Zhao, Jian-Ling; Zhang, Fei; Zhang, Yi-Fei; Li, Zheng-Wei; Zhang, Shuo; Li, Xu-Fang; Lu, Xue-Feng; Xu, Zhen-Ling; Lu, Fang-Jun

    2014-02-01

    We designed and implemented a signal generator that can simulate the output of the NaI(Tl)/CsI(Na) detectors' pre-amplifier onboard the Hard X-ray Modulation Telescope (HXMT). Developing the FPGA (Field Programmable Gate Array) logic in VHDL and adding a random component, we produced a double-exponential random pulse signal generator. The statistical distribution of the signal amplitude is programmable, and the time intervals between adjacent pulses statistically follow a negative exponential distribution.
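
    A software analogue of such a generator is straightforward; a hedged Python sketch producing double-exponential pulses at exponentially distributed intervals, with a pluggable amplitude distribution (all parameter values are illustrative, not the HXMT electronics):

        import numpy as np

        def pulse_train(duration, rate, tau_rise, tau_decay,
                        amp_sampler, dt=1e-8, rng=None):
            """Pulses of shape exp(-t/tau_decay) - exp(-t/tau_rise)
            at Poisson arrival times (negative exponential intervals),
            with amplitudes drawn from amp_sampler."""
            if rng is None:
                rng = np.random.default_rng()
            t = np.arange(0.0, duration, dt)
            signal = np.zeros_like(t)
            arrival = rng.exponential(1.0 / rate)
            while arrival < duration:
                tail = t >= arrival
                tau = t[tail] - arrival
                signal[tail] += amp_sampler(rng) * (
                    np.exp(-tau / tau_decay) - np.exp(-tau / tau_rise))
                arrival += rng.exponential(1.0 / rate)
            return t, signal

        # 1 ms at a 10 kHz mean rate, Gaussian amplitude distribution
        t, s = pulse_train(1e-3, 1e4, tau_rise=50e-9, tau_decay=500e-9,
                           amp_sampler=lambda g: g.normal(1.0, 0.1))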

  16. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  17. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    PubMed

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value <0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for non-flexible piecewise exponential models). We showed that there were no major differences between methods. However, using flexible piecewise regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.

  18. Verification of the exponential model of body temperature decrease after death in pigs.

    PubMed

    Kaliszan, Michal; Hauser, Roman; Kaliszan, Roman; Wiczling, Paweł; Buczyñski, Janusz; Penkowski, Michal

    2005-09-01

    The authors have conducted a systematic study in pigs to verify the models of post-mortem body temperature decrease currently employed in forensic medicine. Twenty-four hour automatic temperature recordings were performed in four body sites starting 1.25 h after pig killing in an industrial slaughterhouse under typical environmental conditions (19.5-22.5 degrees C). The animals had been randomly selected under a regular manufacturing process. The temperature decrease time plots drawn starting 75 min after death for the eyeball, the orbit soft tissues, the rectum and muscle tissue were found to fit the single-exponential thermodynamic model originally proposed by H. Rainy in 1868. In view of the actual intersubject variability, the addition of a second exponential term to the model was demonstrated to be statistically insignificant. Therefore, the two-exponential model for death time estimation frequently recommended in the forensic medicine literature, even if theoretically substantiated for individual test cases, provides no advantage as regards the reliability of estimation in an actual case. The improvement of the precision of time of death estimation by the reconstruction of an individual curve on the basis of two dead body temperature measurements taken 1 h apart, or taken continuously for a longer time (about 4 h), has also been proved incorrect. It was demonstrated that the reported increase of precision of time of death estimation due to use of a multiexponential model, with individual exponential terms to account for the cooling rate of the specific body sites separately, is artifactual. The results of this study support the use of the eyeball and/or the orbit soft tissues as temperature measuring sites at times shortly after death. A single-exponential model applied to the eyeball cooling has been shown to provide a very precise estimation of the time of death up to approximately 13 h after death. For the period thereafter, a better estimation of the time of death is obtained from temperature data collected from the muscles or the rectum.

  19. Effects of resonant magnetic perturbation on the triggering and the evolution of double-tearing mode

    NASA Astrophysics Data System (ADS)

    Wang, L.; Lin, W. B.; Wang, X. Q.

    2018-02-01

    The effects of resonant magnetic perturbation on the triggering and the evolution of the double-tearing mode are investigated by using nonlinear magnetohydrodynamics simulations in a slab geometry. It is found that the double-tearing mode can be destabilized by boundary magnetic perturbation. Moreover, the mode has three typical development stages before it reaches saturation: the linear stable stage, the linear-growth stage, and the exponential-growth stage. The onset and growth of the double-tearing mode significantly depend on the boundary magnetic perturbations, particularly in the early development stage of the mode. The influences of the magnetic perturbation amplitude on the mode for different separations of the two rational surfaces are also discussed.

  20. How bootstrap can help in forecasting time series with more than one seasonal pattern

    NASA Astrophysics Data System (ADS)

    Cordeiro, Clara; Neves, M. Manuela

    2012-09-01

    The search for the future is an appealing challenge in time series analysis. The diversity of forecasting methodologies is inevitable and still expanding. Exponential smoothing methods are the launch platform for modelling and forecasting in time series analysis. Recently this methodology has been combined with bootstrapping, revealing good performance. The algorithm Boot.EXPOS, which combines exponential smoothing and bootstrap methodologies, has shown promising results for forecasting time series with one seasonal pattern. For series with more than one seasonal pattern, the double seasonal Holt-Winters and related exponential smoothing methods were developed. A new challenge was to combine these seasonal methods with the bootstrap and carry over a resampling scheme similar to that used in the Boot.EXPOS procedure. The performance of such a partnership is illustrated on some well-known data sets available in software.

  1. Investigation of the double exponential in the current-voltage characteristics of silicon solar cells. [proton irradiation effects on ATS 1 cells

    NASA Technical Reports Server (NTRS)

    Wolf, M.; Noel, G. T.; Stirn, R. J.

    1977-01-01

    Difficulties in relating observed current-voltage characteristics of individual silicon solar cells to their physical and material parameters were underscored by the unexpectedly large changes in the current-voltage characteristics telemetered back from solar cells on the ATS-1 spacecraft during their first year in synchronous orbit. Depletion region recombination was studied in cells exhibiting a clear double-exponential dark characteristic by subjecting the cells to proton irradiation. A significant change in the saturation current, an effect included in the Sah, Noyce, Shockley formulation of diode current resulting from recombination in the depletion region, was caused by the introduction of shallow levels in the depletion region by the proton irradiation. This saturation current is not attributable only to diffusion current from outside the depletion region, and only its temperature dependence can clarify its origin. The current associated with the introduction of deep-lying levels did not change significantly in these experiments.
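
    The double-exponential dark characteristic in question is the familiar two-diode sum; a sketch with assumed saturation currents (the values are illustrative only):

        import numpy as np

        def dark_current(V, I01, I02, T=300.0):
            """Two-diode dark I-V curve: a diffusion term (A = 1) plus a
            depletion-region recombination term (A = 2), following the
            Sah-Noyce-Shockley picture discussed above."""
            vt = 8.617e-5 * T                  # thermal voltage kT/q [V]
            return (I01 * (np.exp(V / vt) - 1.0)
                    + I02 * (np.exp(V / (2.0 * vt)) - 1.0))

        V = np.linspace(0.0, 0.6, 7)
        print(dark_current(V, I01=1e-12, I02=1e-8))    # amperes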

  2. 15-digit accuracy calculations of Chandrasekhar's H-function for isotropic scattering by means of the double exponential formula

    NASA Astrophysics Data System (ADS)

    Kawabata, Kiyoshi

    2016-12-01

    This work shows that it is possible to calculate numerical values of the Chandrasekhar H-function for isotropic scattering with at least 15-digit accuracy by making use of the double exponential formula (DE-formula) of Takahashi and Mori (Publ. RIMS, Kyoto Univ. 9:721, 1974) instead of the Gauss-Legendre quadrature employed in the numerical scheme of Kawabata and Limaye (Astrophys. Space Sci. 332:365, 2011), while simultaneously taking precautionary measures to minimize the effects of loss of significant digits, particularly in the cases of near-conservative scattering, and/or of errors in the values returned by compiler-supplied library functions. The results of our calculations are presented for 18 selected values of the single scattering albedo π0 and 22 values of an angular variable μ, the cosine of the zenith angle θ specifying the direction of radiation incident on or emergent from semi-infinite media.
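
    The DE-formula itself is short: substituting x = tanh((pi/2) sinh t) turns the trapezoidal rule into a double-exponentially convergent quadrature. A rough Python sketch (step size and truncation chosen loosely; the paper's 15-digit engineering involves much more care):

        import numpy as np

        def de_quadrature(f, h=0.1, n=40):
            """Tanh-sinh (double exponential) rule for integrals over
            [-1, 1]: the weights decay double-exponentially in t, so a
            plain trapezoidal sum over a modest grid suffices."""
            t = h * np.arange(-n, n + 1)
            u = 0.5 * np.pi * np.sinh(t)
            x = np.tanh(u)
            w = 0.5 * np.pi * np.cosh(t) / np.cosh(u)**2
            return h * np.sum(f(x) * w)

        # Smooth test case: integral of exp over [-1, 1] is e - 1/e
        print(de_quadrature(np.exp), np.exp(1.0) - np.exp(-1.0))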

  3. First storage of ion beams in the Double Electrostatic Ion-Ring Experiment: DESIREE.

    PubMed

    Schmidt, H T; Thomas, R D; Gatchell, M; Rosén, S; Reinhed, P; Löfgren, P; Brännholm, L; Blom, M; Björkhage, M; Bäckström, E; Alexander, J D; Leontein, S; Hanstorp, D; Zettergren, H; Liljeby, L; Källberg, A; Simonsson, A; Hellberg, F; Mannervik, S; Larsson, M; Geppert, W D; Rensfelt, K G; Danared, H; Paál, A; Masuda, M; Halldén, P; Andler, G; Stockett, M H; Chen, T; Källersjö, G; Weimer, J; Hansen, K; Hartman, H; Cederquist, H

    2013-05-01

    We report on the first storage of ion beams in the Double ElectroStatic Ion Ring ExpEriment, DESIREE, at Stockholm University. We have produced beams of atomic carbon anions and small carbon anion molecules (C(n)(-), n = 1, 2, 3, 4) in a sputter ion source. The ion beams were accelerated to 10 keV kinetic energy and stored in an electrostatic ion storage ring enclosed in a vacuum chamber at 13 K. For 10 keV C2 (-) molecular anions we measure the residual-gas limited beam storage lifetime to be 448 s ± 18 s with two independent detector systems. Using the measured storage lifetimes we estimate that the residual gas pressure is in the 10(-14) mbar range. When high current ion beams are injected, the number of stored particles does not follow a single exponential decay law as would be expected for stored particles lost solely due to electron detachment in collision with the residual-gas. Instead, we observe a faster initial decay rate, which we ascribe to the effect of the space charge of the ion beam on the storage capacity.

  4. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-04-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  6. Exponential quantum spreading in a class of kicked rotor systems near high-order resonances

    NASA Astrophysics Data System (ADS)

    Wang, Hailong; Wang, Jiao; Guarneri, Italo; Casati, Giulio; Gong, Jiangbin

    2013-11-01

    Long-lasting exponential quantum spreading was recently found in a simple but very rich dynamical model, namely, an on-resonance double-kicked rotor model [J. Wang, I. Guarneri, G. Casati, and J. B. Gong, Phys. Rev. Lett. 107, 234104 (2011)]. The underlying mechanism, unrelated to the chaotic motion in the classical limit but resting on quasi-integrable motion in a pseudoclassical limit, is identified for one special case. By presenting a detailed study of the same model, this work offers a framework to explain long-lasting exponential quantum spreading under much more general conditions. In particular, we adopt the so-called “spinor” representation to treat the kicked-rotor dynamics under high-order resonance conditions and then exploit the Born-Oppenheimer approximation to understand the dynamical evolution. It is found that the existence of a flat band (or an effectively flat band) is one important feature behind why and how the exponential dynamics emerges. It is also found that a quantitative prediction of the exponential spreading rate based on an interesting and simple pseudoclassical map may be inaccurate. In addition to general interests regarding the question of how exponential behavior in quantum systems may persist for a long time scale, our results should motivate further studies toward a better understanding of high-order resonance behavior in δ-kicked quantum systems.

  7. A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.

    DTIC Science & Technology

    1981-06-01

    Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when… the assumption of constant or increasing failure rate seemed to be incorrect. However, the design of this electronic equipment indicated that…

  8. Rate of de novo mutations and the importance of father's age to disease risk.

    PubMed

    Kong, Augustine; Frigge, Michael L; Masson, Gisli; Besenbacher, Soren; Sulem, Patrick; Magnusson, Gisli; Gudjonsson, Sigurjon A; Sigurdsson, Asgeir; Jonasdottir, Aslaug; Jonasdottir, Adalbjorg; Wong, Wendy S W; Sigurdsson, Gunnar; Walters, G Bragi; Steinberg, Stacy; Helgason, Hannes; Thorleifsson, Gudmar; Gudbjartsson, Daniel F; Helgason, Agnar; Magnusson, Olafur Th; Thorsteinsdottir, Unnur; Stefansson, Kari

    2012-08-23

    Mutations generate sequence diversity and provide a substrate for selection. The rate of de novo mutations is therefore of major importance to evolution. Here we conduct a study of genome-wide mutation rates by sequencing the entire genomes of 78 Icelandic parent-offspring trios at high coverage. We show that in our samples, with an average father's age of 29.7 years, the average de novo mutation rate is 1.20 × 10(-8) per nucleotide per generation. Most notably, the diversity in the mutation rate of single nucleotide polymorphisms is dominated by the age of the father at conception of the child. The effect is an increase of about two mutations per year. An exponential model estimates paternal mutations doubling every 16.5 years. After accounting for random Poisson variation, father's age is estimated to explain nearly all of the remaining variation in the de novo mutation counts. These observations shed light on the importance of the father's age on the risk of diseases such as schizophrenia and autism.

  9. An efficient quantum algorithm for spectral estimation

    NASA Astrophysics Data System (ADS)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
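
    For orientation, the classical matrix pencil method being sped up here is itself compact; a sketch of the SVD-truncated variant, assuming the number of components M is known:

        import numpy as np

        def matrix_pencil(y, M, L=None):
            """Poles z_i of y[k] = sum_i a_i * z_i**k. For sample
            spacing dt: frequency = angle(z)/(2*pi*dt) and damping
            factor = log(abs(z))/dt."""
            N = len(y)
            L = L or N // 2
            # Hankel data matrix: row i holds y[i], ..., y[i+L]
            H = np.array([y[i:i + L + 1] for i in range(N - L)])
            # rank-M truncation of the row space suppresses noise
            _, _, Vh = np.linalg.svd(H, full_matrices=False)
            V = Vh[:M].T
            # shift invariance of the signal subspace gives the poles
            return np.linalg.eigvals(np.linalg.pinv(V[:-1]) @ V[1:])

        # One damped sinusoid = a conjugate pole pair in a real signal
        k = np.arange(64)
        y = (2.0 * np.exp((-0.05 + 0.3j) * k)).real
        print(matrix_pencil(y, M=2))        # ~ exp(-0.05 +/- 0.3j)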

  10. Beyond Newton's law of cooling - estimation of time since death

    NASA Astrophysics Data System (ADS)

    Leinbach, Carl

    2011-09-01

    The estimate of the time since death and, thus, the time of death is strictly that, an estimate. However, the time of death can be an important piece of information in some coroner's cases, especially those that involve criminal or insurance investigations. It has been known almost from the beginning of time that bodies cool after the internal mechanisms such as circulation of the blood stop. A first attempt to link this phenomenon to the determination of the time of death used a crude linear relationship. Towards the end of the nineteenth century, Newton's law of cooling using body temperature data obtained by the coroner was used to make a more accurate estimate. While based on scientific principles and resulting in a better estimate, Newton's law does not really describe the cooling of a non-homogeneous human body. This article will discuss a more accurate model of the cooling process based on the theoretical work of Marshall and Hoare and the laboratory-based statistical work of Claus Henssge. Using DERIVE®6.10 and the statistical work of Henssge, the double exponential cooling formula developed by Marshall and Hoare will be explored. The end result is a tool that can be used in the field by coroner's scene investigators to determine a 95% confidence interval for the time since death and, thus, the time of death.

  11. An improved parameter estimation and comparison for soft tissue constitutive models containing an exponential function.

    PubMed

    Aggarwal, Ankush

    2017-08-01

    Motivated by the well-known result that the stiffness of soft tissue is proportional to the stress, many of the constitutive laws for soft tissues contain an exponential function. In this work, we analyze properties of the exponential function and how it affects the estimation and comparison of elastic parameters for soft tissues. In particular, we find that as a consequence of the exponential function there are lines of high covariance in the elastic parameter space. As a result, one can have widely varying mechanical parameters defining the tissue stiffness but similar effective stress-strain responses. Drawing from elementary algebra, we propose simple changes in the norm and the parameter space, which significantly improve the convergence of parameter estimation and robustness in the presence of noise. More importantly, we demonstrate that these changes improve the conditioning of the problem and provide a more robust solution in the case of heterogeneous material by reducing the chances of getting trapped in a local minimum. Based upon the new insight, we also propose a transformed parameter space which will allow for rational parameter comparison and avoid misleading conclusions regarding soft tissue mechanics.

  12. Investigation of the double exponential in the current-voltage characteristics of silicon solar cells

    NASA Technical Reports Server (NTRS)

    Wolf, M.; Noel, G. T.; Stirn, R. J.

    1976-01-01

    A theoretical analysis is presented of certain peculiarities of the current-voltage characteristics of silicon solar cells, involving high values of the empirical constant A in the diode equation for a p-n junction. An attempt was made in a laboratory experiment to demonstrate that the saturation current associated with the exponential term exp(qV/A2kT) of the I-V characteristic, with A2 roughly equal to 2, originates in the space charge region and that it can be increased, as observed on ATS-1 cells, by the introduction of additional defects through low-energy proton irradiation. It was shown that the proton irradiation introduces defects into the space charge region which give rise to a recombination current from this region, although the I-V characteristic is, in this case, dominated by an exponential term with A = 1.

  13. Recognizing Physisorption and Chemisorption in Carbon Nanotubes Gas Sensors by Double Exponential Fitting of the Response.

    PubMed

    Calvi, Andrea; Ferrari, Alberto; Sbuelz, Luca; Goldoni, Andrea; Modesti, Silvio

    2016-05-19

    Multi-walled carbon nanotubes (CNTs) have been grown in situ on a SiO2 substrate and used as gas sensors. For this purpose, the voltage response of the CNTs as a function of time has been used to detect H2 and CO2 at various concentrations by supplying a constant current to the system. The analysis of both the adsorption and desorption curves has revealed two different exponential behaviours for each curve. The study of the characteristic times, obtained from fitting the data, has allowed us to identify chemisorption and physisorption processes on the CNTs separately.
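
    The kind of two-exponential fit described is routine with scipy; a sketch on synthetic data (amplitudes and time constants are invented), where the main practical point is an initial guess that separates the fast and slow scales, since the two components trade off strongly:

        import numpy as np
        from scipy.optimize import curve_fit

        def double_exp(t, a1, tau1, a2, tau2, c):
            """Fast + slow exponential relaxation with an offset."""
            return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + c

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 50.0, 500)
        y = (double_exp(t, 1.0, 2.0, 0.5, 20.0, 0.1)
             + 0.01 * rng.standard_normal(t.size))

        p0 = (1.0, 1.0, 1.0, 10.0, 0.0)   # fast/slow guesses kept apart
        popt, pcov = curve_fit(double_exp, t, y, p0=p0)
        print(popt)                       # ~ (1.0, 2.0, 0.5, 20.0, 0.1)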

  14. Joint analysis of air pollution in street canyons in St. Petersburg and Copenhagen

    NASA Astrophysics Data System (ADS)

    Genikhovich, E. L.; Ziv, A. D.; Iakovleva, E. A.; Palmgren, F.; Berkowicz, R.

    The bi-annual data set of concentrations of several traffic-related air pollutants, measured continuously in street canyons in St. Petersburg and Copenhagen, is analysed jointly using different statistical techniques. Annual mean concentrations of NO2, NOx and, especially, benzene are found to be systematically higher in St. Petersburg than in Copenhagen, but for ozone the situation is the opposite. In both cities, probability distribution functions (PDFs) of concentrations and of their daily or weekly extrema are fitted with the Weibull and double exponential distributions, respectively. Sample estimates of bi-variate distributions of concentrations, concentration roses, and probabilities of the concentration of one pollutant being extreme given that another one reaches its extremum are presented in this paper, as well as auto- and co-spectra. It is demonstrated that there is a reasonably high correlation between seasonally averaged concentrations of pollutants in St. Petersburg and Copenhagen.

  15. A Unified Matrix Polynomial Approach to Modal Identification

    NASA Astrophysics Data System (ADS)

    Allemang, R. J.; Brown, D. L.

    1998-04-01

    One important current focus of modal identification is a reformulation of modal parameter estimation algorithms into a single, consistent mathematical formulation with a corresponding set of definitions and unifying concepts. In particular, a matrix polynomial approach is used to unify the presentation with respect to current algorithms such as the least-squares complex exponential (LSCE), the polyreference time domain (PTD), Ibrahim time domain (ITD), eigensystem realization algorithm (ERA), rational fraction polynomial (RFP), polyreference frequency domain (PFD) and the complex mode indication function (CMIF) methods. Using this unified matrix polynomial approach (UMPA) allows a discussion of the similarities and differences of the commonly used methods. The use of least squares (LS), total least squares (TLS), double least squares (DLS) and singular value decomposition (SVD) methods is discussed in order to take advantage of redundant measurement data. Eigenvalue and SVD transformation methods are utilized to reduce the effective size of the resulting eigenvalue-eigenvector problem as well.

  16. Isotope effect on blob-statistics in gyrofluid simulations of scrape-off layer turbulence

    NASA Astrophysics Data System (ADS)

    Meyer, O. H. H.; Kendl, A.

    2017-12-01

    In this contribution we apply a recently established stochastic model for scrape-off layer fluctuations to long time series obtained from gyrofluid simulations of fusion edge plasma turbulence. Characteristic parameters are estimated for different fusion-relevant isotopic compositions (protium, deuterium, tritium and singly charged helium) by means of conditional averaging. It is shown that large-amplitude fluctuations associated with radially propagating filaments in the scrape-off layer feature double-exponential waveforms. We find increased pulse duration and longer waiting times between peaks for heavier ions, while the amplitudes are similar. The associated radial blob velocity is shown to be reduced for heavier ions. A parabolic relation between the skewness and kurtosis of the density fluctuations appears to be present. Improved particle confinement for heavier plasmas, in terms of a reduced mean value close to the outermost radial boundary, and the corresponding blob characteristics are presented.

  17. Critical care and the World Wide Web.

    PubMed

    Varon, J; Marik, P E

    1999-07-01

    The Internet was created in 1969, when the Advanced Research Projects Agency of the United States Department of Defense fired up an experimental network consisting of only four computers. Over the past five years there has been an exponential explosion in the number of computers added to this network. It is estimated that Internet traffic doubles every 100 days, with more than 100 million people worldwide now on-line. The Internet is so vast that practically every aspect of human interest is represented in some form or fashion. From recreation to applied science and technology, and from Critical Care Medicine case scenarios to digitized radiology images and pathology specimens, the Internet has become increasingly useful for critical care practitioners. To date, no resource is better equipped to assist critical care providers in many of their daily tasks. This article presents some of the historical developments of the Internet as well as common applications that are useful for critical care practitioners.

  18. Estimation for coefficient of variation of an extension of the exponential distribution under type-II censoring scheme

    NASA Astrophysics Data System (ADS)

    Bakoban, Rana A.

    2017-08-01

    The coefficient of variation [CV] has several applications in applied statistics. In this paper, we adopt Bayesian and non-Bayesian approaches for the estimation of the CV under type-II censored data from an extension of the exponential distribution [EED]. Point and interval estimates of the CV are obtained by both maximum likelihood and parametric bootstrap techniques. A Bayesian approach with the help of the MCMC method is also presented. A real data set is presented and analyzed, and the results are used to assess the theoretical findings.
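
    As a toy illustration of the bootstrap side of this, a percentile interval for the CV of a plain uncensored exponential sample (the paper's EED and type-II censoring add machinery not reproduced here):

        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.exponential(scale=2.0, size=50)      # observed sample

        def cv(sample):
            return sample.std(ddof=1) / sample.mean()

        # Percentile bootstrap interval for the coefficient of variation
        boot = np.array([cv(rng.choice(x, size=x.size, replace=True))
                         for _ in range(5000)])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"CV = {cv(x):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")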

  19. Monte Carlo calculations of PET coincidence timing: single and double-ended readout

    PubMed Central

    Derenzo, Stephen E; Choong, Woon-Seng; Moses, William W

    2016-01-01

    We present Monte Carlo computational methods for estimating the coincidence resolving time (CRT) of scintillator detector pairs in positron emission tomography (PET) and present results for Lu2SiO5 : Ce (LSO), LaBr3 : Ce, and a hypothetical ultra-fast scintillator with a 1 ns decay time. The calculations were applied to both single-ended and double-ended photodetector readout with constant-fraction triggering. They explicitly include (1) the intrinsic scintillator properties (luminosity, rise time, decay time, and index of refraction), (2) the exponentially distributed depths of interaction, (3) the optical photon transport efficiency, delay, and time dispersion, (4) the photodetector properties (fill factor, quantum efficiency, transit time jitter, and single electron response), and (5) the determination of the constant fraction trigger level that minimizes the CRT. The calculations for single-ended readout include the delayed photons from the opposite reflective surface. The calculations for double-ended readout include (1) the simple average of the two photodetector trigger times, (2) more accurate estimators of the annihilation photon entrance time using the pulse height ratio to estimate the depth of interaction and correct for annihilation photon, optical photon, and trigger delays, and (3) the statistical lower bound for interactions at the center of the crystal. For time-of-flight (TOF) PET we combine stopping power and TOF information in a figure of merit equal to the sensitivity gain relative to whole-body non-TOF PET using LSO. For LSO crystals 3 mm × 3 mm × 30 mm, a decay time of 37 ns, a total photoelectron count of 4000, and a photodetector with 0.2 ns full-width at half-maximum (fwhm) timing jitter, single-ended readout has a CRT of 0.16 ns fwhm and double-ended readout has a CRT of 0.111 ns fwhm. For LaBr3 : Ce crystals 3 mm × 3 mm × 30 mm, a rise time of 0.2 ns, a decay time of 18 ns, and a total of 7600 photoelectrons the CRT numbers are 0.14 ns and 0.072 ns fwhm, respectively. For a hypothetical ultra-fast scintillator 3 mm × 3 mm × 30 mm, a decay time of 1 ns, and a total of 4000 photoelectrons, the CRT numbers are 0.070 and 0.020 ns fwhm, respectively. Over a range of examples, values for double-ended readout are about 10% larger than the statistical lower bound. PMID:26350162
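
    A heavily stripped-down version of such a calculation keeps only items (1) and (4) above, i.e. bi-exponential scintillation statistics and photodetector jitter, and ignores optical transport and depth of interaction, so its numbers will not match the paper's; the sum of a rise-time and a decay-time exponential samples the bi-exponential pulse shape exactly:

        import numpy as np

        rng = np.random.default_rng(0)

        def trigger_times(n_events, n_pe, tau_rise, tau_decay,
                          jitter_sigma, frac=0.01):
            """Per event: photoelectron arrival times from a
            bi-exponential pulse shape plus Gaussian photodetector
            jitter; trigger on a fixed fraction of photoelectrons
            (a crude constant-fraction discriminator)."""
            shape = (n_events, n_pe)
            t = (rng.exponential(tau_decay, shape)
                 + rng.exponential(tau_rise, shape)
                 + rng.normal(0.0, jitter_sigma, shape))
            k = max(1, int(frac * n_pe))
            return np.partition(t, k, axis=1)[:, k]

        # LSO-like: 37 ns decay, 4000 photoelectrons, 0.2 ns fwhm jitter
        args = (500, 4000, 0.1, 37.0, 0.2 / 2.355)
        delta = trigger_times(*args) - trigger_times(*args)
        print(f"CRT ~ {2.355 * delta.std():.3f} ns fwhm")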

  20. Performance and state-space analyses of systems using Petri nets

    NASA Technical Reports Server (NTRS)

    Watson, James Francis, III

    1992-01-01

    The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PN's), a highly graphical modeling methodology for Discrete Event Dynamic Systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PN's is the tendency for the state space to grow rapidly (exponential complexity) compared to increases in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PN's is introduced. The problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PN's is discussed. An algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PN's remains applicable. Comparison to results from entropy theory shows the transition performance is close to the theoretic optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size. The state-space size estimation theory provides insight and algorithms for evaluating this trade-off.

  1. Controllability of a multichannel system

    NASA Astrophysics Data System (ADS)

    Ivanov, Sergei A.; Wang, Jun Min

    2018-02-01

    We consider the system consisting of K coupled acoustic channels with different sound velocities cj. The channels interact at any point via the pressure and its time derivatives. Using the moment approach and the theory of exponential families with vector coefficients, we establish two controllability results: the system is exactly controllable if (i) the control uj in the jth channel acts longer than the double travel time of a wave from the start to the end of the jth channel; or (ii) all controls uj act for at least the maximal double travel time.

  2. Fundamentals of Tribology; Proceedings of the International Conference on the Fundamentals of Tribology held at The Massachusetts Institute of Technology, Cambridge, MA

    DTIC Science & Technology

    1978-06-01

    (HDL). The locus of electrical centers of hydrated ions in contact with the electrode surface is known as the outer Helmholtz plane (OHP), while the… and then a more gradual exponential decay in the diffuse double layer. The difference in potential between the OHP and the bulk electrolyte, i.e., the contribution of the diffuse double layer, is called the electrokinetic or zeta potential. [Remainder is residue of a schematic of the IHP, OHP and bulk electrolyte at the electrode interface.]

  3. Double-exponential decay of orientational correlations in semiflexible polyelectrolytes.

    PubMed

    Bačová, P; Košovan, P; Uhlík, F; Kuldová, J; Limpouchová, Z; Procházka, K

    2012-06-01

    In this paper we revisited the problem of the persistence length of polyelectrolytes. We performed a series of Molecular Dynamics simulations using the Debye-Hückel approximation for electrostatics to test several equations which go beyond the classical description of Odijk, Skolnick and Fixman (OSF). The data confirm earlier observations that in the limit of large contour separations the decay of orientational correlations can be described by a single-exponential function and the decay length can be described by the OSF relation. However, at short contour separations the behaviour is more complex. Recent equations which introduce more complicated expressions and an additional length scale could describe the results very well on both the short and the long length scale. The equation of Manghi and Netz, when used without adjustable parameters, could capture the qualitative trend but deviated in a quantitative comparison. Better quantitative agreement within the estimated error could be obtained using three equations with one adjustable parameter: 1) the equation of Manghi and Netz; 2) the equation proposed by us in this paper; 3) the equation proposed by Cannavacciuolo and Pedersen. Two characteristic length scales can be identified in the data: the intrinsic or bare persistence length and the electrostatic persistence length. All three equations use a single parameter to describe a smooth crossover from the short-range behaviour dominated by the intrinsic stiffness of the chain to the long-range OSF-like behaviour.

  4. Fast and accurate fitting and filtering of noisy exponentials in Legendre space.

    PubMed

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters.
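
    The flavor of the method can be sketched with numpy's Legendre module: project the noisy trace onto a low-degree Legendre basis, retrieve parameters by matching coefficients there, and denoise by evaluating the truncated expansion (the paper's estimator is more refined; degree and values below are illustrative):

        import numpy as np
        from numpy.polynomial import legendre as leg
        from scipy.optimize import least_squares

        rng = np.random.default_rng(2)
        T = 5.0
        t = np.linspace(0.0, T, 400)
        x = 2.0 * t / T - 1.0                    # map onto [-1, 1]
        y = 3.0 * np.exp(-1.5 * t) + 0.05 * rng.standard_normal(t.size)

        deg = 12                                 # low-dimensional subspace
        c_data = leg.legfit(x, y, deg)           # data -> Legendre coeffs

        def coeff_residuals(p):
            a, k = p                             # model vs data, in coeff space
            return leg.legfit(x, a * np.exp(-k * t), deg) - c_data

        fit = least_squares(coeff_residuals, x0=(1.0, 1.0))
        print(fit.x)                             # ~ (3.0, 1.5)

        y_smooth = leg.legval(x, c_data)         # lowpass-style filtering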

  5. Mathematical methods in biological dosimetry: the 1996 Iranian accident.

    PubMed

    Voisin, P; Assaei, R G; Heidary, A; Varzegar, R; Zakeri, F; Durand, V; Sorokine-Durm, I

    2000-11-01

    To report 18 months of cytogenetic follow-up for an Iranian worker accidentally overexposed to 192Ir, the mathematical extrapolation and comparison with clinical data. Unstable chromosome aberrations were measured using conventional cytogenetic tests by French and Iranian biological dosimetry laboratories on five occasions after the exposure. The decrease in dicentrics over time was analysed mathematically. In addition, Dolphin and Qdr extrapolations were applied to the data to check the exposure estimates. FISH determination of translocation yields was performed twice by the French laboratory and the results compared with the Dolphin and Qdr corrected values. Dose estimates based on dicentrics decreased from 3.1 +/- 0.4 Gy at 5 days after the accident to 0.8 +/- 0.2 Gy at 529 days. This could be fitted by double-exponential regression with an inflexion point between rapid and slow decrease of dicentrics after about 40 days. Dose estimates of 3.4 +/- 0.4 Gy for the Qdr model and 3.6 +/- 0.5 Gy for the Dolphin model were calculated during the post-exposure period and were remarkably stable. FISH translocation data at 26 and 61 days appeared consistent with the Dolphin and Qdr estimates. Dose correction by the Qdr and Dolphin models and translocation scoring appeared consistent with the clinical data and provided better information about the radiation injury than did crude estimates from dicentric scoring alone. Estimation by the Dolphin model of the irradiated fraction of the body seemed unreliable: it correlated better with the fraction of originally irradiated lymphocytes.

  6. A Nonlinear Least Squares Approach to Time of Death Estimation Via Body Cooling.

    PubMed

    Rodrigo, Marianito R

    2016-01-01

    The problem of time of death (TOD) estimation by body cooling is revisited by proposing a nonlinear least squares approach that takes as input a series of temperature readings only. Using a reformulation of the Marshall-Hoare double exponential formula and a technique for reducing the dimension of the state space, an error function that depends on the two cooling rates is constructed, with the aim of minimizing this function. Standard nonlinear optimization methods that are used to minimize the bivariate error function require an initial guess for these unknown rates. Hence, a systematic procedure based on the given temperature data is also proposed to determine an initial estimate for the rates. Then, an explicit formula for the TOD is given. Results of numerical simulations using both theoretical and experimental data are presented, both yielding reasonable estimates. The proposed procedure does not require knowledge of the temperature at death nor the body mass. In fact, the method allows the estimation of the temperature at death once the cooling rates and the TOD have been calculated. The procedure requires at least three temperature readings, although more measured readings could improve the estimates. With the aid of computerized recording and thermocouple detectors, temperature readings spaced 10-15 min apart, for example, can be taken. The formulas can be straightforwardly programmed and installed on a hand-held device for field use. © 2015 American Academy of Forensic Sciences.
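
    A minimal sketch of the fitting idea, simplified relative to the paper: here the ambient temperature and the conventional 37.2 deg C temperature at death are assumed known (the paper's method needs neither), and the readings are synthetic:

        import numpy as np
        from scipy.optimize import least_squares

        T_AMB, T0 = 18.0, 37.2        # assumed ambient / at-death temps

        def marshall_hoare(t, k, p):
            """Normalized double exponential cooling curve: equals 1 at
            t = 0 with zero initial slope (the post-mortem plateau),
            then Newtonian decay at rate k; requires p != k."""
            return (p * np.exp(-k * t) - k * np.exp(-p * t)) / (p - k)

        def residuals(params, s, T_obs):
            d, k, p = params          # d = time since death at 1st reading
            return (T_AMB + (T0 - T_AMB) * marshall_hoare(d + s, k, p)
                    - T_obs)

        s = np.array([0.0, 0.25, 0.5, 0.75, 1.0])          # hours
        T_obs = np.array([33.8, 33.4, 33.0, 32.7, 32.3])   # deg C, synthetic

        fit = least_squares(residuals, x0=(2.0, 0.1, 0.4),
                            bounds=([0.0, 0.01, 0.02], [24.0, 1.0, 5.0]),
                            args=(s, T_obs))
        d_hat, k_hat, p_hat = fit.x
        print(f"death estimated {d_hat:.2f} h before the first reading")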

  7. Is it growing exponentially fast? -- Impact of assuming exponential growth for characterizing and forecasting epidemics with initial near-exponential growth dynamics.

    PubMed

    Chowell, Gerardo; Viboud, Cécile

    2016-10-01

    The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing models that capture the baseline transmission characteristics in order to generate reliable epidemic forecasts. Improved models for epidemic forecasting could be achieved by identifying signature features of epidemic growth, which could inform the design of models of disease spread and reveal important characteristics of the transmission process. In particular, it is often taken for granted that the early growth phase of different growth processes in nature follows exponential growth dynamics. In the context of infectious disease spread, this assumption is often convenient for describing a transmission process with mass-action kinetics using differential equations and for generating analytic expressions and estimates of the reproduction number. In this article, we carry out a simulation study to illustrate the impact of incorrectly assuming an exponential-growth model to characterize the early phase (e.g., 3-5 disease generation intervals) of an infectious disease outbreak that follows near-exponential growth dynamics. Specifically, we assess the impact on: 1) goodness of fit, 2) bias in the growth parameter, and 3) short-term epidemic forecasts. Designing transmission models and statistical approaches that more flexibly capture the profile of epidemic growth could lead to enhanced model fit, improved estimates of key transmission parameters, and more realistic epidemic forecasts.
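
    A hedged sketch of the comparison at issue: fit both an exponential model and a flexible "generalized growth" form dC/dt = r*C^p (p = 1 recovers exponential growth; p < 1 gives sub-exponential growth) to early cumulative case counts. The data are synthetic, and the generalized-growth form is one common way to relax the exponential assumption, not necessarily the authors' exact formulation.

        import numpy as np
        from scipy.optimize import curve_fit

        def exp_growth(t, r, c0):
            return c0 * np.exp(r * t)

        def gen_growth(t, r, p, c0):
            # closed-form solution of dC/dt = r * C**p for 0 <= p < 1
            return ((1.0 - p) * r * t + c0 ** (1.0 - p)) ** (1.0 / (1.0 - p))

        t = np.arange(15.0)                        # roughly 3-5 generation intervals
        cases = gen_growth(t, 0.8, 0.7, 5.0)       # synthetic sub-exponential outbreak

        popt_e, _ = curve_fit(exp_growth, t, cases, p0=(0.3, 5.0))
        popt_g, _ = curve_fit(gen_growth, t, cases, p0=(0.5, 0.9, 5.0),
                              bounds=([0.0, 0.0, 1.0], [5.0, 0.99, 100.0]))
        for name, f, popt in (("exponential", exp_growth, popt_e),
                              ("generalized", gen_growth, popt_g)):
            sse = np.sum((f(t, *popt) - cases) ** 2)
            print(name, np.round(popt, 3), f"SSE = {sse:.1f}")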

  8. Exponential Increase in Relative Biological Effectiveness Along Distal Edge of a Proton Bragg Peak as Measured by Deoxyribonucleic Acid Double-Strand Breaks

    PubMed Central

    Cuaron, John J.; Chang, Chang; Lovelock, Michael; Higginson, Daniel S.; Mah, Dennis; Cahlon, Oren; Powell, Simon

    2016-01-01

    Purpose: To quantify the relative biological effectiveness (RBE) of the distal edge of the proton Bragg peak, using an in vitro assay of DNA double-strand breaks (DSBs). Methods and Materials: U2OS cells were irradiated within the plateau of a spread-out Bragg peak and at each millimeter position along the distal edge using a custom slide holder, allowing for simultaneous measurement of physical dose. A reference radiation signal was generated using photons. The DNA DSBs at 3 hours (to assess for early damage) and at 24 hours (to assess for residual damage and repair) after irradiation were measured using the γH2AX assay and quantified via flow cytometry. Results were confirmed with clonogenic survival assays. A detailed map of the RBE as a function of depth along the Bragg peak was generated using γH2AX measurements as a biological endpoint. Results: At 3 hours after irradiation, DNA DSBs were higher with protons at every point along the distal edge compared with samples irradiated with photons to similar doses. This effect was even more pronounced after 24 hours, indicating that the impact of DNA repair is less after proton irradiation relative to photons. The RBE demonstrated an exponential increase as a function of depth and was measured to be as high as 4.0 after 3 hours and as high as 6.0 after 24 hours. When the RBE-corrected dose was plotted as a function of depth, the peak effective dose was extended 2-3 mm beyond what would be expected with physical measurement. Conclusions: We generated a highly comprehensive map of the RBE of the distal edge of the Bragg peak, using a direct assay of DNA DSBs in vitro. Our data show that the RBE of the distal edge increases with depth and is significantly higher than previously reported estimates. PMID:27084629

  9. Exponential Increase in Relative Biological Effectiveness Along Distal Edge of a Proton Bragg Peak as Measured by Deoxyribonucleic Acid Double-Strand Breaks.

    PubMed

    Cuaron, John J; Chang, Chang; Lovelock, Michael; Higginson, Daniel S; Mah, Dennis; Cahlon, Oren; Powell, Simon

    2016-05-01

    To quantify the relative biological effectiveness (RBE) of the distal edge of the proton Bragg peak, using an in vitro assay of DNA double-strand breaks (DSBs). U2OS cells were irradiated within the plateau of a spread-out Bragg peak and at each millimeter position along the distal edge using a custom slide holder, allowing for simultaneous measurement of physical dose. A reference radiation signal was generated using photons. The DNA DSBs at 3 hours (to assess for early damage) and at 24 hours (to assess for residual damage and repair) after irradiation were measured using the γH2AX assay and quantified via flow cytometry. Results were confirmed with clonogenic survival assays. A detailed map of the RBE as a function of depth along the Bragg peak was generated using γH2AX measurements as a biological endpoint. At 3 hours after irradiation, DNA DSBs were higher with protons at every point along the distal edge compared with samples irradiated with photons to similar doses. This effect was even more pronounced after 24 hours, indicating that the impact of DNA repair is less after proton irradiation relative to photons. The RBE demonstrated an exponential increase as a function of depth and was measured to be as high as 4.0 after 3 hours and as high as 6.0 after 24 hours. When the RBE-corrected dose was plotted as a function of depth, the peak effective dose was extended 2-3 mm beyond what would be expected with physical measurement. We generated a highly comprehensive map of the RBE of the distal edge of the Bragg peak, using a direct assay of DNA DSBs in vitro. Our data show that the RBE of the distal edge increases with depth and is significantly higher than previously reported estimates. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Smooth centile curves for skew and kurtotic data modelled using the Box-Cox power exponential distribution.

    PubMed

    Rigby, Robert A; Stasinopoulos, D Mikis

    2004-10-15

    The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by the power transformation Y^(nu) having a shifted and scaled (truncated) standard power exponential distribution with parameter tau. The distribution has four parameters and is denoted BCPE(mu, sigma, nu, tau). The parameters mu, sigma, nu and tau may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood with respect to mu, sigma, nu and tau, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation generalizes the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution, and is named here the LMSP method of centile estimation. The LMSP method is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.
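
    Since the abstract stresses that BCPE centiles are easy to calculate, here is a hedged sketch for fixed parameter values, using scipy.stats.gennorm as the power exponential; it assumes nu != 0 and neglects the truncation adjustment, which is negligible for typical parameter values. The parameter values themselves are invented.

        import numpy as np
        from scipy.special import gamma
        from scipy.stats import gennorm

        def bcpe_centile(p, mu, sigma, nu, tau):
            # standard power exponential quantile, rescaled to unit variance
            c = np.sqrt(gamma(1.0 / tau) / gamma(3.0 / tau))
            z = gennorm.ppf(p, tau) * c
            # invert the shifted/scaled Box-Cox transformation (nu != 0 assumed)
            return mu * (1.0 + sigma * nu * z) ** (1.0 / nu)

        # hypothetical BMI parameters at one age: 3rd, 50th and 97th centiles
        for p in (0.03, 0.50, 0.97):
            print(p, round(bcpe_centile(p, 22.0, 0.12, -1.5, 1.8), 2))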

  11. Wildfires in Siberian Mountain Forest

    NASA Astrophysics Data System (ADS)

    Kharuk, V.; Ponomarev, E. I.; Antamoshkina, O.

    2017-12-01

    The annual burned area in Russia is estimated at 0.55 to 20 Mha, with >70% occurring in Siberia. We analyzed the distribution of Siberian wildfires with respect to elevation, slope steepness and slope exposure; wildfire temporal dynamics and latitudinal range were also analyzed. We used daily thermal anomalies derived from the NOAA/AVHRR and Terra/MODIS satellites (1990-2016). Fire return intervals (FRI) were calculated based on dendrochronological analysis of samples taken from trees with burn marks. The spatial distribution of wildfires depends on topographic features: relative burned area increases with elevation up to ca. 1100 m and decreases above that. Wildfire frequency decreases exponentially along the lowland-to-highland transition. Burned area increases with slope steepness up to 5-10°. Fire return intervals on south-facing slopes are about 30% longer than on north-facing slopes. Wildfire re-occurrence decreases exponentially: 90% of burns were caused by single fires, 8.5% by double fires, 1% of the territory burned three times, and about 0.05% burned four times (observation period: 75 yr). Wildfire area and number, as well as FRI, also depend on latitude: relative burned area increases exponentially in the northward direction, whereas relative fire number decreases exponentially. FRI increases northward: from 80 years at 62°N to 200 years at the Arctic Circle, and to 300 years at the northern limit of closed forests (71+°N). Fire frequency, fire danger period and FRI are strongly correlated with incoming solar radiation (r = 0.81-0.95). In the 21st century, a positive trend in wildfire number and area has been observed in mountain areas throughout Siberia. Burned area and number of fires in Siberia have increased significantly since the 1990s (R2 = 0.47 and R2 = 0.69, respectively), and this increase is correlated with increases in air temperature and climate aridity. However, wildfires are essential for supporting the reforestation of fire-resistant species (e.g., Larix sibirica, L. dahurica and Pinus silvestris) and their competition with non-fire-resistant species. This work was supported by the Russian Foundation for Basic Research, the Government of the Krasnoyarsk krai, the Krasnoyarsk Fund for Support of Scientific and Technological Activities (N 17-41-240475)

  12. Computerized Method for the Generation of Molecular Transmittance Functions in the Infrared Region.

    DTIC Science & Technology

    1979-12-31

    exponent of the double exponential function were 'bumpy' for some cases. Since the nature of the transmittance does not predict this behavior, we... [The remainder of this excerpt is garbled program-listing residue; the recoverable text states that tau is recomputed for the original data using the piecewise-analytical transmission function, with standard deviations between the actual tau...]

  13. Skills for the Future.

    ERIC Educational Resources Information Center

    Smith, Gary R.

    This publication contains two miniunits to help students in grades 7-12 build skills for the future. The exercises can also be adapted for use in grades 4-6. Each of the miniunits contains several exercises to build specific skills. Miniunit One, "The Arithmetic of Growth," deals with two concepts--exponential growth and doubling time. These two…

  14. Global exponential stability of octonion-valued neural networks with leakage delay and mixed delays.

    PubMed

    Popa, Călin-Adrian

    2018-06-08

    This paper discusses octonion-valued neural networks (OVNNs) with leakage delay, time-varying delays, and distributed delays, for which the states, weights, and activation functions belong to the normed division algebra of octonions. The octonion algebra is a nonassociative and noncommutative generalization of the complex and quaternion algebras, but does not belong to the category of Clifford algebras, which are associative. In order to avoid the nonassociativity of the octonion algebra and also the noncommutativity of the quaternion algebra, the Cayley-Dickson construction is used to decompose the OVNNs into 4 complex-valued systems. By using appropriate Lyapunov-Krasovskii functionals, with double and triple integral terms, the free weighting matrix method, and simple and double integral Jensen inequalities, delay-dependent criteria are established for the exponential stability of the considered OVNNs. The criteria are given in terms of complex-valued linear matrix inequalities, for two types of Lipschitz conditions which are assumed to be satisfied by the octonion-valued activation functions. Finally, two numerical examples illustrate the feasibility, effectiveness, and correctness of the theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.

    PubMed

    Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C

    2017-06-01

    The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution are illustrated with an uncensored data set, and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistic show that the EETE distribution provides a more reasonable fit than the other competing distributions.

  16. Ray-theory approach to electrical-double-layer interactions.

    PubMed

    Schnitzer, Ory

    2015-02-01

    A novel approach is presented for analyzing the double-layer interaction force between charged particles in electrolyte solution, in the limit where the Debye length is small compared with both interparticle separation and particle size. The method, developed here for two planar convex particles of otherwise arbitrary geometry, yields a simple asymptotic approximation limited to neither small zeta potentials nor the "close-proximity" assumption underlying Derjaguin's approximation. Starting from the nonlinear Poisson-Boltzmann formulation, boundary-layer solutions describing the thin diffuse-charge layers are asymptotically matched to a WKBJ expansion valid in the bulk, where the potential is exponentially small. The latter expansion describes the bulk potential as superposed contributions conveyed by "rays" emanating normally from the boundary layers. On a special curve generated by the centers of all circles maximally inscribed between the two particles, the bulk stress, which is associated with the nonlinearly interacting ray contributions, decays exponentially with distance from the center of the smallest of these circles. The force is then obtained by integrating the traction along this curve using Laplace's method. We illustrate the usefulness of our theory by comparing it, alongside Derjaguin's approximation, with numerical simulations in the case of two parallel cylinders at low potentials. By combining our result and Derjaguin's approximation, the interaction force is provided at arbitrary interparticle separations. Our theory can be generalized to arbitrary three-dimensional geometries, nonideal electrolyte models, and other physical scenarios where exponentially decaying fields give rise to forces.

  17. Robust Bayesian Fluorescence Lifetime Estimation, Decay Model Selection and Instrument Response Determination for Low-Intensity FLIM Imaging

    PubMed Central

    Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.

    2016-01-01

    We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modelling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322

  18. Lambert-Beer law in ocean waters: optical properties of water and of dissolved/suspended material, optical energy budgets.

    PubMed

    Stavn, R H

    1988-01-15

    The role of the Lambert-Beer law in ocean optics is critically examined. The Lambert-Beer law and the three-parameter model of the submarine light field are used to construct an optical energy budget for any hydrosol. It is further applied to the analytical exponential decay coefficient of the light field and used to estimate the optical properties and effects of the dissolved/suspended component in upper ocean layers. The concepts of the empirical exponential decay coefficient (diffuse attenuation coefficient) of the light field and a constant exponential decay coefficient for molecular water are analyzed quantitatively. A constant exponential decay coefficient for water is rejected. The analytical exponential decay coefficient is used to analyze optical gradients in ocean waters.
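
    As a small worked illustration of the empirical exponential decay coefficient (diffuse attenuation coefficient) discussed above, the sketch below estimates K_d from a hypothetical downwelling irradiance profile by log-linear regression, assuming Lambert-Beer decay E(z) = E0*exp(-Kd*z).

        import numpy as np

        # hypothetical downwelling irradiance profile
        z = np.array([0.0, 5.0, 10.0, 15.0, 20.0])        # depth, m
        E = np.array([100.0, 61.0, 37.0, 22.5, 13.7])     # irradiance, W m^-2

        # log-linear regression: ln E = ln E0 - Kd * z
        slope, intercept = np.polyfit(z, np.log(E), 1)
        print(f"Kd = {-slope:.3f} m^-1, E0 = {np.exp(intercept):.1f} W m^-2")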

  19. Force Measurements of Single and Double Barrier DBD Plasma Actuators in Quiescent Air

    NASA Technical Reports Server (NTRS)

    Hoskinson, Alan R.; Hershkowitz, Noah; Ashpis, David E.

    2008-01-01

    We have performed measurements of the force induced by both single (one electrode insulated) and double (both electrodes insulated) dielectric barrier discharge plasma actuators in quiescent air. We have shown that, for single barrier actuators, as the electrode diameter decreases below previously studied values, the induced force increases exponentially rather than linearly. This behavior has been experimentally verified using two different measurement techniques: stagnation probe measurements of the induced flow velocity and direct measurement of the force using an electronic balance. In addition, we have shown that the induced force is independent of the material used for the exposed electrode. The same techniques have shown that the induced force of a double barrier actuator increases with decreasing narrow-electrode diameter.

  20. U-shaped, double-tapered, fiber-optic sensor for effective biofilm growth monitoring.

    PubMed

    Zhong, Nianbing; Zhao, Mingfu; Li, Yishan

    2016-02-01

    To monitor biofilm growth on polydimethylsiloxane in a photobioreactor effectively, the biofilm cells and liquids were separated and measured using a sensor with two U-shaped, double-tapered, fiber-optic probes (Sen. and Ref. probes). The probes' Au-coated hemispherical tips enabled double-pass evanescent field absorption. The Sen. probe sensed the cells and liquids inside the biofilm. The polyimide-silica hybrid-film-coated Ref. probe separated the liquids from the biofilm cells and analyzed the liquid concentration. The biofilm structure and active biomass were also examined to confirm the effectiveness of the measurement using a simulation model. The sensor was found to effectively respond to the biofilm growth in the adsorption through exponential phases at thicknesses of 0-536 μm.

  1. Full-subcarriers Polar-OFDM for optical spectrum-efficient transmission exploiting Polarization Multiplexing

    NASA Astrophysics Data System (ADS)

    Guo, Lei; Liu, Yejun; Zhou, Yufang; Wei, Xuetao; Liu, Yuying

    2018-07-01

    The exponential growth of the demand for broadband services has imposed great challenges on the design of spectrum-efficient optical transmission systems in Passive Optical Networks (PONs). Recently, an innovative Orthogonal Frequency Division Multiplexing (OFDM) scheme, called Polar-OFDM (P-OFDM), has emerged as a promising solution to boost the spectral efficiency of optical transmission in PONs. However, traditional P-OFDM does not yet perform best in spectral efficiency, as it uses only half of the total subcarriers. In this paper, we verify a promising complementarity between Polarization Multiplexing (POLMUX) and P-OFDM aimed at higher spectral efficiency. We then propose full-subcarriers P-OFDM, loading data on the even-indexed subcarriers of the X polarization and the odd-indexed subcarriers of the Y polarization, respectively. Thus, all of the subcarriers are utilized for effective data transmission, which doubles the spectral efficiency. More importantly, because the subcarriers are interlaced on different polarizations, the cross-polarization interference can be significantly mitigated, which enables independent channel estimation and equalization at the receiver to recover the data carried on each polarization. Our evaluation results demonstrate that the proposed system achieves double the spectral efficiency of traditional P-OFDM with a reasonable Bit Error Rate (BER) performance loss.

  2. Policy Effects in Hyperbolic vs. Exponential Models of Consumption and Retirement

    PubMed Central

    Gustman, Alan L.; Steinmeier, Thomas L.

    2012-01-01

    This paper constructs a structural retirement model with hyperbolic preferences and uses it to estimate the effect of several potential Social Security policy changes. Estimated effects of policies are compared using two models, one with hyperbolic preferences and one with standard exponential preferences. Sophisticated hyperbolic discounters may accumulate substantial amounts of wealth for retirement. We find it is frequently difficult to distinguish empirically between models with the two types of preferences on the basis of asset accumulation paths or consumption paths around the period of retirement. Simulations suggest that, despite the much higher initial time preference rate, individuals with hyperbolic preferences may actually value a real annuity more than individuals with exponential preferences who have accumulated roughly equal amounts of assets. This appears to be especially true for individuals with relatively high time preference rates or who have low assets for whatever reason. This affects the tradeoff between current benefits and future benefits on which many of the retirement incentives of the Social Security system rest. Simulations involving increasing the early entitlement age and increasing the delayed retirement credit do not show a great deal of difference whether exponential or hyperbolic preferences are used, but simulations for eliminating the earnings test show a non-trivially greater effect when exponential preferences are used. PMID:22711946

  3. Policy Effects in Hyperbolic vs. Exponential Models of Consumption and Retirement.

    PubMed

    Gustman, Alan L; Steinmeier, Thomas L

    2012-06-01

    This paper constructs a structural retirement model with hyperbolic preferences and uses it to estimate the effect of several potential Social Security policy changes. Estimated effects of policies are compared using two models, one with hyperbolic preferences and one with standard exponential preferences. Sophisticated hyperbolic discounters may accumulate substantial amounts of wealth for retirement. We find it is frequently difficult to distinguish empirically between models with the two types of preferences on the basis of asset accumulation paths or consumption paths around the period of retirement. Simulations suggest that, despite the much higher initial time preference rate, individuals with hyperbolic preferences may actually value a real annuity more than individuals with exponential preferences who have accumulated roughly equal amounts of assets. This appears to be especially true for individuals with relatively high time preference rates or who have low assets for whatever reason. This affects the tradeoff between current benefits and future benefits on which many of the retirement incentives of the Social Security system rest. Simulations involving increasing the early entitlement age and increasing the delayed retirement credit do not show a great deal of difference whether exponential or hyperbolic preferences are used, but simulations for eliminating the earnings test show a non-trivially greater effect when exponential preferences are used.

  4. A gamma variate model that includes stretched exponential is a better fit for gastric emptying data from mice

    PubMed Central

    Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.

    2015-01-01

    Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
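
    A hedged sketch of the model comparison: a classic gamma-variate curve y = a*t^b*exp(-t/c) versus the same form with a stretched-exponential tail, ranked by AIC. The parametrization and the breath-test numbers are illustrative assumptions, not the study's data.

        import numpy as np
        from scipy.optimize import curve_fit

        def gamma_variate(t, a, b, c):
            return a * t**b * np.exp(-t / c)

        def gamma_stretched(t, a, b, c, d):
            # stretched-exponential tail; d = 1 recovers the classic gamma variate
            return a * t**b * np.exp(-(t / c) ** d)

        def aic(y, yhat, k):
            n = len(y)
            return n * np.log(np.sum((y - yhat) ** 2) / n) + 2 * k

        t = np.linspace(1.0, 240.0, 60)                    # minutes
        y = gamma_stretched(t, 0.001, 1.6, 45.0, 0.8)      # synthetic breath signal
        y += np.random.default_rng(1).normal(0.0, 0.002, t.size)

        for f, p0 in ((gamma_variate, (0.002, 1.0, 60.0)),
                      (gamma_stretched, (0.002, 1.0, 60.0, 1.0))):
            popt, _ = curve_fit(f, t, y, p0=p0, maxfev=20000)
            print(f.__name__, "AIC =", round(aic(y, f(t, *popt), len(popt)), 1))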

  5. Organic/inorganic hybrid synaptic transistors gated by proton conducting methylcellulose films

    NASA Astrophysics Data System (ADS)

    Wan, Chang Jin; Zhu, Li Qiang; Wan, Xiang; Shi, Yi; Wan, Qing

    2016-01-01

    The idea of building a brain-inspired cognitive system has been around for several decades. Recently, electric-double-layer transistors gated by ion conducting electrolytes were reported as the promising candidates for synaptic electronics and neuromorphic system. In this letter, indium-zinc-oxide transistors gated by proton conducting methylcellulose electrolyte films were experimentally demonstrated with synaptic plasticity including paired-pulse facilitation and spatiotemporal-correlated dynamic logic. More importantly, a model based on proton-related electric-double-layer modulation and stretched-exponential decay function was proposed, and the theoretical results are in good agreement with the experimentally measured synaptic behaviors.

  6. Organic/inorganic hybrid synaptic transistors gated by proton conducting methylcellulose films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Chang Jin; Wan, Qing, E-mail: wanqing@nju.edu.cn, E-mail: yshi@nju.edu.cn; Ningbo Institute of Material Technology and Engineering, Chinese Academy of Sciences, Ningbo 315201

    The idea of building a brain-inspired cognitive system has been around for several decades. Recently, electric-double-layer transistors gated by ion conducting electrolytes were reported as the promising candidates for synaptic electronics and neuromorphic system. In this letter, indium-zinc-oxide transistors gated by proton conducting methylcellulose electrolyte films were experimentally demonstrated with synaptic plasticity including paired-pulse facilitation and spatiotemporal-correlated dynamic logic. More importantly, a model based on proton-related electric-double-layer modulation and stretched-exponential decay function was proposed, and the theoretical results are in good agreement with the experimentally measured synaptic behaviors.

  7. Multi-decadal analysis of root-zone soil moisture applying the exponential filter across CONUS

    USDA-ARS?s Scientific Manuscript database

    This study applied the exponential filter to produce an estimate of root-zone soil moisture (RZSM). Four types of microwave-based, surface satellite soil moisture were used. The core remotely sensed data for this study came from NASA's long-lasting AMSR-E mission. Additionally, three other products we...
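
    The exponential filter referred to here is commonly written as a recursion with a single characteristic time length T; the sketch below is that standard recursion (an assumption, since the excerpt does not spell out the exact variant used), applied to a synthetic surface soil moisture series.

        import numpy as np

        def exp_filter(ssm, t_days, T=20.0):
            # recursive exponential filter: surface series -> soil water index (SWI)
            swi = np.empty_like(ssm)
            swi[0], gain = ssm[0], 1.0
            for n in range(1, len(ssm)):
                gain = gain / (gain + np.exp(-(t_days[n] - t_days[n - 1]) / T))
                swi[n] = swi[n - 1] + gain * (ssm[n] - swi[n - 1])
            return swi

        t = np.arange(0.0, 100.0, 2.0)                    # 2-day satellite revisit
        rng = np.random.default_rng(2)
        ssm = 0.25 + 0.05 * np.sin(t / 10.0) + rng.normal(0.0, 0.02, t.size)
        print(np.round(exp_filter(ssm, t)[:5], 3))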

  8. Local perturbations perturb—exponentially-locally

    NASA Astrophysics Data System (ADS)

    De Roeck, W.; Schütz, M.

    2015-06-01

    We elaborate on the principle that for gapped quantum spin systems with local interaction, "local perturbations [in the Hamiltonian] perturb locally [the groundstate]." This principle was established by Bachmann et al. [Commun. Math. Phys. 309, 835-871 (2012)], relying on the "spectral flow technique" or "quasi-adiabatic continuation" [M. B. Hastings, Phys. Rev. B 69, 104431 (2004)] to obtain locality estimates with sub-exponential decay in the distance to the spatial support of the perturbation. We use ideas of Hamza et al. [J. Math. Phys. 50, 095213 (2009)] to obtain, similarly, a transformation between gapped eigenvectors and their perturbations that is local with exponential decay. This allows us to improve locality bounds on the effect of perturbations on the low-lying states in certain gapped models with a unique "bulk ground state" or "topological quantum order." We also give an estimate of the exponential decay of correlations in models with impurities, where some relevant correlations decay faster than one would naively infer from the global gap of the system, as one also expects in disordered systems with a localized groundstate.

  9. Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space

    PubMed Central

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters. PMID:24603904

  10. Historical Patterns of Change: The Lessons of the 1980s.

    ERIC Educational Resources Information Center

    Geiger, Roger L.

    This paper seeks to assess the current state of academic research in light of long-term trends in the development of science. It presents three perspectives on the growth of scientific research: (1) Derek de Solla Price's (1963) hypothesis that science has exhibited exponential growth, roughly doubling every 15 years since the 17th century; (2)…

  11. Psychophysics of time perception and intertemporal choice models

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.

    2008-03-01

    Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting, general hyperbolic discounting (exponential discounting with logarithmic time perception following the Weber-Fechner law, a q-exponential discount model based on Tsallis's statistics), simple hyperbolic discounting, and Stevens' power law-exponential discounting (exponential discounting with Stevens' power-law time perception). In order to examine the fit of the models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small-sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processes underlying temporal discounting and time perception are discussed.
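
    A hedged sketch of the model comparison described: three candidate discount functions are fitted to hypothetical indifference points and ranked by AICc. The q-exponential form [1 + (1 - q)kD]^(-1/(1 - q)) reduces to exponential discounting as q -> 1 and to simple hyperbolic discounting at q = 0; the indifference-point values are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        models = {
            "exponential": lambda D, k: np.exp(-k * D),
            "hyperbolic":  lambda D, k: 1.0 / (1.0 + k * D),
            "q-exponential":
                lambda D, k, q: (1.0 + (1.0 - q) * k * D) ** (-1.0 / (1.0 - q)),
        }

        def aicc(y, yhat, k, n):
            sse = np.sum((y - yhat) ** 2)
            return n * np.log(sse / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

        D = np.array([1.0, 7.0, 30.0, 90.0, 180.0, 365.0, 1825.0])  # delays, days
        v = np.array([0.95, 0.85, 0.70, 0.55, 0.45, 0.35, 0.15])    # indifference points

        for name, f in models.items():
            if name == "q-exponential":
                p0, bounds = (0.01, 0.5), ([0.0, 0.0], [10.0, 0.99])
            else:
                p0, bounds = (0.01,), ([0.0], [10.0])
            popt, _ = curve_fit(f, D, v, p0=p0, bounds=bounds, maxfev=10000)
            print(name, np.round(popt, 4),
                  "AICc =", round(aicc(v, f(D, *popt), len(popt), len(D)), 1))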

  12. Effect of local minima on adiabatic quantum optimization.

    PubMed

    Amin, M H S

    2008-04-04

    We present a perturbative method to estimate the spectral gap for adiabatic quantum optimization, based on the structure of the energy levels in the problem Hamiltonian. We show that, for problems that have an exponentially large number of local minima close to the global minimum, the gap becomes exponentially small making the computation time exponentially long. The quantum advantage of adiabatic quantum computation may then be accessed only via the local adiabatic evolution, which requires phase coherence throughout the evolution and knowledge of the spectrum. Such problems, therefore, are not suitable for adiabatic quantum computation.

  13. Combining Relevance Vector Machines and exponential regression for bearing residual life estimation

    NASA Astrophysics Data System (ADS)

    Di Maio, Francesco; Tsui, Kwok Leung; Zio, Enrico

    2012-08-01

    In this paper we present a new procedure for estimating the bearing Residual Useful Life (RUL) by combining data-driven and model-based techniques. We resort to (i) Relevance Vector Machines (RVMs) for selecting a low number of significant basis functions, called Relevant Vectors (RVs), and (ii) exponential regression to compute and continuously update residual life estimates. The combination of these techniques is developed with reference to partially degraded thrust ball bearings and tested on real-world vibration-based degradation data. On the case study considered, the proposed procedure outperforms other model-based methods, with the added value of an adequate representation of the uncertainty associated with the estimates, whose credibility is quantified by the Prognostic Horizon (PH) metric.
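
    The RVM step is not reproduced here; as a hedged sketch of the exponential-regression half only, the code below fits an exponential growth law to a hypothetical degradation feature and extrapolates it to a failure threshold to obtain an RUL estimate.

        import numpy as np

        # hypothetical vibration-amplitude degradation feature over operating hours
        t = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
        feat = np.array([0.10, 0.13, 0.18, 0.24, 0.33])
        THRESHOLD = 1.0                                   # assumed failure level

        # exponential regression feat = a * exp(b * t) via log-linear least squares
        b, log_a = np.polyfit(t, np.log(feat), 1)
        t_fail = (np.log(THRESHOLD) - log_a) / b          # fit crosses the threshold
        print(f"RUL estimate: {t_fail - t[-1]:.0f} h")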

  14. Proposal for a standardised identification of the mono-exponential terminal phase for orally administered drugs.

    PubMed

    Scheerans, Christian; Derendorf, Hartmut; Kloft, Charlotte

    2008-04-01

    The area under the plasma concentration-time curve from time zero to infinity (AUC(0-inf)) is generally considered to be the most appropriate measure of total drug exposure for bioavailability/bioequivalence studies of orally administered drugs. However, the lack of a standardised method for identifying the mono-exponential terminal phase of the concentration-time curve causes variability in the estimated AUC(0-inf). The present investigation introduces a simple method, called the two times t(max) method (TTT method), to reliably identify the mono-exponential terminal phase in the case of oral administration. The new method was tested by Monte Carlo simulation in Excel and compared with the adjusted r-squared algorithm (ARS algorithm) frequently used in pharmacokinetic software programs. Statistical diagnostics of three different scenarios, each with 10,000 hypothetical patients, showed that the new method provided unbiased average AUC(0-inf) estimates for orally administered drugs with a monophasic concentration-time curve post maximum concentration. In addition, the TTT method generally provided more precise estimates of AUC(0-inf) than the ARS algorithm. It was concluded that the TTT method is the most reasonable tool to use as a standardised method in pharmacokinetic analysis, especially bioequivalence studies, to reliably identify the mono-exponential terminal phase for orally administered drugs showing a monophasic concentration-time profile.
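
    A minimal sketch of the rule as named in the abstract: samples later than twice t(max) define the terminal phase, a log-linear regression gives the terminal rate constant lambda_z, and AUC(0-inf) is the trapezoidal AUC plus the extrapolated tail C_last/lambda_z. The concentration-time values are invented.

        import numpy as np

        t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0, 24.0])   # h
        c = np.array([1.2, 2.8, 3.5, 3.1, 2.6, 1.8, 1.2, 0.55, 0.06])   # mg/L

        tmax = t[np.argmax(c)]
        terminal = t > 2.0 * tmax            # TTT rule: terminal phase after 2 * tmax
        lam_z = -np.polyfit(t[terminal], np.log(c[terminal]), 1)[0]

        auc_0_last = np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2.0)  # trapezoidal rule
        auc_0_inf = auc_0_last + c[-1] / lam_z                    # mono-exponential tail
        print(f"lambda_z = {lam_z:.3f} 1/h, AUC(0-inf) = {auc_0_inf:.2f} mg*h/L")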

  15. U-shaped, double-tapered, fiber-optic sensor for effective biofilm growth monitoring

    PubMed Central

    Zhong, Nianbing; Zhao, Mingfu; Li, Yishan

    2016-01-01

    To monitor biofilm growth on polydimethylsiloxane in a photobioreactor effectively, the biofilm cells and liquids were separated and measured using a sensor with two U-shaped, double-tapered, fiber-optic probes (Sen. and Ref. probes). The probes’ Au-coated hemispherical tips enabled double-pass evanescent field absorption. The Sen. probe sensed the cells and liquids inside the biofilm. The polyimide–silica hybrid-film-coated Ref. probe separated the liquids from the biofilm cells and analyzed the liquid concentration. The biofilm structure and active biomass were also examined to confirm the effectiveness of the measurement using a simulation model. The sensor was found to effectively respond to the biofilm growth in the adsorption through exponential phases at thicknesses of 0–536 μm. PMID:26977344

  16. Shift-Invariant Image Reconstruction of Speckle-Degraded Images Using Bispectrum Estimation

    DTIC Science & Technology

    1990-05-01

    process with the requisite negative exponential pdf. I call this model the Negative Exponential Model (NENI). The NENI flowchart is seen in Figure 6. [The remainder of this excerpt is OCR residue of figure captions: statistical histograms and phase for the negative-exponential PDF multiplication method; truth object speckled via the NENI; histogram of speckle.]

  17. A hybrid MD-kMC algorithm for folding proteins in explicit solvent.

    PubMed

    Peter, Emanuel Karl; Shea, Joan-Emma

    2014-04-14

    We present a novel hybrid MD-kMC algorithm that is capable of efficiently folding proteins in explicit solvent. We apply this algorithm to the folding of a small protein, Trp-Cage. Different kMC move sets that capture different possible rate-limiting steps are implemented. The first uses secondary structure formation as the relevant rate event (a combination of dihedral rotations and hydrogen-bond formation and breakage). The second uses tertiary structure formation through formation of contacts via translational moves. Both methods fold the protein, but via different mechanisms and with different folding kinetics. The first method leads to folding via a structured helical state, with kinetics fit by a single exponential. The second method leads to folding via a collapsed loop, with kinetics poorly fit by single or double exponentials. In both cases, folding times are faster than experimentally reported values. The secondary and tertiary move sets are integrated in a third MD-kMC implementation, which now leads to folding of the protein via both pathways, with single- and double-exponential fits to the rates, and to folding rates in good agreement with experimental values. The competition between secondary and tertiary structure leads to a longer search for the helix-rich intermediate in the case of the first pathway, and to the emergence of a kinetically trapped, long-lived molten-globule collapsed state in the case of the second pathway. The algorithm presented not only captures experimentally observed folding intermediates and kinetics, but yields insights into the relative roles of local and global interactions in determining folding mechanisms and rates.

  18. Characteristics of a Linearly Tapered Slot Antenna (LTSA) Conformed Longitudinally Around a Cylinder

    NASA Technical Reports Server (NTRS)

    Jordan, Jennifer L.; Ponchak, George E.; Tavassolian, Negar; Tentzeris, Manos M.

    2007-01-01

    The family of tapered slot antennas (TSAs) is suitable for numerous applications. Their ease of fabrication, wide bandwidth, and high gain make them desirable for military and commercial systems. Fabrication on thin, flexible substrates allows the TSA to be conformed over a given body, such as an aircraft wing or a piece of clothing for wearable networks. Previously, a Double Exponentially Tapered Slot Antenna (DETSA) was conformed around an exponential curvature, which showed that the main beam skews towards the direction of curvature. This paper presents a Linearly Tapered Slot Antenna (LTSA) conformed longitudinally around a cylinder. Measured and simulated radiation patterns and the direction of maximum H co-polarization (Hco) as a function of the cylinder radius are presented.

  19. Monte Carlo calculations of PET coincidence timing: single and double-ended readout

    NASA Astrophysics Data System (ADS)

    Derenzo, Stephen E.; Choong, Woon-Seng; Moses, William W.

    2015-09-01

    We present Monte Carlo computational methods for estimating the coincidence resolving time (CRT) of scintillator detector pairs in positron emission tomography (PET) and present results for Lu2SiO5 : Ce (LSO), LaBr3 : Ce, and a hypothetical ultra-fast scintillator with a 1 ns decay time. The calculations were applied to both single-ended and double-ended photodetector readout with constant-fraction triggering. They explicitly include (1) the intrinsic scintillator properties (luminosity, rise time, decay time, and index of refraction), (2) the exponentially distributed depths of interaction, (3) the optical photon transport efficiency, delay, and time dispersion, (4) the photodetector properties (fill factor, quantum efficiency, transit time jitter, and single electron response), and (5) the determination of the constant fraction trigger level that minimizes the CRT. The calculations for single-ended readout include the delayed photons from the opposite reflective surface. The calculations for double-ended readout include (1) the simple average of the two photodetector trigger times, (2) more accurate estimators of the annihilation photon entrance time using the pulse height ratio to estimate the depth of interaction and correct for annihilation photon, optical photon, and trigger delays, and (3) the statistical lower bound for interactions at the center of the crystal. For time-of-flight (TOF) PET we combine stopping power and TOF information in a figure of merit equal to the sensitivity gain relative to whole-body non-TOF PET using LSO. For LSO crystals 3 mm  ×  3 mm  ×  30 mm, a decay time of 37 ns, a total photoelectron count of 4000, and a photodetector with 0.2 ns full-width at half-maximum (fwhm) timing jitter, single-ended readout has a CRT of 0.16 ns fwhm and double-ended readout has a CRT of 0.111 ns fwhm. For LaBr3 : Ce crystals 3 mm  ×  3 mm  ×  30 mm, a rise time of 0.2 ns, a decay time of 18 ns, and a total of 7600 photoelectrons the CRT numbers are 0.14 ns and 0.072 ns fwhm, respectively. For a hypothetical ultra-fast scintillator 3 mm  ×  3 mm  ×  30 mm, a decay time of 1 ns, and a total of 4000 photoelectrons, the CRT numbers are 0.070 and 0.020 ns fwhm, respectively. Over a range of examples, values for double-ended readout are about 10% larger than the statistical lower bound.

  20. Monte Carlo calculations of PET coincidence timing: single and double-ended readout.

    PubMed

    Derenzo, Stephen E; Choong, Woon-Seng; Moses, William W

    2015-09-21

    We present Monte Carlo computational methods for estimating the coincidence resolving time (CRT) of scintillator detector pairs in positron emission tomography (PET) and present results for Lu2SiO5 : Ce (LSO), LaBr3 : Ce, and a hypothetical ultra-fast scintillator with a 1 ns decay time. The calculations were applied to both single-ended and double-ended photodetector readout with constant-fraction triggering. They explicitly include (1) the intrinsic scintillator properties (luminosity, rise time, decay time, and index of refraction), (2) the exponentially distributed depths of interaction, (3) the optical photon transport efficiency, delay, and time dispersion, (4) the photodetector properties (fill factor, quantum efficiency, transit time jitter, and single electron response), and (5) the determination of the constant fraction trigger level that minimizes the CRT. The calculations for single-ended readout include the delayed photons from the opposite reflective surface. The calculations for double-ended readout include (1) the simple average of the two photodetector trigger times, (2) more accurate estimators of the annihilation photon entrance time using the pulse height ratio to estimate the depth of interaction and correct for annihilation photon, optical photon, and trigger delays, and (3) the statistical lower bound for interactions at the center of the crystal. For time-of-flight (TOF) PET we combine stopping power and TOF information in a figure of merit equal to the sensitivity gain relative to whole-body non-TOF PET using LSO. For LSO crystals 3 mm  ×  3 mm  ×  30 mm, a decay time of 37 ns, a total photoelectron count of 4000, and a photodetector with 0.2 ns full-width at half-maximum (fwhm) timing jitter, single-ended readout has a CRT of 0.16 ns fwhm and double-ended readout has a CRT of 0.111 ns fwhm. For LaBr3 : Ce crystals 3 mm  ×  3 mm  ×  30 mm, a rise time of 0.2 ns, a decay time of 18 ns, and a total of 7600 photoelectrons the CRT numbers are 0.14 ns and 0.072 ns fwhm, respectively. For a hypothetical ultra-fast scintillator 3 mm  ×  3 mm  ×  30 mm, a decay time of 1 ns, and a total of 4000 photoelectrons, the CRT numbers are 0.070 and 0.020 ns fwhm, respectively. Over a range of examples, values for double-ended readout are about 10% larger than the statistical lower bound.

  1. Geometry of the q-exponential distribution with dependent competing risks and accelerated life testing

    NASA Astrophysics Data System (ADS)

    Zhang, Fode; Shi, Yimin; Wang, Ruibing

    2017-02-01

    In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Since the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered a generalized Gumbel copula, is discussed to illustrate the structure of the dependent random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimates and the levels of association under different hybrid progressive censoring schemes (HPCSs).

  2. Recurrence intervals for the closure of the Dutch Maeslant surge barrier

    NASA Astrophysics Data System (ADS)

    van den Brink, Henk W.; de Goederen, Sacha

    2017-09-01

    The Dutch Maeslant Barrier, a movable surge barrier in the mouth of the river Rhine, closes when there is a surge in the North Sea and the water level in the river at Rotterdam exceeds 3 m above mean sea level. An important aspect of the failure probability is that the barrier might get damaged during a closure and that, within the time needed for repair, a second critical storm surge may occur. With an estimated closure frequency of once in 10 years, the question arises of how often the barrier has to be closed twice within one month. Instead of tackling this problem by applying statistical models to the (short) observational series, we solve it by combining the surge model WAQUA/DCSMv5 with the output of all seasonal forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) in the period 1981-2015, which together yield a pseudo-observational series of more than 6000 years. We show that the Poisson process model leads to wrong results, as it neglects the temporal correlations that are present on daily, weekly and monthly timescales. By counting the number of double events over a threshold of 2.5 m and assuming that the number of events is exponentially related to the threshold, it is found that two closures occur within a month on average once in 150 years, and within a week once in 330 years. The large uncertainty in these recurrence intervals, more than a factor of two, is caused by the sensitivity of the results to the Gumbel parameters of the observed record, which are used for bias correction. Sea level rise has a significant impact on the recurrence times of both single and double closures: the frequency of single closures doubles with every 18 cm of mean sea level rise (assuming that other influences remain unchanged), and that of double closures with every 10 cm. This implies a 3-14 times higher probability of a double closure for a 15-40 cm sea level rise in 2050 (according to the KNMI climate scenarios).
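
    To make the double-event counting concrete, here is a hedged sketch that scans a long synthetic series of closure dates for pairs closer than a week or a month. Note the deliberate simplification: the dates below are drawn independently (Poisson-like), which, as the abstract stresses, ignores the temporal correlations and therefore understates the true double-closure rate.

        import numpy as np

        def doubles_within(event_days, window_days):
            # pairs of successive events separated by less than the window
            gaps = np.diff(np.sort(event_days))
            return int(np.sum(gaps < window_days))

        rng = np.random.default_rng(3)
        years = 6000
        n_events = years // 10                         # ~1 closure per 10 years
        event_days = rng.uniform(0.0, years * 365.25, size=n_events)

        for window, label in ((7.0, "week"), (30.0, "month")):
            n = doubles_within(event_days, window)
            rate = years / n if n else float("inf")
            print(f"double closure within a {label}: about once in {rate:.0f} years")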

  3. A coupled cluster theory with iterative inclusion of triple excitations and associated equation of motion formulation for excitation energy and ionization potential

    NASA Astrophysics Data System (ADS)

    Maitra, Rahul; Akinaga, Yoshinobu; Nakajima, Takahito

    2017-08-01

    A single-reference coupled cluster theory that is capable of including the effect of connected triple excitations has been developed and implemented. This is achieved by regrouping the terms appearing in perturbation theory and parametrizing through two different sets of exponential operators: one of the exponentials, involving general substitution operators, annihilates the ground state but has a non-vanishing effect when it acts on excited determinants, while the other is the regular single and double excitation operator in the sense of conventional coupled cluster theory, which acts on the Hartree-Fock ground state. The two sets of operators are solved as coupled non-linear equations in an iterative manner without significant increase in computational cost over conventional coupled cluster theory with singles and doubles excitations. A number of physically motivated and computationally advantageous sufficiency conditions are invoked to arrive at the working equations, which have been applied to determine the ground state energies of a number of small prototypical systems having weak multi-reference character. With the knowledge of the correlated ground state, we have reconstructed the triple excitation operator and performed equation-of-motion coupled cluster with singles, doubles, and triples to obtain the ionization potentials and excitation energies of these molecules as well. Our results suggest that this is quite a reasonable scheme to capture the effect of connected triple excitations as long as the ground state remains weakly multi-reference.

  4. Bridging the Gap between Curriculum Planning Policies and Pre-Service Teachers' Needs

    ERIC Educational Resources Information Center

    Castro-Garces, Angela Yicely; Arboleda, Argemiro Arboleda

    2017-01-01

    The challenge and satisfaction of being a teacher are doubled when one has the precious task of being a teacher trainer, as our practices replicate exponentially, touching the lives of people we do not even get to meet. Accordingly, this article presents the analysis of a process that brought tensions to a teacher training program because of the…

  5. The Exponential Growth of Mathematics and Technology at the University of Portsmouth

    ERIC Educational Resources Information Center

    McCabe, Michael

    2009-01-01

    The number of students studying university mathematics in the UK has been increasing gradually and linearly since 2002. At the University of Portsmouth, the number of students studying mathematics doubled from 30 to 60 between 2002 and 2007, then increased to about 240% of that figure in just one year, to over 140 in 2008. This article explains how learning technology has…

  6. DOUBLE-EXPONENTIAL FITTING FUNCTION FOR EVALUATION OF COSMIC-RAY-INDUCED NEUTRON FLUENCE RATE IN ARBITRARY LOCATIONS.

    PubMed

    Li, Huailiang; Yang, Yigang; Wang, Qibiao; Tuo, Xianguo; Julian Henderson, Mark; Courtois, Jérémie

    2017-12-01

    The fluence rate of cosmic-ray-induced neutrons (CRINs) varies with many environmental factors. While many current simulation and experimental studies have focused mainly on the altitude variation, the specific way in which CRINs vary with geomagnetic cutoff rigidity (which is related to latitude and longitude) has not been well considered. In this article, a double-exponential fitting function, F = (A1·e^(-A2·CR) + A3)·e^(B1·Al), where CR is the geomagnetic cutoff rigidity and Al the altitude, is proposed to evaluate the CRINs' fluence rate as it varies with cutoff rigidity and altitude. The fit attains R^2 values up to 0.9954, and the CRINs' fluence rate at an arbitrary location (latitude, longitude and altitude) can be easily evaluated from the proposed function. Field measurements of the CRINs' fluence rate and H*(10) rate on Mt. Emei and Mt. Bowa were carried out using FHT-762 and LB 6411 neutron probes, respectively, and the evaluation results show that the fitting function agrees well with the measurements. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
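
    The proposed function is easy to evaluate once the constants are known; the sketch below implements it with placeholder coefficients (the fitted values A1-A3 and B1 are not reported in this excerpt, so the numbers here are purely illustrative).

        import numpy as np

        def crin_fluence_rate(cutoff_rigidity_gv, altitude_km,
                              a1=0.012, a2=0.16, a3=0.004, b1=1.0):
            # double-exponential CRIN model F = (A1*exp(-A2*CR) + A3) * exp(B1*Al);
            # the coefficients are placeholders, not the paper's fitted values
            return (a1 * np.exp(-a2 * cutoff_rigidity_gv) + a3) * np.exp(b1 * altitude_km)

        # example: cutoff rigidity 5 GV at 3 km altitude
        print(f"{crin_fluence_rate(5.0, 3.0):.4f} (arbitrary units)")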

  7. An Optimization of Inventory Demand Forecasting in University Healthcare Centre

    NASA Astrophysics Data System (ADS)

    Bon, A. T.; Ng, T. K.

    2017-01-01

    The healthcare industry is an important field nowadays as it concerns human health. Forecasting demand for health services is therefore an important step in managerial decision making for all healthcare organizations. Hence, a case study was conducted in a University Health Centre to collect historical demand data for Panadol 650 mg over 68 months, from January 2009 until August 2014. The aim of the research is to optimize the overall inventory demand through forecasting techniques. Quantitative (time series) forecasting models were used in the case study to forecast future data as a function of past data. The data pattern needs to be identified before applying the forecasting techniques; here the pattern is a trend, and ten forecasting techniques were applied using the Risk Simulator software. Lastly, the best forecasting technique is identified as the one with the smallest forecasting error. The ten forecasting techniques comprise the single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winters additive, seasonal additive, Holt-Winters multiplicative, seasonal multiplicative and Autoregressive Integrated Moving Average (ARIMA) methods. According to the forecasting accuracy measurements, the best forecasting technique is regression analysis.
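
    Of the ten techniques listed, double exponential smoothing is the one most directly tied to this collection's theme; below is a minimal implementation of Brown's variant with an illustrative demand series (alpha and the data are assumptions, not the case-study values).

        import numpy as np

        def brown_double_exponential(y, alpha=0.3, horizon=1):
            # Brown's double exponential smoothing, horizon-step-ahead forecast
            s1 = s2 = y[0]
            for value in y:
                s1 = alpha * value + (1 - alpha) * s1     # first smoothing
                s2 = alpha * s1 + (1 - alpha) * s2        # second smoothing
            level = 2 * s1 - s2
            trend = alpha / (1 - alpha) * (s1 - s2)
            return level + horizon * trend

        demand = np.array([120, 132, 128, 141, 150, 147, 160, 158])  # monthly demand
        print(f"next-month forecast: {brown_double_exponential(demand):.1f}")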

  8. On new non-modal hydrodynamic stability modes and resulting non-exponential growth rates - a Lie symmetry approach

    NASA Astrophysics Data System (ADS)

    Oberlack, Martin; Nold, Andreas; Sanjon, Cedric Wilfried; Wang, Yongqi; Hau, Jan

    2016-11-01

    Classical hydrodynamic stability theory for laminar shear flows, whether considering long-term stability or transient growth, is based on the normal-mode ansatz, or, in other words, on an exponential function in space (stream-wise direction) and time. Recently, it became clear that the normal-mode ansatz and the resulting Orr-Sommerfeld equation are based on essentially three fundamental symmetries of the linearized Euler and Navier-Stokes equations: translation in space and time and scaling of the dependent variable. Further, the Kelvin mode of linear shear flows seemed to be an exception in this context, as it admits a fourth symmetry resulting in the classical Kelvin mode, which is rather different from a normal mode. However, very recently it was discovered that most of the classical canonical shear flows, such as linear shear, Couette, plane and round Poiseuille, Taylor-Couette, the Lamb-Oseen vortex and the asymptotic suction boundary layer, admit more symmetries. This, in turn, led to new problem-specific non-modal ansatz functions. In contrast to the exponential growth rate in time of the modal ansatz, the new non-modal ansatz functions usually lead to an algebraic growth or decay rate, while for the asymptotic suction boundary layer a double-exponential growth or decay is observed.

  9. Numerically stable formulas for a particle-based explicit exponential integrator

    NASA Astrophysics Data System (ADS)

    Nadukandi, Prashanth

    2015-05-01

    Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
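
    The numerical issue the paper addresses can be illustrated in a few lines: the first divided difference of exp over {x, 0}, namely (e^x - 1)/x, has a removable singularity at x = 0 and loses precision when evaluated naively, while a truncated series (or expm1) in a prescribed neighbourhood of zero keeps full double-precision accuracy. This is a toy analogue of the idea, not the X-IVAS formulas themselves.

      import numpy as np

      def phi1_naive(x):
          return (np.exp(x) - 1.0) / x  # catastrophic cancellation near 0

      def phi1_stable(x):
          # series 1 + x/2 + x^2/6 near the removable singularity
          return np.expm1(x) / x if abs(x) > 1e-5 else 1.0 + x / 2 + x * x / 6

      for x in (1e-3, 1e-8, 1e-12):
          print(x, phi1_naive(x), phi1_stable(x))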

  10. On the hardness of high carbon ferrous martensite

    NASA Astrophysics Data System (ADS)

    Mola, J.; Ren, M.

    2018-06-01

    Due to the presence of retained austenite in martensitic steels, especially steels with high carbon concentrations, it is difficult to estimate the hardness of martensite independent of the hardness of the coexisting austenite. In the present work, the hardness of ferrous martensite with carbon concentrations in the range 0.23-1.46 mass-% was estimated by the regression analysis of hardnesses for hardened martensitic-austenitic steels containing various martensite fractions. For a given carbon concentration, the hardness of martensitic-austenitic steels was found to increase exponentially with an increase in the fraction of the martensitic constituent. The hardness of the martensitic constituent was subsequently estimated by the exponential extrapolation of the hardness of phase mixtures to 100 vol.% martensite. For martensite containing 1.46 mass-% carbon, the hardness was estimated to be 1791 HV. This estimate of martensite hardness is significantly higher than the experimental hardness of 822 HV for a phase mixture of 68 vol.% martensite and 32 vol.% austenite. The hardness obtained by exponential extrapolation is also much higher than the hardness of 1104 HV based on the rule of mixtures. The underestimated hardness of high carbon martensite in the presence of austenite is due to the non-linear dependence of hardness on the martensite fraction. The latter is also a common observation in composite materials with a soft matrix and hard reinforcing particles.

  11. An organizational metamodel for hospital emergency departments.

    PubMed

    Kaptan, Kubilay

    2014-10-01

    I introduce an organizational model describing the response of the hospital emergency department. The hybrid simulation/analytical model (called a "metamodel") can estimate a hospital's capacity and dynamic response in real time and incorporate the influence of damage to structural and nonstructural components on the organizational ones. The waiting time is the main parameter of response and is used to evaluate the disaster resilience of health care facilities. Waiting time behavior is described by using a double exponential function and its parameters are calibrated based on simulated data. The metamodel covers a large range of hospital configurations and takes into account hospital resources in terms of staff and infrastructures, operational efficiency, and the possible existence of an emergency plan; maximum capacity; and behavior both in saturated and overcapacitated conditions. The sensitivity of the model to different arrival rates, hospital configurations, and capacities and the technical and organizational policies applied during and before a disaster were investigated. This model becomes an important tool in the decision process either for the engineering profession or for policy makers.

  12. Simultaneous estimation of multiple phases in digital holographic interferometry using state space analysis

    NASA Astrophysics Data System (ADS)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-05-01

    A new approach is proposed for multiple phase estimation from a multicomponent exponential phase signal recorded in multi-beam digital holographic interferometry. It is capable of providing multidimensional measurements in a simultaneous manner from a single recording of the exponential phase signal encoding multiple phases. Each phase, within a small window around each pixel, is approximated by a first-order polynomial function of the spatial coordinates. The problem of accurately estimating the polynomial coefficients, and in turn the unwrapped phases, is formulated as a state space analysis wherein the coefficients and signal amplitudes are set as the elements of a state vector. The state estimation is performed using the extended Kalman filter. An amplitude discrimination criterion is utilized in order to unambiguously estimate the coefficients associated with the individual signal components. The performance of the proposed method is stable over a wide range of the ratio of signal amplitudes. The pixelwise phase estimation approach of the proposed method allows it to handle fringe patterns that may contain invalid regions.

  13. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373

  14. A comparison of modelling techniques used to characterise oxygen uptake kinetics during the on-transient of exercise.

    PubMed

    Bell, C; Paterson, D H; Kowalchuk, J M; Padilla, J; Cunningham, D A

    2001-09-01

    We compared estimates of the phase 2 time constant (tau) of oxygen uptake (VO2) during moderate- and heavy-intensity exercise, and of the slow component of VO2 during heavy-intensity exercise, using previously published exponential models. Estimates of tau and the slow component differed (P < 0.05) among models. For moderate-intensity exercise, a two-component exponential model, or a mono-exponential model fitted from 20 s to 3 min, was best. For heavy-intensity exercise, a three-component model fitted throughout the entire 6 min bout of exercise, or a two-component model fitted from 20 s, was best. When the time delays for the two- and three-component models were set equal, the best statistical fit was obtained; however, this model produced an inappropriately low ΔVO2/ΔWR (WR, work rate) for the projected phase 2 steady state, and the estimate of phase 2 tau was shortened compared with other models. The slow component was quantified as the difference between VO2 at end-exercise (6 min) and at 3 min (ΔVO2(6-3 min); 259 ml·min⁻¹), and also using the phase 3 amplitude terms (truncated to end-exercise) from exponential fits (409-833 ml·min⁻¹). The onset of the slow component, as identified by the phase 3 time delay parameter, was delayed approximately 2 min (rather than the arbitrary 3 min); using this delay, ΔVO2(6-2 min) was approximately 400 ml·min⁻¹. Valid, consistent methods of estimating tau and the slow component in exercise are needed to advance physiological understanding.

  15. Eruption probabilities for the Lassen Volcanic Center and regional volcanism, northern California, and probabilities for large explosive eruptions in the Cascade Range

    USGS Publications Warehouse

    Nathenson, Manuel; Clynne, Michael A.; Muffler, L.J. Patrick

    2012-01-01

    Chronologies for eruptive activity of the Lassen Volcanic Center and for eruptions from the regional mafic vents in the surrounding area of the Lassen segment of the Cascade Range are here used to estimate probabilities of future eruptions. For the regional mafic volcanism, the ages of many vents are known only within broad ranges, and two models are developed that should bracket the actual eruptive ages. These chronologies are used with exponential, Weibull, and mixed-exponential probability distributions to match the data for time intervals between eruptions. For the Lassen Volcanic Center, the probability of an eruption in the next year is 1.4×10⁻⁴ for the exponential distribution and 2.3×10⁻⁴ for the mixed exponential distribution. For the regional mafic vents, the exponential distribution gives a probability of an eruption in the next year of 6.5×10⁻⁴, but the mixed exponential distribution indicates that the current probability, 12,000 years after the last event, could be significantly lower. For the exponential distribution, the highest probability is for an eruption from a regional mafic vent. Data on areas and volumes of lava flows and domes of the Lassen Volcanic Center and of eruptions from the regional mafic vents provide constraints on the probable sizes of future eruptions. Probabilities of lava-flow coverage are similar for the Lassen Volcanic Center and for regional mafic vents, whereas the probable eruptive volumes for the mafic vents are generally smaller. Data have been compiled for large explosive eruptions (≳5 km³ in deposit volume) in the Cascade Range during the past 1.2 m.y. in order to estimate probabilities of eruption. For erupted volumes ≳5 km³, the rate of occurrence since 13.6 ka is much higher than for the entire period, and we use these data to calculate the annual probability of a large eruption at 4.6×10⁻⁴. For erupted volumes ≥10 km³, the rate of occurrence has been reasonably constant from 630 ka to the present, giving more confidence in the estimate, and we use those data to calculate the annual probability of such a large eruption at 1.4×10⁻⁵.
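
    As a quick check of the arithmetic: for an exponential (Poisson) recurrence model with annual rate lambda, the probability of at least one eruption in the next dt years is 1 - exp(-lambda*dt), which is essentially lambda*dt at rates this small. The rates below are the ones quoted in the abstract.

      import math

      def prob_within(lam, dt=1.0):
          return 1.0 - math.exp(-lam * dt)  # P(at least one event in dt years)

      for name, lam in [("Lassen Volcanic Center", 1.4e-4),
                        ("regional mafic vents", 6.5e-4),
                        ("Cascade eruptions >~5 km^3", 4.6e-4)]:
          print(f"{name}: P(next year) = {prob_within(lam):.1e}")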

  16. Multi-exponential analysis of magnitude MR images using a quantitative multispectral edge-preserving filter.

    PubMed

    Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre

    2003-03-01

    A solution for discrete multi-exponential analysis of T(2) relaxation decay curves obtained under current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise threshold above which a high percentage of true multi-exponential components is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of the noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a non-linear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.

  17. Transfer potentials shape and equilibrate monetary systems

    NASA Astrophysics Data System (ADS)

    Fischer, Robert; Braun, Dieter

    2003-04-01

    We analyze a monetary system of random money transfer on the basis of double-entry bookkeeping. Without boundary conditions, we do not reach a price equilibrium and we violate the textbook formula of the economists' quantity theory (MV = PQ). To match the resulting quantity of money with the model assumption of a constant price, we have to impose boundary conditions. They either restrict specific transfers globally or impose transfers locally. Both connect through a general framework of transfer potentials. We show that either restricted or imposed transfers can shape Gaussian, tent-shaped exponential, Boltzmann-exponential, Pareto or periodic equilibrium distributions. We derive the master equation and find its general time-dependent approximate solution. An equivalent of quantity theory for random money transfer under the boundary conditions of transfer potentials is given.

  18. Spectral Study of Measles Epidemics: The Dependence of Spectral Gradient on the Population Size of the Community

    NASA Astrophysics Data System (ADS)

    Sumi, Ayako; Olsen, Lars Folke; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi

    2003-02-01

    We have carried out spectral analysis of measles notifications in several communities in Denmark, the UK and the USA. The results confirm that each power spectral density (PSD) shows exponential characteristics, which are universally observed in the PSD of time series generated by nonlinear dynamical systems. The exponential gradient increases with the population size. For almost all communities, many spectral lines observed in each PSD can be fully assigned to linear combinations of several fundamental periods, suggesting that the measles data are substantially noise-free. The optimum least-squares fitting curve calculated using these fundamental periods essentially reproduces the underlying variation of the measles data, and an extension of the curve can be used to predict measles epidemics. For the communities with large population sizes, some PSD patterns obtained from segment time series analysis show a close resemblance to the PSD patterns at the initial stages of a period-doubling bifurcation process for the so-called susceptible/exposed/infectious/recovered (SEIR) model with seasonal forcing. The meaning of the relationship between the exponential gradient and the population size is discussed.

  19. Modeling of magnitude distributions by the generalized truncated exponential distribution

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-01-01

    The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
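
    A minimal sketch of the plain TED that the GTED generalizes, assuming the usual Gutenberg-Richter parameterization with rate beta = b·ln(10) above a lower magnitude m0 and an upper bound m_max; the parameter values are illustrative only.

      import numpy as np

      def ted_pdf(m, beta, m0, m_max):
          """Truncated exponential magnitude density on [m0, m_max]."""
          m = np.asarray(m, dtype=float)
          norm = 1.0 - np.exp(-beta * (m_max - m0))
          pdf = beta * np.exp(-beta * (m - m0)) / norm
          return np.where((m >= m0) & (m <= m_max), pdf, 0.0)

      print(ted_pdf([4.0, 5.5, 7.0], beta=2.3, m0=4.0, m_max=7.5))  # b ~ 1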

  20. Analysis of volumetric response of pituitary adenomas receiving adjuvant CyberKnife stereotactic radiosurgery with the application of an exponential fitting model.

    PubMed

    Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan

    2017-01-01

    Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimating the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as progression for a >15% volume increase, regression for a >15% volume decrease, and stabilization for changes within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% over the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated from magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients receiving CK SRS, and it was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to the relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled.
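
    One plausible reading of such an exponential fit, sketched with three volume measurements (hence "three-point"): V(t) = Vf + (V0 - Vf)·exp(-k·t). The plateau parameter Vf is our assumption, and the observation values are invented, not patient data.

      import numpy as np
      from scipy.optimize import curve_fit

      def decay(t, V0, Vf, k):
          return Vf + (V0 - Vf) * np.exp(-k * t)

      t_obs = np.array([0.0, 10.0, 36.0])  # months after CK SRS
      v_obs = np.array([4.2, 3.1, 2.4])    # tumor volume, cm^3

      popt, _ = curve_fit(decay, t_obs, v_obs, p0=[4.0, 2.0, 0.1])
      print("fitted V0, Vf, k:", popt)
      print("extrapolated volume at 60 months:", decay(60.0, *popt))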

  1. Reproducibility of the exponential rise technique of CO(2) rebreathing for measuring P(v)CO(2) and C(v)CO(2 )to non-invasively estimate cardiac output during incremental, maximal treadmill exercise.

    PubMed

    Cade, W Todd; Nabar, Sharmila R; Keyser, Randall E

    2004-05-01

    The purpose of this study was to determine the reproducibility of the indirect Fick method for the measurement of mixed venous carbon dioxide partial pressure (P(v)CO(2)) and venous carbon dioxide content (C(v)CO(2)) for estimation of cardiac output (Q(c)), using the exponential rise method of carbon dioxide rebreathing, during non-steady-state treadmill exercise. Ten healthy participants (eight female and two male) performed three incremental, maximal exercise treadmill tests to exhaustion within 1 week. Non-invasive Q(c) measurements were evaluated at rest, during each 3-min stage, and at peak exercise, across three identical treadmill tests, using the exponential rise technique for measuring mixed venous PCO(2) and CCO(2) and estimating venous-arterio carbon dioxide content difference (C(v-a)CO(2)). Measurements were divided into measured or estimated variables [heart rate (HR), oxygen consumption (VO(2)), volume of expired carbon dioxide (VCO(2)), end-tidal carbon dioxide (P(ET)CO(2)), arterial carbon dioxide partial pressure (P(a)CO(2)), venous carbon dioxide partial pressure ( P(v)CO(2)), and C(v-a)CO(2)] and cardiorespiratory variables derived from the measured variables [Q(c), stroke volume (V(s)), and arteriovenous oxygen difference ( C(a-v)O(2))]. In general, the derived cardiorespiratory variables demonstrated acceptable (R=0.61) to high (R>0.80) reproducibility, especially at higher intensities and peak exercise. Measured variables, excluding P(a)CO(2) and C(v-a)CO(2), also demonstrated acceptable (R=0.6 to 0.79) to high reliability. The current study demonstrated acceptable to high reproducibility of the exponential rise indirect Fick method in measurement of mixed venous PCO(2) and CCO(2) for estimation of Q(c) during incremental treadmill exercise testing, especially at high-intensity and peak exercise.

  2. Topological Defects in Double Exchange Materials and Anomalous Hall Resistance.

    NASA Astrophysics Data System (ADS)

    Calderón, M. J.; Brey, L.

    2000-03-01

    Recently it has been proposed that the anomalous Hall effect observed in Double Exchange materials is due to Berry phase effects caused by carrier hopping in a nontrivial spin background (J. Ye et al., Phys. Rev. Lett. 83, 3737 (1999)). In order to study this possibility we have performed Monte Carlo simulations of the Double Exchange model and we have computed, as a function of temperature, the number of topological defects in the system and the internal gauge magnetic field associated with these defects. In the simplest Double Exchange model the gauge magnetic field is random, and its average value is zero. The inclusion of spin-orbit coupling in the problem privileges the direction opposite to the magnetization, and an anomalous Hall resistance (AHR) effect arises. We have computed the AHR and obtained its temperature dependence. In agreement with previous experiments, we find that the AHR increases exponentially at low temperature and presents a maximum at a temperature slightly higher than the critical temperature.

  3. Survival Bayesian Estimation of Exponential-Gamma Under Linex Loss Function

    NASA Astrophysics Data System (ADS)

    Rizki, S. W.; Mara, M. N.; Sulistianingsih, E.

    2017-06-01

    This paper elaborates a study of cancer patients after treatment, with censored data, using Bayesian estimation under the LINEX loss function for a survival model assumed to follow an exponential distribution. Combining a Gamma prior with the exponential likelihood yields a Gamma posterior distribution. The posterior distribution is used to find the estimator λ̂_BL via the LINEX approximation. From λ̂_BL, the estimators of the hazard function ĥ_BL and the survival function Ŝ_BL can be found. Finally, we compare maximum likelihood estimation (MLE) and the LINEX approximation to find the better method for this observation, by identifying the smaller MSE. The MSEs of the hazard and survival estimates under MLE are 2.91728E-07 and 0.000309004, while under Bayesian LINEX they are 2.8727E-07 and 0.000304131, respectively. We conclude that the Bayesian LINEX estimator is better than MLE.
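
    For a Gamma(alpha, beta) posterior (shape/rate), the Bayes estimator of the exponential rate under LINEX loss has the closed form λ̂_BL = -(1/a)·ln E[exp(-a·λ)] = (alpha/a)·ln(1 + a/beta). A short sketch, with the prior hyperparameters, LINEX shape a, and data all chosen arbitrarily; censoring is ignored here for brevity.

      import math

      def linex_rate_estimate(times, alpha0=1.0, beta0=1.0, a=0.5):
          # Gamma prior + exponential likelihood -> Gamma posterior
          alpha = alpha0 + len(times)
          beta = beta0 + sum(times)
          lam_bl = (alpha / a) * math.log(1.0 + a / beta)  # LINEX estimator
          return lam_bl, alpha / beta                      # vs posterior mean

      times = [2.1, 0.7, 3.4, 1.9, 5.0]  # survival times (illustrative)
      print(linex_rate_estimate(times))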

  4. Exponential Increase in Relative Biological Effectiveness Along Distal Edge of a Proton Bragg Peak as Measured by Deoxyribonucleic Acid Double-Strand Breaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuaron, John J., E-mail: cuaronj@mskcc.org; Chang, Chang; Lovelock, Michael

    2016-05-01

    Purpose: To quantify the relative biological effectiveness (RBE) of the distal edge of the proton Bragg peak, using an in vitro assay of DNA double-strand breaks (DSBs). Methods and Materials: U2OS cells were irradiated within the plateau of a spread-out Bragg peak and at each millimeter position along the distal edge using a custom slide holder, allowing for simultaneous measurement of physical dose. A reference radiation signal was generated using photons. The DNA DSBs at 3 hours (to assess for early damage) and at 24 hours (to assess for residual damage and repair) after irradiation were measured using the γH2AX assay and quantified via flow cytometry. Results were confirmed with clonogenic survival assays. A detailed map of the RBE as a function of depth along the Bragg peak was generated using γH2AX measurements as a biological endpoint. Results: At 3 hours after irradiation, DNA DSBs were higher with protons at every point along the distal edge compared with samples irradiated with photons to similar doses. This effect was even more pronounced after 24 hours, indicating that the impact of DNA repair is less after proton irradiation relative to photons. The RBE demonstrated an exponential increase as a function of depth and was measured to be as high as 4.0 after 3 hours and as high as 6.0 after 24 hours. When the RBE-corrected dose was plotted as a function of depth, the peak effective dose was extended 2-3 mm beyond what would be expected with physical measurement. Conclusions: We generated a highly comprehensive map of the RBE of the distal edge of the Bragg peak, using a direct assay of DNA DSBs in vitro. Our data show that the RBE of the distal edge increases with depth and is significantly higher than previously reported estimates.

  5. Prediction of Unsteady Aerodynamic Coefficients at High Angles of Attack

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Murphy, Patrick C.; Klein, Vladislav; Brandon, Jay M.

    2001-01-01

    The nonlinear indicial response method is used to model the unsteady aerodynamic coefficients in low-speed longitudinal oscillatory wind tunnel test data for a 0.1-scale model of the F-16XL aircraft. Exponential functions are used to approximate the deficiency function in the indicial response. Using one set of oscillatory wind tunnel data and a parameter identification method, the unknown parameters in the exponential functions are estimated. A genetic algorithm is used as the least-squares minimization algorithm. The assumed model structures and parameter estimates are validated by comparing the predictions with other sets of available oscillatory wind tunnel test data.

  6. On Using Exponential Parameter Estimators with an Adaptive Controller

    NASA Technical Reports Server (NTRS)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    Typical adaptive controllers are restricted to using a specific update law to generate parameter estimates. This paper investigates the possibility of using any exponential parameter estimator with an adaptive controller such that the system tracks a desired trajectory. The goal is to provide flexibility in choosing any update law suitable for a given application. The development relies on a previously developed concept of controller/update law modularity in the adaptive control literature, and the use of a converse Lyapunov-like theorem. Stability analysis is presented to derive gain conditions under which this is possible, and inferences are made about the tracking error performance. The development is based on a class of Euler-Lagrange systems that are used to model various engineering systems including space robots and manipulators.

  7. A multi-tracer study in the Hutton Sandstone aquifer, Australia: How "wrong ages" give us deeper insights into aquifer structure and effective deep recharge to a double porosity system

    NASA Astrophysics Data System (ADS)

    Suckow, Axel; Taylor, Andrew; Davies, Phil; Leaney, Fred

    2017-04-01

    Depressurisation of coal seams in the Walloon Coal Measures in Queensland, Australia, may influence aquifers both over- and underlying the formation. The Gubberamunda Sandstone aquifer, which overlies the Walloon Coal Measures, is the starting point of the Great Artesian Basin (GAB) flow system and has been the focus of numerous recharge studies. In comparison, the Hutton Sandstone aquifer, which underlies the Walloon Coal Measures, has received much less attention. This aquifer however, is the main supply of stock water for the beef industry in the area. A multi-environmental tracer study of the Hutton Sandstone aquifer was undertaken at the Mimosa Syncline and was complemented by a few samples taken from the underlying Precipice Sandstone aquifer. This multi-tracer study (comprising 18O, 2H, 3H, CFCs, SF6, 14C, 36Cl, and 4He) demonstrated that the Hutton Sandstone aquifer behaves as a double porosity system. At the regional scale, the system features a relatively small fraction of conductive rock within a fairly large fraction of low permeability rock. Tracer migration therefore occurs mainly by advection in the conductive fraction and mainly by diffusion in the low-permeability fraction of the aquifer. Groundwater flow velocities, derived from exponential decrease of 14C and 36Cl concentrations with distance, differ by a factor of ten and therefore do not indicate the real groundwater flow velocity. However, accounting for a double porosity interpretation of the tracer data leads to a single groundwater flow velocity that is consistent with all observed data. Advective velocity in this double porosity model differs from face value flow velocities derived from 14C and 36Cl by a factor of 4 and 40 respectively. As a consequence of this interpretation, the deeper groundwater flow system of the Hutton Sandstone aquifer is estimated to receive only 3% of the recharge previously estimated using the Chloride Mass Balance approach at the intake beds. The other 97% is assumed to be rejected recharge which discharges through spring complexes in the Surat Basin and contributes to base flow of the Dawson River. This interpretation also suggests: 1) that the Hutton Sandstone aquifer is potentially more vulnerable to impacts from groundwater abstraction, including from stock and domestic water supply and coal seam gas production, than previously anticipated; 2) that other "groundwater age records" around the world likely observe similar double porosity effects and their apparent ages may be similarly distorted; and 3) that the multi-tracer approach used here is a suitable method for identifying other previously unknown double porosity aquifer systems and can potentially quantify deep effective recharge where important water resources are subject of economic development.

  8. Does the Australian desert ant Melophorus bagoti approximate a Lévy search by an intrinsic bi-modal walk?

    PubMed

    Reynolds, Andy M; Schultheiss, Patrick; Cheng, Ken

    2014-01-07

    We suggest that the Australian desert ant Melophorus bagoti approximates a Lévy search pattern by using an intrinsic bi-exponential walk, and does so when a Lévy search pattern is advantageous. When attempting to locate its nest, M. bagoti adopts a stereotypical search pattern. These searches begin at the location where the ant expects to find the nest and comprise loops that start and end at this location and are directed in different azimuthal directions. Loop lengths are exponentially distributed when searches are in visually familiar surroundings and are well described by a mixture of two exponentials when searches are in unfamiliar landscapes. The latter approximates a power-law distribution, the hallmark of a Lévy search. With the aid of a simple analytically tractable theory, we show that an exponential loop-length distribution is advantageous when the distance to the nest can be estimated with some certainty, and that a bi-exponential distribution is advantageous when there is considerable uncertainty regarding the nest location. The best bi-exponential search patterns are shown to be those that come closest to approximating advantageous Lévy looping searches. The bi-exponential search patterns of M. bagoti are found to approximate advantageous Lévy search patterns.

  9. Count distribution for mixture of two exponentials as renewal process duration with applications

    NASA Astrophysics Data System (ADS)

    Low, Yeh Ching; Ong, Seng Huat

    2016-06-01

    A count distribution is presented by considering a renewal process in which the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and of the renewal function (the expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
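
    A small simulation of the renewal count described above, with durations drawn from a two-component (hyperexponential) mixture; a dispersion index above 1 exhibits the overdispersion a Poisson process cannot capture. The mixture weight and rates are arbitrary.

      import numpy as np

      rng = np.random.default_rng(1)

      def renewal_count(T, p=0.3, rate1=5.0, rate2=0.5):
          """Number of renewals in (0, T] with mixed-exponential durations."""
          t, n = 0.0, 0
          while True:
              rate = rate1 if rng.random() < p else rate2
              t += rng.exponential(1.0 / rate)
              if t > T:
                  return n
              n += 1

      counts = np.array([renewal_count(10.0) for _ in range(5000)])
      print("mean:", counts.mean(), "dispersion:", counts.var() / counts.mean())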

  10. Time prediction of failure a type of lamps by using general composite hazard rate model

    NASA Astrophysics Data System (ADS)

    Riaman; Lesmana, E.; Subartini, B.; Supian, S.

    2018-03-01

    This paper discusses estimation of a basic survival model to obtain the average predicted failure time of lamps. The estimate is for a parametric model, the general composite hazard rate model. The exponential distribution, which has a constant hazard function, is used as the baseline model for the random failure time. We discuss an example of survival model estimation for a composite hazard function with an exponential basis. The model is estimated by estimating its parameters through construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the average failure time for this type of lamp: the data are grouped into several intervals, the average failure value in each interval is computed, and the average failure time of the model is calculated on each interval. The p-value obtained from the test is 0.3296.

  11. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    PubMed

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling , or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  12. Observers for a class of systems with nonlinearities satisfying an incremental quadratic inequality

    NASA Technical Reports Server (NTRS)

    Acikmese, Ahmet Behcet; Corless, Martin

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. Observers are presented which guarantee that the state estimation error converges exponentially to zero.

  13. PSA doubling time of prostate carcinoma managed with watchful observation alone.

    PubMed

    Choo, R; DeBoer, G; Klotz, L; Danjoux, C; Morton, G C; Rakovitch, E; Fleshner, N; Bunting, P; Kapusta, L; Hruby, G

    2001-07-01

    To study prostate-specific antigen (PSA) doubling time of untreated, favorable grade, prostate carcinoma. A prospective single-arm cohort study has been in progress to assess the feasibility of a watchful observation protocol with selective delayed intervention using clinical, histologic, or PSA progression as treatment indication in untreated, localized, favorable grade prostate adenocarcinoma (T1b-T2bN0 M0, Gleason Score < or = 7, and PSA < or = 15 ng/mL). Patients are conservatively managed with watchful observation alone, as long as they do not meet the arbitrarily defined disease progression criteria. Patients are followed regularly and undergo blood tests including PSA at each visit. PSA doubling time (Td) is estimated from a linear regression of ln(PSA) on time, assuming a simple exponential growth model. As of March 2000, 134 patients have been on the study for a minimum of 12 months (median, 24; range, 12-52) and have a median frequency of PSA measurement of 7 times (range, 3-15). Median age is 70 years. Median PSA at enrollment is 6.3 (range, 0.5-14.6). The distribution of Td is as follows: <2 years, 19 patients; 2-5 years, 46; 5-10 years, 25; 10-20 years, 11; 20-50 years, 6; > 50 years, 27. The median Td is 5.1 years. In 44 patients (33%), Td is greater than 10 years. There was no correlation between Td and patient age, clinical T stage, Gleason score, or initial PSA level. Td of untreated prostate cancer varies widely. In our cohort, 33% have Td > 10 years. Td may be a useful tool to guide treatment intervention for patients managed conservatively with watchful observation alone.
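
    The Td computation described above is a two-line calculation once ln(PSA) is regressed on time: under the growth model PSA(t) = PSA0·exp(b·t), Td = ln(2)/b. The PSA series below is invented for illustration.

      import numpy as np

      t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])    # years from enrollment
      psa = np.array([6.3, 6.8, 7.6, 8.1, 9.0])  # ng/mL

      slope, _ = np.polyfit(t, np.log(psa), 1)   # b in ln PSA = ln PSA0 + b*t
      print(f"PSA doubling time: {np.log(2) / slope:.1f} years")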

  14. Hardware accelerator of convolution with exponential function for image processing applications

    NASA Astrophysics Data System (ADS)

    Panchenko, Ivan; Bucha, Victor

    2015-12-01

    In this paper we describe a Hardware Accelerator (HWA) for fast recursive approximation of separable convolution with an exponential function. This filter can be used in many Image Processing (IP) applications, e.g. depth-dependent image blur, image enhancement and disparity estimation. We have tuned the RTL implementation of this filter to provide maximum throughput within the constraints of the required memory bandwidth and hardware resources, yielding a power-efficient VLSI implementation.
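
    The recursive trick such an accelerator exploits can be shown in scalar form: convolution with a two-sided exponential kernel a^|i-j| reduces to one causal and one anti-causal first-order recursion, so the cost per sample is constant regardless of the kernel width. This is a software sketch of the principle only, not the RTL design from the paper.

      import numpy as np

      def exp_filter_1d(x, s=8.0):
          """y[i] = sum_j a^|i-j| * x[j], with a = exp(-1/s), in O(N)."""
          a = np.exp(-1.0 / s)
          y = np.empty(len(x))
          acc = 0.0
          for i in range(len(x)):              # causal sweep: terms j <= i
              acc = x[i] + a * acc
              y[i] = acc
          acc = 0.0
          for i in range(len(x) - 1, -1, -1):  # anti-causal sweep: terms j > i
              y[i] += acc                      # center sample counted once
              acc = a * (x[i] + acc)
          return y

      print(exp_filter_1d(np.array([0.0, 0.0, 1.0, 0.0, 0.0]), s=2.0))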

  15. Life prediction for high temperature low cycle fatigue of two kinds of titanium alloys based on exponential function

    NASA Astrophysics Data System (ADS)

    Mu, G. Y.; Mi, X. Z.; Wang, F.

    2018-01-01

    High temperature low cycle fatigue tests of TC4 and TC11 titanium alloys were carried out under strain control. The relationships between cyclic stress and life and between strain and life are analyzed. A high temperature low cycle fatigue life prediction model for the two alloys is first established using the Manson-Coffin method. The relationship between the number of reversals to failure and the plastic strain range is nonlinear in double logarithmic coordinates, whereas the Manson-Coffin method assumes a linear relation, so a certain prediction error is unavoidable with that method. To solve this problem, a new method based on an exponential function is proposed. The results show that the fatigue life of both titanium alloys can be predicted accurately and effectively by either method, with prediction accuracy within a ±1.83-times scatter band. The new exponential-function method proves more effective and accurate than the Manson-Coffin method for both alloys, giving life predictions with a smaller standard deviation and scatter band. For both methods, the life prediction results for TC4 are better than those for TC11.

  16. Atmospheric Transmittance/Radiance: Computer Code LOWTRAN 6

    DTIC Science & Technology

    1983-08-01

    (OCR-damaged excerpt.) Legible fragments cite Edlén (1966), "The refractive index of air", Metrologia 2; refer to an optical path traversing N layers in an upward or downward direction; and give an expression for cirrus normal transmittance, τ, of the form τ = exp[-(0.14 LA)] (Eq. 49), which closely duplicates the double exponential model of Davis for cirrus.

  17. How Many Conformations Need To Be Sampled To Obtain Converged QM/MM Energies? The Curse of Exponential Averaging.

    PubMed

    Ryde, Ulf

    2017-11-14

    Combined quantum mechanical and molecular mechanical (QM/MM) calculations are a popular approach to studying enzymatic reactions. They are often based on a set of structures minimized from snapshots of a molecular dynamics simulation, so as to include some dynamics of the enzyme. How the individual energies should be combined to obtain a final estimate of the energy has been much discussed, but the current consensus seems to be to use an exponential average. The question is then how many snapshots are needed to reach a reliable estimate of the energy. In this paper, I show that the question can easily be answered if it is assumed that the energies follow a Gaussian distribution. The outcome can then be simulated from a single parameter, σ, the standard deviation of the QM/MM energies over the various snapshots, and the number of required snapshots can be estimated once the desired accuracy and confidence of the result have been specified. Results for various parameters are presented, and it is shown that many more snapshots are required than is normally assumed. The number can be reduced by employing a cumulant approximation to second order. Most convergence criteria are shown to work poorly, owing to the very bad conditioning of the exponential average when σ is large (more than ∼7 kJ/mol), because the energies that contribute most to the exponential average have a very low probability. On the other hand, σ itself serves as an excellent convergence criterion.
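
    The conditioning problem is easy to reproduce numerically: draw Gaussian energies with standard deviation sigma and compare the exponential average against its analytic value, which is -sigma^2/(2kT) for a zero-mean Gaussian. The sigma values and snapshot counts below are illustrative choices, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(42)
      kT = 2.494  # kJ/mol at ~300 K

      for sigma in (2.0, 7.0, 15.0):
          est = []
          for _ in range(200):                 # 200 independent repeats
              e = rng.normal(0.0, sigma, 100)  # 100 snapshots each
              est.append(-kT * np.log(np.mean(np.exp(-e / kT))))
          exact = -sigma**2 / (2 * kT)
          print(f"sigma={sigma:4.1f}  exact={exact:7.2f}  "
                f"mean={np.mean(est):7.2f}  spread={np.std(est):5.2f}")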

  18. Quantifying Uncertainties in N2O Emission Due to N Fertilizer Application in Cultivated Areas

    PubMed Central

    Philibert, Aurore; Loyce, Chantal; Makowski, David

    2012-01-01

    Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty on this estimated value, by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable “applied N”, (ii) the function relating N2O emission to applied N (exponential or linear function), (iii) fixed or random background (i.e. in the absence of N application) N2O emission and (iv) fixed or random applied N effect. We calculated ranges of uncertainty on N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha−1. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced. PMID:23226430

  1. Kinetics of DNA tile dimerization.

    PubMed

    Jiang, Shuoxing; Yan, Hao; Liu, Yan

    2014-06-24

    Investigating how individual molecular components interact with one another within DNA nanoarchitectures, both in terms of their spatial and temporal interactions, is fundamentally important for a better understanding of their physical behaviors. This will provide researchers with valuable insight for designing more complex higher-order structures that can be assembled more efficiently. In this report, we examined several spatial factors that affect the kinetics of bivalent, double-helical (DH) tile dimerization, including the orientation and number of sticky ends (SEs), the flexibility of the double helical domains, and the size of the tiles. The rate constants we obtained confirm our hypothesis that increased nucleation opportunities and well-aligned SEs accelerate tile-tile dimerization. Increased flexibility in the tiles causes slower dimerization rates, an effect that can be reversed by introducing restrictions to the tile flexibility. The higher dimerization rates of more rigid tiles results from the opposing effects of higher activation energies and higher pre-exponential factors from the Arrhenius equation, where the pre-exponential factor dominates. We believe that the results presented here will assist in improved implementation of DNA tile based algorithmic self-assembly, DNA based molecular robotics, and other specific nucleic acid systems, and will provide guidance to design and assembly processes to improve overall yield and efficiency.

  2. Compatible estimators of the components of change for a rotating panel forest inventory design

    Treesearch

    Francis A. Roesch

    2007-01-01

    This article presents two approaches for estimating the components of forest change utilizing data from a rotating panel sample design. One approach uses a variant of the exponentially weighted moving average estimator and the other approach uses mixed estimation. Three general transition models were each combined with a single compatibility model for the mixed...

  3. Gradient-based stochastic estimation of the density matrix

    NASA Astrophysics Data System (ADS)

    Wang, Zhentao; Chern, Gia-Wei; Batista, Cristian D.; Barros, Kipton

    2018-03-01

    Fast estimation of the single-particle density matrix is key to many applications in quantum chemistry and condensed matter physics. The best numerical methods leverage the fact that the density-matrix elements f(H)_ij decay rapidly with the distance r_ij between orbitals. This decay is usually exponential. However, for the special case of metals at zero temperature, algebraic decay of the density matrix appears and poses a significant numerical challenge. We introduce a gradient-based probing method to estimate all local density matrix elements at a computational cost that scales linearly with system size. For zero-temperature metals, the stochastic error scales like S^(-(d+2)/(2d)), where d is the dimension and S is a prefactor to the computational cost. The convergence becomes exponential if the system is at finite temperature or is insulating.

  4. Compensation of strong thermal lensing in high-optical-power cavities.

    PubMed

    Zhao, C; Degallaix, J; Ju, L; Fan, Y; Blair, D G; Slagmolen, B J J; Gray, M B; Lowry, C M Mow; McClelland, D E; Hosken, D J; Mudge, D; Brooks, A; Munch, J; Veitch, P J; Barton, M A; Billingsley, G

    2006-06-16

    In an experiment to simulate the conditions in high-optical-power advanced gravitational wave detectors, we show for the first time that the time evolution of strong thermal lenses follows the predicted infinite sum of exponentials (well approximated by a double exponential), and that such lenses can be compensated using an intracavity compensation plate heated on its cylindrical surface. We show that a high finesse of approximately 1400 can be achieved in cavities with internal compensation plates, and that mode matching can be maintained. The experiment achieves a wave front distortion similar to that expected for the input test mass substrate in the Advanced Laser Interferometer Gravitational Wave Observatory, and shows that thermal compensation schemes are viable. The measurements also allow a direct determination of the substrate optical absorption in the test mass and the compensation plate.

  5. Objective-function hybridization in adjoint seismic tomography

    NASA Astrophysics Data System (ADS)

    Yuan, Yanhua O.; Bozdaǧ, Ebru; Simons, Frederik J.; Gao, Fuchun

    2017-04-01

    Seismic tomography is at the threshold of a new era of massive data sets. Improving the resolution and accuracy of the estimated Earth structure by assimilating as much information as possible from every seismogram remains a challenge. We propose the use of the "exponentiated phase", a type of measurement that robustly captures the information contained in the variation of phase with time in the seismogram. We explore its performance in both conventional and double-difference (Yuan, Simons & Tromp, Geophys. J. Int., 2016) adjoint seismic tomography. We introduce a hybrid approach that combines different objective functions, taking advantage of both conventional and our new measurements. We initially focus on phase measurements in global tomography. Cross-correlation measurements are generally tailored by window-selection algorithms, such as FLEXWIN, to balance amplitude differences between seismic phases. However, within selection windows, such measurements still favor the larger-amplitude phases. It is also difficult to select all usable portions of the seismogram in an optimal way, so that much information may be lost, particularly from scattered waves. Time-continuous phase measurements, which associate a time shift with each point in time, have the potential to extract information from every wiggle in the seismogram without cutting it into small pieces. One such measurement is the instantaneous phase (Bozdağ, Trampert & Tromp, Geophys. J. Int., 2011), which thus far has not been implemented in realistic seismic-tomography experiments, given how difficult the computation of phase can sometimes be. The exponentiated phase, on the other hand, is computed on the basis of the normalized analytic signal, needs no explicit measure of phase, and is thus much easier to implement and more practical for real-world applications. Both types of measurement carry comparable structural information when direct measurements of the phase are not wrapped. To deal with cycle skips, we use the exponentiated phase to take into account relatively small-magnitude scattered waves at long periods, while using cross-correlation measurements on windows determined by FLEXWIN to select distinct body-wave arrivals without complicating the measurements through nonlinearities at short periods. We present synthetic experiments to show how exponentiated-phase and cross-correlation measurements, and their hybridization, affect tomographic results. We demonstrate the use of hybrid measurements on teleseismic seismograms, in which surface waves are prominent, for continental and global seismic imaging. The exponentiated-phase measurements behave well and provide a better representation of the smaller phases in the adjoint sources required for the computation of the misfit gradient. The combination of two different types of phase measurements in a hybrid approach moves us towards using all of the available information in a data set, addressing data-quality and measurement challenges simultaneously, while negligibly affecting computation time.
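
    The measurement itself is simple to form, which is its practical appeal: the exponentiated phase is the normalized analytic signal, e^{i·phi(t)} = s_a(t)/|s_a(t)|, so no explicit (wrappable) phase is ever computed. A toy misfit between two synthetic traces, with made-up waveforms:

      import numpy as np
      from scipy.signal import hilbert

      def exponentiated_phase(trace):
          analytic = hilbert(trace)          # s_a(t) = s(t) + i*H[s](t)
          return analytic / np.abs(analytic)

      t = np.linspace(0.0, 10.0, 2000)
      obs = np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.1 * t)
      syn = np.sin(2 * np.pi * 0.5 * (t - 0.2)) * np.exp(-0.1 * t)  # shifted

      misfit = 0.5 * np.sum(np.abs(exponentiated_phase(obs)
                                   - exponentiated_phase(syn)) ** 2)
      print("exponentiated-phase misfit:", misfit)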

  6. New exponential synchronization criteria for time-varying delayed neural networks with discontinuous activations.

    PubMed

    Cai, Zuowei; Huang, Lihong; Zhang, Lingling

    2015-05-01

    This paper investigates the problem of exponential synchronization of time-varying delayed neural networks with discontinuous neuron activations. Under the extended Filippov differential inclusion framework, by designing a discontinuous state-feedback controller and using some analytic techniques, new testable algebraic criteria are obtained to realize two different kinds of global exponential synchronization of the drive-response system. Moreover, we give the estimated rate of exponential synchronization, which depends on the delays and system parameters. The obtained results extend some previous works on synchronization of delayed neural networks not only with continuous activations but also with discontinuous activations. Finally, numerical examples are provided to show the correctness of our analysis via computer simulations. Our method and theoretical results have a leading significance in the design of synchronized neural network circuits involving discontinuous factors and time-varying delays.

  7. Biological electric fields and rate equations for biophotons.

    PubMed

    Alvermann, M; Srivastava, Y N; Swain, J; Widom, A

    2015-04-01

    Biophoton intensities depend upon the squared modulus of the electric field. Hence, we first make some general estimates about the inherent electric fields within various biosystems. Generally, these intensities do not follow a simple exponential decay law. After a brief discussion on the inapplicability of a linear rate equation that leads to strict exponential decay, we study other, nonlinear rate equations that have been successfully used for biosystems along with their physical origins when available.

  8. Stability in Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Cao, Jinde; Song, Qiankun

    2006-07-01

    In this paper, the exponential stability problem is investigated for a class of Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays. By using analysis methods, inequality techniques and the properties of M-matrices, several novel sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point are derived. Moreover, the exponential convergence rate is estimated. The obtained results are less restrictive than those given in the earlier literature: the usual assumptions of boundedness and differentiability of the activation functions, and of differentiability of the time-varying delays, are removed. Two examples with their simulations are given to show the effectiveness of the obtained results.

  9. Impulsive effect on global exponential stability of BAM fuzzy cellular neural networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Li, Kelin

    2010-02-01

    In this article, a class of impulsive bidirectional associative memory (BAM) fuzzy cellular neural networks (FCNNs) with time-varying delays is formulated and investigated. By employing a delay differential inequality and M-matrix theory, some sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point for impulsive BAM FCNNs with time-varying delays are obtained. In particular, a precise estimate of the exponential convergence rate is also provided, which depends on the system parameters and impulsive perturbation intensity. It is believed that these results are significant and useful for the design and applications of BAM FCNNs. An example is given to show the effectiveness of the results obtained here.

  10. Estimating Distances from Parallaxes. II. Performance of Bayesian Distance Estimators on a Gaia-like Catalogue

    NASA Astrophysics Data System (ADS)

    Astraatmadja, Tri L.; Bailer-Jones, Coryn A. L.

    2016-12-01

    Estimating a distance by inverting a parallax is only valid in the absence of noise. As most stars in the Gaia catalog will have non-negligible fractional parallax errors, we must treat distance estimation as a constrained inference problem. Here we investigate the performance of various priors for estimating distances, using a simulated Gaia catalog of one billion stars. We use three minimalist, isotropic priors, as well as an anisotropic prior derived from the observability of stars in a Milky Way model. The two priors that assume a uniform distribution of stars—either in distance or in space density—give poor results: the root mean square fractional distance error, f_rms, grows far in excess of 100% once the fractional parallax error, f_true, is larger than 0.1. A prior assuming an exponentially decreasing space density with increasing distance performs well once its single parameter—the scale length—has been set to an appropriate value: f_rms is roughly equal to f_true for f_true < 0.4, yet does not increase further as f_true increases up to 1.0. The Milky Way prior performs well except toward the Galactic center, due to a mismatch with the (simulated) data. Such mismatches will be inevitable (and remain unknown) in real applications, and can produce large errors. We therefore suggest adopting the simpler exponentially decreasing space density prior, which is also less time-consuming to compute. Including Gaia photometry improves the distance estimation significantly for both the Milky Way and exponentially decreasing space density priors, yet doing so requires additional assumptions about the physical nature of stars.
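
    A minimal sketch of the exponentially decreasing space density prior in action: the unnormalized posterior over distance is the product of that prior with a Gaussian parallax likelihood, and its mode can be found on a grid. The scale length value and the function name are our assumptions, not the paper's code.

```python
import numpy as np

def edsd_mode(parallax_mas, sigma_mas, L=1.35, r_max=50.0, n=200_000):
    """Grid-search mode of the distance posterior under the
    exponentially decreasing space density (EDSD) prior,
    P(r) ~ r**2 * exp(-r/L), with a Gaussian parallax likelihood.
    Distances in kpc, parallaxes in mas (so 1/r is in mas);
    L = 1.35 kpc is an assumed scale length."""
    r = np.linspace(1e-4, r_max, n)
    ln_post = 2*np.log(r) - r/L - (parallax_mas - 1.0/r)**2 / (2*sigma_mas**2)
    return r[np.argmax(ln_post)]

# e.g. a star with parallax 2 mas and a 20% fractional error:
print(edsd_mode(2.0, 0.4))   # mode near 1/parallax = 0.5 kpc
```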

  11. Estimating piecewise exponential frailty model with changing prior for baseline hazard function

    NASA Astrophysics Data System (ADS)

    Thamrin, Sri Astuti; Lawi, Armin

    2016-02-01

    Piecewise exponential models provide a very flexible framework for modelling univariate survival data. They can be used to estimate the effects of the different covariates that influence survival. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of parametric baseline hazard. In the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. However, such variables usually do not explain all of the variation in survival, and the remaining unknown or unmeasurable risk factors are interesting to consider. This unknown and unobservable component of the hazard function is often termed the individual's heterogeneity or frailty. This paper analyses the effects of unobserved population heterogeneity on patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov chain Monte Carlo method to compute the Bayesian estimator on kidney infection data. The results show that sex and frailty are substantially associated with survival in this study, and that the models are quite sensitive to the choice of the two different priors.
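
    For concreteness, a piecewise exponential model has a piecewise-constant hazard, so the survival function is the exponential of the accumulated hazard. A minimal sketch with our own naming (a frailty term would simply multiply every hazard by an individual factor z):

```python
import numpy as np

def pe_survival(t, cuts, hazards):
    """Survival S(t) = exp(-H(t)) for a piecewise exponential model.

    cuts    : interval boundaries [t1, t2, ...] (0 is implicit at the start)
    hazards : constant hazard on each of the len(cuts)+1 intervals
    """
    edges = np.concatenate(([0.0], cuts, [np.inf]))
    H = 0.0
    for lo, hi, lam in zip(edges[:-1], edges[1:], hazards):
        H += lam * np.clip(t - lo, 0.0, hi - lo)   # exposure in this interval
    return np.exp(-H)

# e.g. hazard 0.2 before t = 1 and 0.05 afterwards:
print(pe_survival(2.0, cuts=[1.0], hazards=[0.2, 0.05]))  # exp(-(0.2 + 0.05))
```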

  12. Estimating Distances from Parallaxes. III. Distances of Two Million Stars in the Gaia DR1 Catalogue

    NASA Astrophysics Data System (ADS)

    Astraatmadja, Tri L.; Bailer-Jones, Coryn A. L.

    2016-12-01

    We infer distances and their asymmetric uncertainties for two million stars using the parallaxes published in the Gaia DR1 (GDR1) catalogue. We do this with two distance priors: A minimalist, isotropic prior assuming an exponentially decreasing space density with increasing distance, and an anisotropic prior derived from the observability of stars in a Milky Way model. We validate our results by comparing our distance estimates for 105 Cepheids which have more precise, independently estimated distances. For this sample we find that the Milky Way prior performs better (the rms of the scaled residuals is 0.40) than the exponentially decreasing space density prior (rms is 0.57), although for distances beyond 2 kpc the Milky Way prior performs worse, with a bias in the scaled residuals of -0.36 (versus -0.07 for the exponentially decreasing space density prior). We do not attempt to include the photometric data in GDR1 due to the lack of reliable color information. Our distance catalog is available at http://www.mpia.de/homes/calj/tgas_distances/main.html as well as at CDS. This should only be used to give individual distances. Combining data or testing models should be done with the original parallaxes, and attention paid to correlated and systematic uncertainties.

  13. A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times

    PubMed Central

    Heath, Tracy A.

    2012-01-01

    In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343

  14. Very slow lava extrusion continued for more than five years after the 2011 Shinmoedake eruption observed from SAR interferometry

    NASA Astrophysics Data System (ADS)

    Ozawa, T.; Miyagi, Y.

    2017-12-01

    Shinmoe-dake, located in SW Japan, erupted in January 2011, and lava accumulated in the crater (e.g., Ozawa and Kozono, EPS, 2013). The last Vulcanian eruption occurred in September 2011, and no eruption has occurred since. Miyagi et al. (GRL, 2014) analyzed TerraSAR-X and Radarsat-2 SAR data acquired after the last eruption and found continuous inflation in the crater. The inflation decayed with time but had not terminated by May 2013. Since the time series of the inflation volume change rate fitted well to an exponential function with a constant term, we suggested that lava extrusion had continued over the long term, driven by deflation of a shallow magma source and by magma supply from a deeper source. To investigate the subsequent deformation, we applied InSAR to Sentinel-1 and ALOS-2 SAR data. The inflation decayed further and had almost terminated by the end of 2016, meaning that this deformation continued for more than five years after the last eruption. We have found that the time series of the inflation volume change rate fits better to a double-exponential function than to a single-exponential function with a constant term. The exponential component with the short time constant had almost settled within one year of the last eruption. Although the InSAR result from TerraSAR-X data of November 2011 and May 2013 indicated deflation of a shallow source under the crater, such deformation has not been obtained from recent SAR data. This suggests that this component was due to deflation of a shallow magma source with excess pressure. In this study, we found that the long-term component may also have decayed exponentially; this factor may reflect deflation of a deep source or delayed vesiculation.
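
    The model comparison described here amounts to fitting the inflation-rate time series with a single exponential plus a constant versus a sum of two exponentials. A hedged sketch using scipy's curve_fit on placeholder synthetic data (the numbers are illustrative, not the InSAR measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, a, tau, c):
    return a * np.exp(-t / tau) + c

def double_exp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# t: years since the last eruption; v: inflation volume change rate
# (placeholder synthetic data standing in for the InSAR time series)
t = np.linspace(0, 6, 40)
v = double_exp(t, 8.0, 0.5, 2.0, 4.0) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)

p1, _ = curve_fit(single_exp, t, v, p0=(8, 1, 1))
p2, _ = curve_fit(double_exp, t, v, p0=(5, 0.3, 2, 3))

for name, f, p in [("single+const", single_exp, p1), ("double", double_exp, p2)]:
    rss = np.sum((v - f(t, *p)) ** 2)
    print(name, "RSS =", round(rss, 3))   # the double exponential fits better
```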

  15. Unfolding of Ubiquitin Studied by Picosecond Time-Resolved Fluorescence of the Tyrosine Residue

    PubMed Central

    Noronha, Melinda; Lima, João C.; Bastos, Margarida; Santos, Helena; Maçanita, António L.

    2004-01-01

    The photophysics of the single tyrosine in bovine ubiquitin (UBQ) was studied by picosecond time-resolved fluorescence spectroscopy, as a function of pH and along thermal and chemical unfolding, with the following results: First, at room temperature (25°C) and below pH 1.5, native UBQ shows single-exponential decays. From pH 2 to 7, triple-exponential decays were observed and the three decay times were attributed to the presence of tyrosine, a tyrosine-carboxylate hydrogen-bonded complex, and excited-state tyrosinate. Second, at pH 1.5, the water-exposed tyrosine of either thermally or chemically unfolded UBQ decays as a sum of two exponentials. The double-exponential decays were interpreted and analyzed in terms of excited-state intramolecular electron transfer from the phenol to the amide moiety, occurring in one of the three rotamers of tyrosine in UBQ. The values of the rate constants indicate the presence of different unfolded states and an increase in the mobility of the tyrosine residue during unfolding. Finally, from the pre-exponential coefficients of the fluorescence decays, the unfolding equilibrium constants (KU) were calculated, as a function of temperature or denaturant concentration. Despite the presence of different unfolded states, both thermal and chemical unfolding data of UBQ could be fitted to a two-state model. The thermodynamic parameters Tm = 54.6°C, ΔHTm = 56.5 kcal/mol, and ΔCp = 890 cal/(mol K) were determined from the unfolding equilibrium constants calculated accordingly, and compared to values obtained by differential scanning calorimetry also under the assumption of a two-state transition, Tm = 57.0°C, ΔHm = 51.4 kcal/mol, and ΔCp = 730 cal/(mol K). PMID:15454455

  16. Analysis of volumetric response of pituitary adenomas receiving adjuvant CyberKnife stereotactic radiosurgery with the application of an exponential fitting model

    PubMed Central

    Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan

    2017-01-01

    Abstract Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as: progression for a >15% volume increase, regression for a >15% volume decrease, and stabilization for changes within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% in the 36-month (range 18–87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients and was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled. PMID:28121913
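
    One way to read the "three-point exponential model" is that a decay V(t) = V_inf + (V0 - V_inf) * exp(-k t) is fully determined by three equally spaced volume measurements. The closed-form recovery below is a standard algebraic identity, offered as an interpretation rather than as the authors' exact procedure:

```python
import numpy as np

def three_point_exponential(v0, v1, v2, dt):
    """Recover (v_inf, k) of V(t) = v_inf + (v0 - v_inf) * exp(-k t)
    from three volumes measured at equal spacing dt. Assumes the
    three points are not collinear and the decay ratio is positive."""
    q = (v2 - v1) / (v1 - v0)                        # equals exp(-k * dt)
    v_inf = (v0 * v2 - v1 ** 2) / (v0 + v2 - 2 * v1) # asymptotic volume
    k = -np.log(q) / dt
    return v_inf, k

# e.g. volumes (cm^3) at 4-month spacing:
print(three_point_exponential(3.0, 2.2, 1.8, dt=4.0))  # -> (1.4, ~0.173)
```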

  17. Numerical analysis of spectral properties of coupled oscillator Schroedinger operators. I - Single and double well anharmonic oscillators

    NASA Technical Reports Server (NTRS)

    Isaacson, D.; Isaacson, E. L.; Paes-Leme, P. J.; Marchesin, D.

    1981-01-01

    Several methods for computing many eigenvalues and eigenfunctions of a single anharmonic oscillator Schroedinger operator whose potential may have one or two minima are described. One of the methods requires the solution of an ill-conditioned generalized eigenvalue problem. This method has the virtue of using a bounded amount of work to achieve a given accuracy in both the single and double well regions. Rigorous bounds are given, and it is proved that the approximations converge faster than any inverse power of the size of the matrices needed to compute them. The results of computations for the g:φ⁴:₁ theory are presented. These results indicate that the methods actually converge exponentially fast.

  18. Polynomials with Restricted Coefficients and Their Applications

    DTIC Science & Technology

    1987-01-01

    sums of exponentials of quadratics, he reduced such sums to exponentials of linears (geometric sums!) by simply multiplying by their conjugates... the same algebraic manipulations as before lead to [an OCR-garbled expression] with [garbled symbol] = a+(2r+1)t, A = a+(2r+2m+1)t. To estimate the right... coefficients. These random polynomials represent the deviation in frequency response of a linear, equispaced antenna array caused by coefficient

  19. Estimation of renal allograft half-life: fact or fiction?

    PubMed

    Azancot, M Antonieta; Cantarell, Carme; Perelló, Manel; Torres, Irina B; Serón, Daniel; Moreso, Francesc; Arias, Manuel; Campistol, Josep M; Curto, Jordi; Hernandez, Domingo; Morales, José M; Sanchez-Fructuoso, Ana; Abraira, Victor

    2011-09-01

    Renal allograft half-life time (t½) is the most straightforward representation of long-term graft survival. Since some statistical models overestimate this parameter, we compare different approaches to evaluate t½. Patients with a 1-year functioning graft transplanted in Spain during 1990, 1994, 1998 and 2002 were included. Exponential, Weibull, gamma, lognormal and log-logistic models censoring the last year of follow-up were evaluated. The goodness of fit of these models was evaluated according to the Cox-Snell residuals and the Akaike's information criterion (AIC) was employed to compare these models. We included 4842 patients. Real t½ in 1990 was 14.2 years. Median t½ (95% confidence interval) in 1990 and 2002 was 15.8 (14.2-17.5) versus 52.6 (35.6-69.5) according to the exponential model (P < 0.001). No differences between 1990 and 2002 were observed when t½ was estimated with the other models. In 1990 and 2002, t½ was 14.0 (13.1-15.0) versus 18.0 (13.7-22.4) according to Weibull, 15.5 (13.9-17.1) versus 19.1 (15.6-22.6) according to gamma, 14.4 (13.3-15.6) versus 18.3 (14.2-22.3) according to the log-logistic and 15.2 (13.8-16.6) versus 18.8 (15.3-22.3) according to the lognormal models. The AIC confirmed that the exponential model had the lowest goodness of fit, while the other models yielded a similar result. The exponential model overestimates t½, especially in cohorts of patients with a short follow-up, while any of the other studied models allow a better estimation even in cohorts with short follow-up.
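
    The overestimation mechanism is easy to reproduce: under an exponential model the half-life is t_1/2 = ln(2)/λ with λ estimated as failures per unit of follow-up, and if the true hazard increases with time, a short-follow-up cohort yields a λ that is too small. A toy sketch under those assumptions (all numbers illustrative):

```python
import numpy as np

def exponential_half_life(times, events):
    """MLE of the half-life under an exponential survival model:
    lam = failures / total exposure, t_half = ln(2) / lam."""
    lam = np.sum(events) / np.sum(times)
    return np.log(2) / lam

# Toy cohort with an increasing (Weibull) hazard, as real grafts have:
# an exponential fit to only 5 years of follow-up sees the low early
# hazard and overestimates the half-life (true median here ~11.7 y).
rng = np.random.default_rng(1)
t_fail = 14.0 * rng.weibull(2.0, size=10_000)
events = t_fail <= 5.0                     # failures observed within 5 y
times = np.minimum(t_fail, 5.0)            # censored follow-up times
print("estimated t_half:", round(exponential_half_life(times, events), 1))
```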

  20. Poisson-process generalization for the trading waiting-time distribution in a double-auction mechanism

    NASA Astrophysics Data System (ADS)

    Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico

    2005-05-01

    In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model the empirical evidence. The empirical study is performed on the best bid and best ask data of 7 U.S. financial markets, for 30-stock time series. In particular, statistical properties of trading waiting times have been analyzed and the quality of fits is evaluated by suitable statistical tests, i.e., comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered as a generalization of a Poisson process. Moreover, the renewal process can approximate real data, and implementation in the artificial stock market can reproduce the trading activity in a realistic way.
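
    A mixture of exponential processes is straightforward to simulate, and comparing its empirical survival function with a single exponential of the same mean shows the non-Poisson tail described above. A sketch with illustrative weights and means (the paper estimates these from data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Mixture of exponential processes for inter-order waiting times:
# with probability w_i, the next wait is exponential with mean m_i.
weights = np.array([0.7, 0.3])
means = np.array([2.0, 30.0])          # seconds (illustrative)

n = 100_000
comp = rng.choice(len(weights), size=n, p=weights)
waits = rng.exponential(means[comp])

# Survival function vs. a single exponential with the same mean:
m = waits.mean()
for tau in np.quantile(waits, [0.5, 0.9, 0.99]):
    emp = (waits > tau).mean()
    poi = np.exp(-tau / m)
    print(f"tau={tau:8.2f}  empirical={emp:.4f}  single-exp={poi:.4f}")
```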

  1. A new adaptive algorithm for automated feature extraction in exponentially damped signals for health monitoring of smart structures

    NASA Astrophysics Data System (ADS)

    Qarib, Hossein; Adeli, Hojjat

    2015-12-01

    In this paper the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative 3-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately, and it further estimates the damping exponents. The proposed adaptive filtration method does not include any frequency domain manipulation; consequently, the time domain signal is not affected as a result of frequency domain and inverse transformations.

  2. A generalized exponential link function to map a conflict indicator into severity index within safety continuum framework.

    PubMed

    Zheng, Lai; Ismail, Karim

    2017-05-01

    Traffic conflict indicators measure the temporal and spatial proximity of conflict-involved road users. These indicators can reflect the severity of traffic conflicts to a reliable extent. Instead of using the indicator value directly as a severity index, many link functions have been developed to map the conflict indicator to a severity index. However, little information is available about the choice of a particular link function. To guard against link misspecification or subjectivity, a generalized exponential link function was developed. The severity index generated by this link was introduced into a parametric safety continuum model which objectively models the centre and tail regions. An empirical method, together with a full Bayesian estimation method, was adopted to estimate the model parameters. The safety implication of the return level was calculated based on the model parameters. The proposed approach was applied to conflict and crash data collected from 21 segments of three freeways located in Guangdong province, China. Pearson's correlation test between return levels and observed crashes showed that a θ value of 1.2 was the best choice of the generalized parameter for the current data set. This provides statistical support for using the generalized exponential link function. With the determined generalized exponential link function, the visualization of the parametric safety continuum was found to be a gyroscope-shaped hierarchy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Proportional exponentiated link transformed hazards (ELTH) models for discrete time survival data with application

    PubMed Central

    Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook

    2015-01-01

    Discrete survival data are routinely encountered in many fields of study including behavior science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model by treating discrete survival times as continuous survival times, and the model comparison criteria, AIC and BIC, in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in details to further demonstrate the proposed methodology. PMID:25772374

  4. Exponentiated power Lindley distribution.

    PubMed

    Ashour, Samir K; Eltehiwy, Mahmoud A

    2015-11-01

    A new generalization of the Lindley distribution was recently proposed by Ghitany et al. [1], called the power Lindley distribution. Another generalization of the Lindley distribution was introduced by Nadarajah et al. [2], named the generalized Lindley distribution. This paper proposes a further generalization of the Lindley distribution that encompasses both. We refer to this new generalization as the exponentiated power Lindley distribution. The new distribution is important since it contains as special sub-models some well-known distributions in addition to the above two models, such as the Lindley distribution among many others. It also provides more flexibility for analyzing complex real data sets. We study some statistical properties of the new distribution. We discuss maximum likelihood estimation of the distribution parameters. Least squares estimation is also used to evaluate the parameters. Three algorithms are proposed for generating random data from the proposed distribution. An application of the model to a real data set is analyzed using the new distribution, which shows that the exponentiated power Lindley distribution can be used quite effectively in analyzing real lifetime data.

  5. Scaling in the distribution of intertrade durations of Chinese stocks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing

    2008-10-01

    The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as a better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. The distribution of intertrade durations is Weibull followed by a power-law tail with an asymptotic tail exponent close to 3.
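
    The two candidate models can be compared directly by maximum likelihood. Below is a hedged sketch: the q-exponential density f(t) = (2-q) λ [1+(q-1)λt]^(-1/(q-1)) (valid for 1 < q < 2) is fitted by numerical MLE next to scipy's Weibull fit, on placeholder data rather than the order-book durations:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def nll_qexp(params, t):
    """Negative log-likelihood of the q-exponential density
    f(t) = (2-q)*lam*(1 + (q-1)*lam*t)**(-1/(q-1)), for 1 < q < 2."""
    q, lam = params
    if not (1.0 < q < 2.0) or lam <= 0:
        return np.inf
    return -np.sum(np.log(2 - q) + np.log(lam)
                   - np.log1p((q - 1) * lam * t) / (q - 1))

# normalized intertrade durations (placeholder: unit-mean synthetic data)
rng = np.random.default_rng(0)
t = rng.weibull(0.7, 20_000)
t /= t.mean()

res = minimize(nll_qexp, x0=(1.3, 1.0), args=(t,), method="Nelder-Mead")
shape, _, scale = weibull_min.fit(t, floc=0)
print("q-exp (q, lam):", res.x, " Weibull (shape, scale):", shape, scale)
```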

  6. Strategies to control a common carp population by pulsed commercial harvest

    USGS Publications Warehouse

    Colvin, Michael E.; Pierce, Clay; Stewart, Timothy W.; Grummer, Scott E.

    2012-01-01

    Commercial fisheries are commonly used to manage nuisance fishes in freshwater systems, but such efforts are often unsuccessful. Strategies for successfully controlling a nuisance population of common carp Cyprinus carpio by pulsed commercial harvest were evaluated with a combination of (1) field sampling, (2) population estimation and CPUE indexing, and (3) simulation using an exponential semidiscrete biomass dynamics model (SDBDM). The range of annual fishing mortalities (F) that resulted in successful control (F = 0.244–0.265) was narrow. Common carp biomass dynamics were sensitive to unintentional underharvest due to high rates of surplus production and a biomass doubling time of 2.7 years. Simulations indicated that biomanipulation never achieved successful control unless supplemental fishing mortality was imposed. Harvest of a majority of annual production was required to achieve successful control, as indicated by the ecotrophic coefficient (EC). Readily available biomass data and tools such as SDBDMs and ECs can be used in an adaptive management framework to successfully control common carp and other nuisance fishes by pulsed commercial fishing.

  7. Star formation suppression and bar ages in nearby barred galaxies

    NASA Astrophysics Data System (ADS)

    James, P. A.; Percival, S. M.

    2018-03-01

    We present new spectroscopic data for 21 barred spiral galaxies, which we use to explore the effect of bars on disc star formation, and to place constraints on the characteristic lifetimes of bar episodes. The analysis centres on regions of heavily suppressed star formation activity, which we term `star formation deserts'. Long-slit optical spectroscopy is used to determine Hβ absorption strengths in these desert regions, and comparisons with theoretical stellar population models are used to determine the time since the last significant star formation activity, and hence the ages of the bars. We find typical ages of ~1 Gyr, but with a broad range, much larger than would be expected from measurement errors alone, extending from ~0.25 to >4 Gyr. Low-level residual star formation, or mixing of stars from outside the `desert' regions, could result in a doubling of these age estimates. The relatively young ages of the underlying populations coupled with the strong limits on the current star formation rule out a gradual exponential decline in activity, and hence support our assumption of an abrupt truncation event.

  8. Combining Orthogonal Chain-End Deprotections and Thiol-Maleimide Michael Coupling: Engineering Discrete Oligomers by an Iterative Growth Strategy.

    PubMed

    Huang, Zhihao; Zhao, Junfei; Wang, Zimu; Meng, Fanying; Ding, Kunshan; Pan, Xiangqiang; Zhou, Nianchen; Li, Xiaopeng; Zhang, Zhengbiao; Zhu, Xiulin

    2017-10-23

    Orthogonal maleimide and thiol deprotections were combined with thiol-maleimide coupling to synthesize discrete oligomers/macromolecules on a gram scale with molecular weights up to 27.4 kDa (128mer, 7.9 g) using an iterative exponential growth strategy with a degree of polymerization (DP) of 2^n - 1. Using the same chemistry, a "readable" sequence-defined oligomer and a discrete cyclic topology were also created. Furthermore, uniform dendrons were fabricated using sequential growth (DP = 2^n - 1) or double exponential dendrimer growth approaches (DP = 2^(2^n) - 1) with significantly accelerated growth rates. A versatile, efficient, and metal-free method for construction of discrete oligomers with tailored structures and a high growth rate would greatly facilitate research into the structure-property relationships of sophisticated polymeric materials. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Ultrafast hole carrier relaxation dynamics in p-type CuO nanowires

    PubMed Central

    2011-01-01

    Ultrafast hole carrier relaxation dynamics in CuO nanowires have been investigated using transient absorption spectroscopy. Following femtosecond pulse excitation in a non-collinear pump-probe configuration, a combination of non-degenerate transmission and reflection measurements reveal initial ultrafast state filling dynamics independent of the probing photon energy. This behavior is attributed to the occupation of states by photo-generated carriers in the intrinsic hole region of the p-type CuO nanowires located near the top of the valence band. Intensity measurements indicate an upper fluence threshold of 40 μJ/cm2 where carrier relaxation is mainly governed by the hole dynamics. The fast relaxation of the photo-generated carriers was determined to follow a double exponential decay with time constants of 0.4 ps and 2.1 ps. Furthermore, time-correlated single photon counting measurements provide evidence of three exponential relaxation channels on the nanosecond timescale. PMID:22151927

  10. Scalable synthesis of sequence-defined, unimolecular macromolecules by Flow-IEG

    PubMed Central

    Leibfarth, Frank A.; Johnson, Jeremiah A.; Jamison, Timothy F.

    2015-01-01

    We report a semiautomated synthesis of sequence and architecturally defined, unimolecular macromolecules through a marriage of multistep flow synthesis and iterative exponential growth (Flow-IEG). The Flow-IEG system performs three reactions and an in-line purification in a total residence time of under 10 min, effectively doubling the molecular weight of an oligomeric species in an uninterrupted reaction sequence. Further iterations using the Flow-IEG system enable an exponential increase in molecular weight. Incorporating a variety of monomer structures and branching units provides control over polymer sequence and architecture. The synthesis of a uniform macromolecule with a molecular weight of 4,023 g/mol is demonstrated. The user-friendly nature, scalability, and modularity of Flow-IEG provide a general strategy for the automated synthesis of sequence-defined, unimolecular macromolecules. Flow-IEG is thus an enabling tool for theory validation, structure–property studies, and advanced applications in biotechnology and materials science. PMID:26269573

  11. Exponential asymptotics of homoclinic snaking

    NASA Astrophysics Data System (ADS)

    Dean, A. D.; Matthews, P. C.; Cox, S. M.; King, J. R.

    2011-12-01

    We study homoclinic snaking in the cubic-quintic Swift-Hohenberg equation (SHE) close to the onset of a subcritical pattern-forming instability. Application of the usual multiple-scales method produces a leading-order stationary front solution, connecting the trivial solution to the patterned state. A localized pattern may therefore be constructed by matching between two distant fronts placed back-to-back. However, the asymptotic expansion of the front is divergent, and hence should be truncated. By truncating optimally, such that the resultant remainder is exponentially small, an exponentially small parameter range is derived within which stationary fronts exist. This is shown to be a direct result of the 'locking' between the phase of the underlying pattern and its slowly varying envelope. The locking mechanism remains unobservable at any algebraic order, and can only be derived by explicitly considering beyond-all-orders effects in the tail of the asymptotic expansion, following the method of Kozyreff and Chapman as applied to the quadratic-cubic SHE (Chapman and Kozyreff 2009 Physica D 238 319-54, Kozyreff and Chapman 2006 Phys. Rev. Lett. 97 44502). Exponentially small, but exponentially growing, contributions appear in the tail of the expansion, which must be included when constructing localized patterns in order to reproduce the full snaking diagram. Implicit within the bifurcation equations is an analytical formula for the width of the snaking region. Due to the linear nature of the beyond-all-orders calculation, the bifurcation equations contain an analytically indeterminable constant, estimated in the previous work by Chapman and Kozyreff using a best fit approximation. A more accurate estimate of the equivalent constant in the cubic-quintic case is calculated from the iteration of a recurrence relation, and the subsequent analytical bifurcation diagram compared with numerical simulations, with good agreement.

  12. Global exponential stability of bidirectional associative memory neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Song, Qiankun; Cao, Jinde

    2007-05-01

    A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional, employing the homeomorphism theory, M-matrix theory and an elementary inequality (with parameters a ≥ 0, b_k ≥ 0, q_k > 0 and r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential convergence rate is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.

  13. Biases in simulation of the rice phenology models when applied in warmer climates

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, T.; Yang, X.; Simelton, E.

    2015-12-01

    The current model inter-comparison studies highlight the difference in projections between crop models when they are applied to warmer climates, but these studies do not provide results on how the accuracy of the models would change in these projections, because adequate observations under largely diverse growing season temperatures (GST) are often unavailable. Here, we investigate the potential changes in the accuracy of rice phenology models when these models are applied to a significantly warmer climate. We collected phenology data from 775 trials with 19 cultivars in 5 Asian countries (China, India, Philippines, Bangladesh and Thailand). Each cultivar encompasses phenology observations under diverse GST regimes. For a given rice cultivar in different trials, the GST difference reaches 2.2 to 8.2°C, which allows us to calibrate the models under lower GST and validate them under higher GST (i.e., warmer climates). Four common phenology models, representing the major algorithms for simulating rice phenology, and three model calibration experiments were conducted. The results suggest that the bilinear and beta models resulted in gradually increasing phenology bias (Figure) and a doubled yield bias per percent increase in phenology bias, whereas the growing-degree-day (GDD) and exponential models maintained a comparatively constant bias when applied in warmer climates (Figure). Moreover, the bias of phenology estimated by the bilinear and beta models was not reduced with increasing GST even when all data were used to calibrate the models. This suggests that variations in phenology bias are primarily attributable to intrinsic properties of the respective phenology model rather than to the calibration dataset. We therefore conclude that using the GDD and exponential models gives a better chance of correctly predicting rice phenology, and thus production, under warmer climates, leading to more effective agricultural adaptation to and mitigation of climate change.
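
    The growing-degree-day model referred to above accumulates daily thermal time above a base temperature until a stage-specific target is reached. A minimal sketch (the base temperature and target are illustrative assumptions, not the calibrated values):

```python
import numpy as np

def days_to_stage(tmax, tmin, gdd_target, t_base=8.0):
    """Growing-degree-day phenology: a stage is reached when accumulated
    max(0, (Tmax+Tmin)/2 - Tbase) hits gdd_target. t_base = 8 C is a
    common choice for rice, taken here as an assumption.
    Returns the day index, or -1 if the target is never reached."""
    gdd = np.maximum(0.0, (np.asarray(tmax) + np.asarray(tmin)) / 2 - t_base)
    cum = np.cumsum(gdd)
    hit = np.nonzero(cum >= gdd_target)[0]
    return int(hit[0]) if hit.size else -1

# a 2 C warmer season simply reaches the same thermal target sooner:
rng = np.random.default_rng(3)
tmax = 28 + 2 * rng.standard_normal(150)
tmin = tmax - 8
print(days_to_stage(tmax, tmin, 1600), days_to_stage(tmax + 2, tmin + 2, 1600))
```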

  14. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

    The exponential distribution is the most widely used distribution in reliability analysis. It is very suitable for representing the lengths of life in many cases and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate, and it is a special case of the Weibull distribution. In this paper our effort is to introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach, and to present the associated analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior distribution and the estimation of points, intervals, the hazard function, and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
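
    Under independent exponential causes and a non-informative prior, the cause-specific rates have Gamma posteriors, from which the net and crude failure probabilities described above can be simulated. A sketch under those assumptions (the prior form and all data values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# observed data: total time on test T and failure counts per cause
T = 480.0                      # total exposure over all units
d = np.array([12, 5])          # failures from cause 1 and cause 2

# With independent exponential causes and a non-informative prior
# p(lam_j) ~ 1/lam_j, the posteriors are lam_j | data ~ Gamma(d_j, rate=T).
post = rng.gamma(shape=d, scale=1.0 / T, size=(10_000, len(d)))

t = 30.0
lam_tot = post.sum(axis=1)
net = 1 - np.exp(-post[:, 0] * t)                          # only risk 1 present
crude = post[:, 0] / lam_tot * (1 - np.exp(-lam_tot * t))  # risk 1 among all
print("net P(fail by t):    ", np.mean(net).round(4))
print("crude P(cause 1 by t):", np.mean(crude).round(4))
```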

  15. Pendulum Mass Affects the Measurement of Articular Friction Coefficient

    PubMed Central

    Akelman, Matthew R.; Teeple, Erin; Machan, Jason T.; Crisco, Joseph J.; Jay, Gregory D.; Fleming, Braden C.

    2012-01-01

    Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton’s equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines if articular pendulum energy loss is indeed mass independent, and compares Stanton’s model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n = 4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton’s equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent. PMID:23122223

  16. Pendulum mass affects the measurement of articular friction coefficient.

    PubMed

    Akelman, Matthew R; Teeple, Erin; Machan, Jason T; Crisco, Joseph J; Jay, Gregory D; Fleming, Braden C

    2013-02-01

    Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton's equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines if articular pendulum energy loss is indeed mass independent, and compares Stanton's model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n=4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton's equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent. Copyright © 2012 Elsevier Ltd. All rights reserved.
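
    The comparison in these two records can be caricatured as fitting the decaying sequence of swing amplitudes with a linear law (friction-like, Stanton-style) versus an exponential law (viscous damping) and comparing residuals. The sketch below is schematic and does not reproduce the paper's μ formulas:

```python
import numpy as np

def compare_decay_models(peaks):
    """Fit pendulum peak amplitudes with a linear decay (friction-like,
    as in Stanton-type models) and an exponential decay (viscous
    damping), returning the residual sum of squares of each."""
    n = np.arange(len(peaks), dtype=float)
    A = np.vstack([np.ones_like(n), n]).T
    # linear: theta_n = a - b*n
    coef, *_ = np.linalg.lstsq(A, peaks, rcond=None)
    rss_lin = np.sum((peaks - A @ coef) ** 2)
    # exponential: theta_n = c * exp(-beta*n), i.e. linear in log space
    coef_e, *_ = np.linalg.lstsq(A, np.log(peaks), rcond=None)
    rss_exp = np.sum((peaks - np.exp(A @ coef_e)) ** 2)
    return rss_lin, rss_exp

peaks = 10.0 * np.exp(-0.08 * np.arange(25))   # viscous-like toy data
print(compare_decay_models(peaks))             # exponential fits far better
```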

  17. Development and growth of fruit bodies and crops of the button mushroom, Agaricus bisporus.

    PubMed

    Straatsma, Gerben; Sonnenberg, Anton S M; van Griensven, Leo J L D

    2013-10-01

    We studied the appearance of fruit body primordia, the growth of individual fruit bodies and the development of the consecutive flushes of the crop. Relative growth, measured as cap expansion, was not constant. It started extremely rapidly, and slowed down to an exponential rate with diameter doubling of 1.7 d until fruit bodies showed maturation by veil breaking. Initially many outgrowing primordia were arrested, indicating nutritional competition. After reaching 10 mm diameter, no growth arrest occurred; all growing individuals, whether relatively large or small, showed an exponential increase of both cap diameter and biomass, until veil breaking. Biomass doubled in 0.8 d. Exponential growth indicates the absence of competition. Apparently there exist differential nutritional requirements for early growth and for later, continuing growth. Flushing was studied applying different picking sizes. An ordinary flushing pattern occurred at an immature picking size of 8 mm diameter (picking mushrooms once a day with a diameter above 8 mm). The smallest picking size yielded the highest number of mushrooms picked, confirming the competition and arrested growth of outgrowing primordia: competition seems less if outgrowing primordia are removed early. The flush duration (i.e. between the first and last picking moments) was not affected by picking size. At small picking size, the subsequent flushes were not fully separated in time but overlapped. Within 2 d after picking the first individuals of the first flush, primordia for the second flush started outgrowth. Our work supports the view that the acquisition of nutrients by the mycelium is demand rather than supply driven. For formation and early outgrowth of primordia, indications were found for an alternation of local and global control, at least in the casing layer. All these data combined, we postulate that flushing is the consequence of the depletion of some unknown specific nutrition required by outgrowing primordia. Copyright © 2013 The British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  18. Universal state-selective corrections to multireference coupled-cluster theories with single and double excitations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Jiri; van Dam, Hubertus JJ; Pittner, Jiri

    2012-03-28

    The recently proposed Universal State-Selective (USS) corrections [K. Kowalski, J. Chem. Phys. 134, 194107 (2011)] to approximate Multi-Reference Coupled Cluster (MRCC) energies can be commonly applied to any type of MRCC theory based on the Jeziorski-Monkhorst [B. Jeziorski, H.J. Monkhorst, Phys. Rev. A 24, 1668 (1981)] exponential Ansatz. In this letter we report on the performance of a simple USS correction to the Brillouin-Wigner MRCC (BW-MRCC) formalism employing single and double excitations (BW-MRCCSD). It is shown that the resulting formalism (USS-BW-MRCCSD), which uses the manifold of single and double excitations to construct the correction, can be related to a posteriori corrections utilized in routine BW-MRCCSD calculations. In several benchmark calculations we compare the results of the USS-BW-MRCCSD method with results of the BW-MRCCSD approach employing a posteriori corrections and with results obtained with the Full Configuration Interaction (FCI) method.

  19. Econometrics of exhaustible resource supply: a theory and an application. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epple, D.; Hansen, L.P.

    1981-12-01

    An econometric model of US oil and natural gas discoveries is developed in this study. The econometric model is explicitly derived as the solution to the problem of maximizing the expected discounted after tax present value of revenues net of exploration, development, and production costs. The model contains equations representing producers' formation of price expectations and separate equations giving producers' optimal exploration decisions contingent on expected prices. A procedure is developed for imposing resource base constraints (e.g., ultimate recovery estimates based on geological analysis) when estimating the econometric model. The model is estimated using aggregate post-war data for the United States. Production from a given addition to proved reserves is assumed to follow a negative exponential path, and additions of proved reserves from a given discovery are assumed to follow a negative exponential path. Annual discoveries of oil and natural gas are estimated as latent variables. These latent variables are the endogenous variables in the econometric model of oil and natural gas discoveries. The model is estimated without resource base constraints. The model is also estimated imposing the mean oil and natural gas ultimate recovery estimates of the US Geological Survey. Simulations through the year 2020 are reported for various future price regimes.

  20. Superionic state in double-layer capacitors with nanoporous electrodes.

    PubMed

    Kondrat, S; Kornyshev, A

    2011-01-19

    In recent experiments (Chmiola et al 2006 Science 313 1760; Largeot et al 2008 J. Am. Chem. Soc. 130 2730) an anomalous increase of the capacitance with a decrease of the pore size of a carbon-based porous electric double-layer capacitor has been observed. We explain this effect by image forces which exponentially screen out the electrostatic interactions of ions in the interior of a pore. Packing of ions of the same sign becomes easier and is mainly limited by steric interactions. We call this state 'superionic' and suggest a simple model to describe it. The model reveals the possibility of a voltage-induced first order transition between a cation(anion)-deficient phase and a cation(anion)-rich phase which manifests itself in a jump of capacitance as a function of voltage.

  1. Generalized optimal design for two-arm, randomized phase II clinical trials with endpoints from the exponential dispersion family.

    PubMed

    Jiang, Wei; Mahnken, Jonathan D; He, Jianghua; Mayo, Matthew S

    2016-11-01

    For two-arm randomized phase II clinical trials, previous literature proposed an optimal design that minimizes the total sample sizes subject to multiple constraints on the standard errors of the estimated event rates and their difference. The original design is limited to trials with dichotomous endpoints. This paper extends the original approach to be applicable to phase II clinical trials with endpoints from the exponential dispersion family distributions. The proposed optimal design minimizes the total sample sizes needed to provide estimates of population means of both arms and their difference with pre-specified precision. Its applications on data from specific distribution families are discussed under multiple design considerations. Copyright © 2016 John Wiley & Sons, Ltd.
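
    In the original dichotomous-endpoint setting, the design reduces to finding the smallest per-arm sizes whose binomial standard errors meet the three constraints. A brute-force sketch under that binomial special case (anticipated rates and precision targets are illustrative, and this is not the authors' algorithm):

```python
import numpy as np

def optimal_two_arm(p1, p2, se1, se2, se_diff, n_max=500):
    """Smallest (n1, n2) meeting standard-error constraints on each
    arm's event-rate estimate and on their difference, assuming
    binomial variances p_i(1-p_i)/n_i at anticipated rates p_i."""
    v1, v2 = p1 * (1 - p1), p2 * (1 - p2)
    best = None
    for n1 in range(2, n_max):
        if np.sqrt(v1 / n1) > se1:
            continue
        for n2 in range(2, n_max):
            if np.sqrt(v2 / n2) > se2:
                continue
            if np.sqrt(v1 / n1 + v2 / n2) <= se_diff:
                if best is None or n1 + n2 < best[0] + best[1]:
                    best = (n1, n2)
                break   # a larger n2 only increases the total
    return best

print(optimal_two_arm(0.2, 0.4, se1=0.1, se2=0.1, se_diff=0.12))
```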

  2. Observer-based sliding mode control of Markov jump systems with random sensor delays and partly unknown transition rates

    NASA Astrophysics Data System (ADS)

    Yao, Deyin; Lu, Renquan; Xu, Yong; Ren, Hongru

    2017-10-01

    In this paper, the sliding mode control problem of Markov jump systems (MJSs) with unmeasured state, partly unknown transition rates and random sensor delays is investigated. In practical engineering control, exact information about the transition rates is hard to obtain, and the measurement channel is subject to random sensor delays. A Luenberger observer is designed to estimate the unmeasured system state, and an integral sliding mode surface is constructed to ensure the exponential stability of the MJSs. A sliding mode controller based on the estimator is proposed to drive the system state onto the sliding mode surface and render the sliding mode dynamics exponentially mean-square stable with an H∞ performance index. Finally, simulation results are provided to illustrate the effectiveness of the proposed results.

  3. Optimal estimation for the satellite attitude using star tracker measurements

    NASA Technical Reports Server (NTRS)

    Lo, J. T.-H.

    1986-01-01

    An optimal estimation scheme is presented, which determines the satellite attitude using the gyro readings and the star tracker measurements of a commonly used satellite attitude measuring unit. The scheme is mainly based on the exponential Fourier densities that have the desirable closure property under conditioning. By updating a finite and fixed number of parameters, the conditional probability density, which is an exponential Fourier density, is recursively determined. Simulation results indicate that the scheme is more accurate and robust than extended Kalman filtering. It is believed that this approach is applicable to many other attitude measuring units. As no linearization and approximation are necessary in the approach, it is ideal for systems involving high levels of randomness and/or low levels of observability and systems for which accuracy is of overriding importance.

  4. The multiple complex exponential model and its application to EEG analysis

    NASA Astrophysics Data System (ADS)

    Chen, Dao-Mu; Petzold, J.

    The paper presents a novel approach to the analysis of the EEG signal, which is based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and the results, estimated on the basis of simulated data, are presented and compared with those obtained by the conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are closer to the recent supposition of the nonlinear character of the brain's dynamic behavior.
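
    A multiple complex exponential model x[n] ≈ Σ_k a_k z_k^n can be estimated with classical linear-prediction (Prony-type) machinery: solve for prediction coefficients, take the roots of the prediction-error polynomial as poles, then fit the amplitudes. This textbook method stands in for the paper's nonharmonic Fourier expansion algorithm, which we do not reproduce:

```python
import numpy as np

def prony(x, p):
    """Estimate p complex exponentials x[n] ~ sum_k a_k * z_k**n.
    Returns poles z_k and complex amplitudes a_k."""
    N = len(x)
    # 1) linear prediction: x[n] = -sum_m c_m x[n-m] for n = p..N-1
    A = np.column_stack([x[p - m - 1:N - m - 1] for m in range(p)])
    c, *_ = np.linalg.lstsq(A, -x[p:N], rcond=None)
    # 2) poles = roots of the prediction-error polynomial
    z = np.roots(np.concatenate(([1.0], c)))
    # 3) amplitudes by least squares on a Vandermonde system
    V = z[np.newaxis, :] ** np.arange(N)[:, np.newaxis]
    a, *_ = np.linalg.lstsq(V, x, rcond=None)
    return z, a

# toy EEG-like segment: one damped 10 Hz component, fs = 100 Hz
fs, n = 100.0, np.arange(200)
x = np.exp((-2.0 + 2j * np.pi * 10.0) * n / fs)
z, a = prony(x, 1)
print("damping:", np.log(z[0]).real * fs,          # ~ -2 per second
      "freq:", np.log(z[0]).imag * fs / (2 * np.pi))  # ~ 10 Hz
```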

  5. Essays on the statistical mechanics of the labor market and implications for the distribution of earned income

    NASA Astrophysics Data System (ADS)

    Schneider, Markus P. A.

    This dissertation contributes to two areas in economics: the understanding of the distribution of earned income, and Bayesian analysis of distributional data. Recently, physicists claimed that the distribution of earned income is exponential (see Yakovenko, 2009). The first chapter explores the perspective that the economy is a statistical mechanical system, and the implication for labor market outcomes is considered critically. The robustness of the empirical results that lead to the physicists' claims, the significance of the exponential distribution in statistical mechanics, and the case for a conservation law in economics are discussed. The conclusion reached is that the physicists' conception of the economy is too narrow even within their chosen framework, but that their overall approach is insightful. The dual labor market theory of segmented labor markets is invoked to understand why the observed distribution may be a mixture of distributional components, corresponding to different generating mechanisms described in Reich et al. (1973). The application of informational entropy in chapter II connects this work to Bayesian analysis and maximum entropy econometrics. The analysis follows E. T. Jaynes's treatment of Wolf's dice data, but is applied to the distribution of earned income based on CPS data. The results are calibrated to account for rounded survey responses using a simple simulation, and respond to the graphical analyses by the physicists. The results indicate that neither the income distribution of all respondents nor that of the subpopulation used by the physicists appears to be exponential. The empirics do support the claim that a mixture with exponential and log-normal distributional components fits the data. In the final chapter, a log-linear model is used to fit the exponential to the earned income distribution. Separating the CPS data by gender and marital status reveals that the exponential is only an appropriate model for a limited number of subpopulations, namely the never married and women. The estimated parameter for never-married men's incomes is significantly different from the parameter estimated for never-married women, implying that either the combined distribution is not exponential or that the individual distributions are not exponential. However, it substantiates the existence of a persistent gender income gap among the never-married. References: Reich, M., D. M. Gordon, and R. C. Edwards (1973). A Theory of Labor Market Segmentation. Quarterly Journal of Economics 63, 359-365. Yakovenko, V. M. (2009). Econophysics, Statistical Mechanics Approach to. In R. A. Meyers (Ed.), Encyclopedia of Complexity and System Science. Springer.

  6. Bayesian exponential random graph modelling of interhospital patient referral networks.

    PubMed

    Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro

    2017-08-15

    Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.

  7. Chronology of Postglacial Eruptive Activity and Calculation of Eruption Probabilities for Medicine Lake Volcano, Northern California

    USGS Publications Warehouse

    Nathenson, Manuel; Donnelly-Nolan, Julie M.; Champion, Duane E.; Lowenstern, Jacob B.

    2007-01-01

    Medicine Lake volcano has had 4 eruptive episodes in its postglacial history (since 13,000 years ago) comprising 16 eruptions. Time intervals between events within the episodes are relatively short, whereas time intervals between the episodes are much longer. An updated radiocarbon chronology for these eruptions is presented that uses paleomagnetic data to constrain the choice of calibrated ages. This chronology is used with exponential, Weibull, and mixed-exponential probability distributions to model the data for time intervals between eruptions. The mixed exponential distribution is the best match to the data and provides estimates for the conditional probability of a future eruption given the time since the last eruption. The probability of an eruption at Medicine Lake volcano in the next year from today is 0.00028.
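
    As a rough illustration of how a mixed-exponential interval model yields the conditional probability of a future eruption given the time since the last one, the Python sketch below evaluates a two-component survival function; the weights and timescales are hypothetical placeholders, not the values fitted by the authors.

        import numpy as np

        def survival(t, w, tau1, tau2):
            """Mixed-exponential survival function: P(repose interval > t)."""
            return w * np.exp(-t / tau1) + (1.0 - w) * np.exp(-t / tau2)

        def conditional_prob(t_since, window, w, tau1, tau2):
            """P(eruption within `window` years | quiet for `t_since` years)."""
            s_now = survival(t_since, w, tau1, tau2)
            s_later = survival(t_since + window, w, tau1, tau2)
            return 1.0 - s_later / s_now

        # Illustrative parameters: short intra-episode intervals (~100 yr)
        # mixed with long inter-episode gaps (~4000 yr).
        print(conditional_prob(t_since=950.0, window=1.0,
                               w=0.7, tau1=100.0, tau2=4000.0))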

  8. Regulation of Hemopoietic Stem Cell Turnover and Population Size in Neonatal Mice

    DTIC Science & Technology

    1975-04-01

    Following birth the hematopoietic stem cell population of the liver as measured by the in vivo spleen nodule assay (CFU) declines with a halving time...of about 48 hours. The stem cell population of the spleen grows exponentially with a doubling time of about 17 hours. In vitro incubation with high...single spleen colonies derived from neonatal liver and spleen CFU that both stem cell populations have a high self-renewal capacity. Thus, the decline in

  9. Changing Mindsets to Transform Security: Leader Development for an Unpredictable and Complex World

    DTIC Science & Technology

    2013-01-01

    fields of physical science, the amount of information is doubling every one to two years, meaning that more than half of what a college student has...beyond a review of current events or it being at an “informational” level. Naval War College Professor Mackubin Owens stated in 2006 that, The new... information technology in education and training underpinned by a stable and experienced academic community that can support the exponential growth

  10. Analytical solution for boundary heat fluxes from a radiating rectangular medium

    NASA Technical Reports Server (NTRS)

    Siegel, R.

    1991-01-01

    Reference is made to the work of Shah (1979) which demonstrated the possibility of partially integrating the radiative equations analytically to obtain an 'exact' solution. Shah's solution was given as a double integration of the modified Bessel function of order zero. Here, it is shown that the 'exact' solution for a rectangular region radiating to cold black walls can be conveniently derived, and expressed in simple form, by using an integral function, Sn, analogous to the exponential integral function appearing in plane-layer solutions.

  11. Strain, curvature, and twist measurements in digital holographic interferometry using pseudo-Wigner-Ville distribution based method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2009-09-15

    Measurement of strain, curvature, and twist of a deformed object plays an important role in deformation analysis. Strain depends on the first-order displacement derivative, whereas curvature and twist are determined by second-order displacement derivatives. This paper proposes a pseudo-Wigner-Ville distribution based method for measurement of strain, curvature, and twist in digital holographic interferometry, where the object deformation or displacement is encoded as interference phase. In the proposed method, the phase derivative is estimated by peak detection of the pseudo-Wigner-Ville distribution evaluated along each row/column of the reconstructed interference field. A complex exponential signal with unit amplitude and the phase derivative estimate as the argument is then generated, and the pseudo-Wigner-Ville distribution along each row/column of this signal is evaluated. The curvature is estimated by using a peak tracking strategy for the new distribution. For estimation of twist, the pseudo-Wigner-Ville distribution is evaluated along each column/row (i.e., in the alternate direction with respect to the previous one) for the generated complex exponential signal, and the corresponding peak detection gives the twist estimate.

  12. Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.

    PubMed

    Cobbs, Gary

    2012-08-16

    Numerous models for use in interpreting quantitative PCR (qPCR) data are present in recent literature. The most commonly used models assume the amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a select part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data. Even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes annealing of complementary target strands and annealing of target and primers are both reversible reactions and reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are therefore analyzed by simultaneous non-linear curve fitting, with the same rate-constant values applying to all curves and each curve having a unique value for initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models fit observed qPCR data better than other kinetic models present in the literature. They also give better estimates of initial target concentration. Model 1 was found to be slightly more robust than model 2, giving better estimates of initial target concentration when parameters were estimated from qPCR curves with very different initial target concentrations. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available. It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.

  13. Clinical proteomics in kidney disease as an exponential technology: heading towards the disruptive phase.

    PubMed

    Sanchez-Niño, Maria Dolores; Sanz, Ana B; Ramos, Adrian M; Fernandez-Fernandez, Beatriz; Ortiz, Alberto

    2017-04-01

    Exponential technologies double in power or processing speed every year, whereas their cost halves. Deception and disruption are two key stages in the development of exponential technologies. Deception occurs when, after initial introduction, technologies are dismissed as irrelevant, while they continue to progress, perhaps not as fast or with as many immediate practical applications as initially thought. Twenty years after the first publications, clinical proteomics is still not available in most hospitals, and some clinicians have felt deceived by unfulfilled promises. However, there are indications that clinical proteomics may be entering the disruptive phase, where, once refined, technologies disrupt established industries or procedures. In this regard, recent manuscripts in CKJ illustrate how proteomics is entering the clinical realm, with applications ranging from the identification of amyloid proteins in the pathology lab to a new generation of urinary biomarkers for chronic kidney disease (CKD) assessment and outcome prediction. Indeed, one such panel of urinary peptidomics biomarkers, CKD273, recently received a Food and Drug Administration letter of support, the first ever in the CKD field. In addition, a must-read resource providing information on kidney disease-related proteomics and systems biology databases, and on how to access and use them in clinical decision-making, was also recently published in CKJ.

  14. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non-specific negative control features that allow precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would better model the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to erroneous estimates. We propose a more flexible modeling based on a gamma-distributed signal and a normally distributed background noise, and develop the associated background correction, implemented in the R package NormalGamma. Our model proves markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validity of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures representing various experimental designs. Surprisingly, we observe that implementing a more accurate parametrisation in the model-based background correction does not increase sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves considerably more accurate for Illumina microarrays, but the improvement in modeling does not lead to higher sensitivity in differential analysis. Nevertheless, this realistic modeling paves the way for future investigations, in particular to examine the characteristics of pre-processing strategies.

  15. Calcium Isotope Analysis with "Peak Cut" Method on Column Chemistry

    NASA Astrophysics Data System (ADS)

    Zhu, H.; Zhang, Z.; Liu, F.; Li, X.

    2017-12-01

    To eliminate isobaric interferences from elemental and molecular isobars (e.g., 40K+, 48Ti+, 88Sr2+, 24Mg16O+, 27Al16O+) on Ca isotopes during mass determination, samples should be purified through ion-exchange column chemistry before analysis. However, large Ca isotopic fractionation has been observed during column chemistry (Russell and Papanastassiou, 1978; Zhu et al., 2016). Therefore, full recovery during column chemistry is greatly needed, as poor recovery introduces uncertainties (Zhu et al., 2016). At the same time, matrix effects can be enhanced by full recovery, since other elements might overlap with the Ca cut during column chemistry. Matrix effects and full recovery are difficult to balance, and both need to be considered for high-precision analysis of stable Ca isotopes. Here, we investigate the influence of poor recovery on δ44/40Ca using TIMS with the double spike technique. The δ44/40Ca values of IAPSO seawater, ML3B-G and BHVO-2 in different Ca subcuts (e.g., 0-20, 20-40, 40-60, 60-80, 80-100%) with 20% Ca recovery on column chemistry display limited variation after correction by the 42Ca-43Ca double spike technique with the exponential law. Notably, δ44/40Ca of each Ca subcut is quite consistent, within error, with δ44/40Ca of the Ca cut with full recovery. Our results indicate that the 42Ca-43Ca double spike technique can properly correct the Ca isotopic fractionation that occurs during both column chemistry and thermal ionization mass spectrometry (TIMS) determination, because both fractionations follow the exponential law well. Therefore, we propose the "peak cut" method for Ca column chemistry on samples with complex matrix effects. Briefly, for samples with low Ca contents, we can add the double spike before column chemistry, then collect only the middle of the Ca eluate and discard both sides of the Ca eluate that might overlap with other elements (e.g., K, Sr). This method would eliminate matrix effects and improve the efficiency of the column chemistry.
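
    The exponential law referred to above relates a measured isotope ratio to the true ratio through the isotope masses and a fractionation exponent β (r_meas = r_true × (m_num/m_den)^β). The Python sketch below illustrates the law with approximate Ca isotope masses; the reference ratio and β are invented, and a real double-spike reduction inverts several such equations simultaneously rather than one.

        import numpy as np

        # Approximate atomic masses of the Ca isotopes involved.
        m40, m42, m43, m44 = 39.962591, 41.958618, 42.958767, 43.955482

        def fractionate(r_true, m_num, m_den, beta):
            """Exponential law: r_meas = r_true * (m_num / m_den)**beta."""
            return r_true * (m_num / m_den) ** beta

        r_true = 0.021229          # illustrative 44Ca/40Ca reference value
        beta = -1.8                # illustrative fractionation exponent
        r_meas = fractionate(r_true, m44, m40, beta)

        # Recover beta from the measured/true pair:
        print(np.log(r_meas / r_true) / np.log(m44 / m40))   # -> -1.8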

  16. Changes in speed distribution: Applying aggregated safety effect models to individual vehicle speeds.

    PubMed

    Vadeby, Anna; Forsman, Åsa

    2017-06-01

    This study investigated the effect of applying two aggregated models (the Power model and the Exponential model) to individual vehicle speeds instead of mean speeds. This is of particular interest when the measure introduced affects different parts of the speed distribution differently. The aim was to examine how the estimated overall risk was affected when assuming the models are valid on an individual vehicle level. Speed data from two applications of speed measurements were used in the study: an evaluation of movable speed cameras and a national evaluation of new speed limits in Sweden. The results showed that, when applied at the individual vehicle speed level compared with the aggregated level, there was essentially no difference between the two for the Power model in the case of injury accidents. However, for fatalities the difference was greater, especially for roads with new cameras, where those driving fastest reduced their speed the most. For the case with new speed limits, the individual approach estimated a somewhat smaller effect, reflecting that changes in the 15th percentile (P15) were somewhat larger than changes in P85 in this case. For the Exponential model there was also a clear, although small, difference between applying the model to mean speed changes and to individual vehicle speed changes when speed cameras were used. This applied both for injury accidents and fatalities. There were also larger effects for the Exponential model than for the Power model, especially for injury accidents. In conclusion, applying the Power or Exponential model to individual vehicle speeds is an alternative that provides reasonable results in relation to the original Power and Exponential models, but more research is needed to clarify the shape of the individual risk curve. It is not surprising that the impact on severe traffic crashes was larger in situations where those driving fastest reduced their speed the most. Further investigation of the use of the Power and/or the Exponential model at the individual vehicle level would require more data on the individual level from a range of international studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
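
    To make the aggregated-versus-individual distinction concrete, the sketch below applies textbook forms of the Power model, (v1/v0)^e, and the Exponential model, exp(β(v1 − v0)), to a synthetic speed distribution in which the fastest drivers slow the most; the exponent and coefficient are illustrative, not the values used in the study.

        import numpy as np

        rng = np.random.default_rng(1)
        before = rng.normal(90, 10, 10_000)                 # speeds, km/h
        after = before - np.where(before > 95, 8.0, 2.0)    # fastest slow most

        def power_model(v1, v0, e=4.5):
            return (v1 / v0) ** e

        def exponential_model(v1, v0, beta=0.08):
            return np.exp(beta * (v1 - v0))

        # Aggregated: apply the model to the change in mean speed.
        agg = power_model(after.mean(), before.mean())
        # Individual: apply the model per vehicle, then average the risk ratios.
        ind = power_model(after, before).mean()
        print(agg, ind)   # the two differ when the distribution changes shape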

  17. Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.

    ERIC Educational Resources Information Center

    Rowell, R. Kevin

    In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…
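
    A minimal sketch of the idea on synthetic data (generic ordinary least squares, not the report's specific procedure): split the sample in half, fit a regression equation on each half, predict the opposite half, and use the predicted-observed correlations as a stability check.

        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 200, 3
        X = rng.normal(size=(n, p))
        y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=n)

        idx = rng.permutation(n)
        a, b = idx[: n // 2], idx[n // 2:]

        def fit_ols(X, y):
            X1 = np.column_stack([np.ones(len(X)), X])
            return np.linalg.lstsq(X1, y, rcond=None)[0]

        def predict(coef, X):
            return np.column_stack([np.ones(len(X)), X]) @ coef

        coef_a, coef_b = fit_ols(X[a], y[a]), fit_ols(X[b], y[b])
        print(np.corrcoef(predict(coef_a, X[b]), y[b])[0, 1])  # A's equation on B
        print(np.corrcoef(predict(coef_b, X[a]), y[a])[0, 1])  # B's equation on A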

  18. n-Iterative Exponential Forgetting Factor for EEG Signals Parameter Estimation

    PubMed Central

    Palma Orozco, Rosaura

    2018-01-01

    Electroencephalogram (EEG) signals are of interest because of their relationship with physiological activities, allowing a description of motion, speaking, or thinking. Important research has been developed to take advantage of EEG using classification or prediction algorithms based on parameters that help to describe the signal behavior. Thus, feature extraction is of great importance, and it is complicated in the Parameter Estimation (PE)-System Identification (SI) process because the signal exhibits nonstationary characteristics when approximated on an average basis. For PE, a comparison of three forms of iterative-recursive use of the Exponential Forgetting Factor (EFF), combined with a linear function, to identify a synthetic stochastic signal is presented. The form with the best results, judged by the functional error, is then applied to approximate an EEG signal for a simple classification example, showing the effectiveness of our proposal. PMID:29568310
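
    The sketch below shows one standard way an exponential forgetting factor enters parameter estimation: recursive least squares, where λ < 1 discounts old samples so the estimate can track a nonstationary signal. It is a generic textbook form, not the paper's n-iterative scheme.

        import numpy as np

        def rls_eff(Phi, y, lam=0.98, delta=100.0):
            """Recursive least squares with exponential forgetting factor lam."""
            n, p = Phi.shape
            theta = np.zeros(p)
            P = delta * np.eye(p)                      # inverse-covariance estimate
            for k in range(n):
                phi = Phi[k]
                g = P @ phi / (lam + phi @ P @ phi)    # gain vector
                theta = theta + g * (y[k] - phi @ theta)
                P = (P - np.outer(g, phi @ P)) / lam
            return theta

        rng = np.random.default_rng(2)
        Phi = rng.normal(size=(500, 2))
        y = Phi @ np.array([1.5, -0.7]) + 0.1 * rng.normal(size=500)
        print(rls_eff(Phi, y))                         # close to [1.5, -0.7]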

  19. Locality of the Thomas-Fermi-von Weizsäcker Equations

    NASA Astrophysics Data System (ADS)

    Nazar, F. Q.; Ortner, C.

    2017-06-01

    We establish a pointwise stability estimate for the Thomas-Fermi-von Weizsäcker (TFW) model, which demonstrates that a local perturbation of a nuclear arrangement results also in a local response in the electron density and electrostatic potential. The proof adapts the arguments for existence and uniqueness of solutions to the TFW equations in the thermodynamic limit by Catto et al. (The mathematical theory of thermodynamic limits: Thomas-Fermi type models. Oxford mathematical monographs. The Clarendon Press, Oxford University Press, New York, 1998). To demonstrate the utility of this combined locality and stability result we derive several consequences, including an exponential convergence rate for the thermodynamic limit, partition of total energy into exponentially localised site energies (and consequently, exponential locality of forces), and generalised and strengthened results on the charge neutrality of local defects.

  20. Patient-based estimation of organ dose for a population of 58 adult patients across 13 protocol categories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahbaee, Pooyan, E-mail: psahbae@ncsu.edu; Segars, W. Paul; Samei, Ehsan

    2014-07-15

    Purpose: This study aimed to provide a comprehensive patient-specific organ dose estimation across a multiplicity of computed tomography (CT) examination protocols. Methods: A validated Monte Carlo program was employed to model a common CT system (LightSpeed VCT, GE Healthcare). The organ and effective doses were estimated from 13 commonly used body and neurological CT examinations. The dose estimation was performed on 58 adult computational extended cardiac-torso phantoms (35 male, 23 female, mean age 51.5 years, mean weight 80.2 kg). The organ dose normalized by CTDI_vol (h factor) and the effective dose normalized by the dose-length product (DLP) (k factor) were calculated from the results. A mathematical model was derived for the correlation between the h and k factors and patient size across the protocols. Based on this mathematical model, an iPhone operating system (iOS) dose estimation application was designed and developed to be used as a tool to estimate dose to patients for a variety of routinely used CT examinations. Results: The organ dose results across all the protocols showed an exponential decrease with patient body size. The correlation was generally strong for organs which were fully or partially located inside the scan coverage (Pearson sample correlation coefficient r of 0.49). The correlation was weaker for organs outside the scan coverage, for which the distance between the organ and the irradiation area was a stronger predictor of dose to the organ. For body protocols, the effective dose before and after normalization by DLP decreased exponentially with increasing patient body diameter (r > 0.85). The exponential relationship between effective dose and patient body diameter was significantly weaker for neurological protocols (r < 0.41), where the trunk length was a slightly stronger predictor of effective dose (0.15 < r < 0.46). Conclusions: While the most accurate estimation of a patient dose requires specific modeling of the patient anatomy, a first-order approximation of organ and effective doses from routine CT scan protocols can be reasonably estimated using size-specific factors. Estimation accuracy is generally poor for organs outside the scan range and for neurological protocols. The dose calculator designed in this study can be used to conveniently estimate and report dose values for a patient across a multiplicity of CT scan protocols.
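
    As a sketch of the size-specific-factor idea (synthetic numbers throughout, not the study's fitted coefficients), an exponential h-factor model can be fitted to diameter data and combined with a scan's CTDI_vol to produce an organ dose estimate:

        import numpy as np
        from scipy.optimize import curve_fit

        def h_model(d, a, b):
            return a * np.exp(-b * d)        # h factor vs body diameter d (cm)

        rng = np.random.default_rng(3)
        diam = np.linspace(20, 40, 15)
        h_obs = h_model(diam, 2.0, 0.04) * (1 + 0.05 * rng.normal(size=15))
        (a, b), _ = curve_fit(h_model, diam, h_obs, p0=(1.0, 0.01))

        # Organ dose estimate for a new patient: h(d) * CTDI_vol of the scan.
        print(h_model(32.0, a, b) * 12.5)    # 12.5 mGy CTDI_vol, illustrative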

  1. Patient-specific Radiation Dose and Cancer Risk for Pediatric Chest CT

    PubMed Central

    Samei, Ehsan; Segars, W. Paul; Sturgeon, Gregory M.; Colsher, James G.; Frush, Donald P.

    2011-01-01

    Purpose: To estimate patient-specific radiation dose and cancer risk for pediatric chest computed tomography (CT) and to evaluate factors affecting dose and risk, including patient size, patient age, and scanning parameters. Materials and Methods: The institutional review board approved this study and waived informed consent. This study was HIPAA compliant. The study included 30 patients (0–16 years old), for whom full-body computer models were recently created from clinical CT data. A validated Monte Carlo program was used to estimate organ dose from eight chest protocols, representing clinically relevant combinations of bow tie filter, collimation, pitch, and tube potential. Organ dose was used to calculate effective dose and risk index (an index of total cancer incidence risk). The dose and risk estimates before and after normalization by volume-weighted CT dose index (CTDIvol) or dose–length product (DLP) were correlated with patient size and age. The effect of each scanning parameter was studied. Results: Organ dose normalized by tube current–time product or CTDIvol decreased exponentially with increasing average chest diameter. Effective dose normalized by tube current–time product or DLP decreased exponentially with increasing chest diameter. Chest diameter was a stronger predictor of dose than weight and total scan length. Risk index normalized by tube current–time product or DLP decreased exponentially with both chest diameter and age. When normalized by DLP, effective dose and risk index were independent of collimation, pitch, and tube potential (<10% variation). Conclusion: The correlations of dose and risk with patient size and age can be used to estimate patient-specific dose and risk. They can further guide the design and optimization of pediatric chest CT protocols. © RSNA, 2011 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101900/-/DC1 PMID:21467251

  2. Patient-specific radiation dose and cancer risk for pediatric chest CT.

    PubMed

    Li, Xiang; Samei, Ehsan; Segars, W Paul; Sturgeon, Gregory M; Colsher, James G; Frush, Donald P

    2011-06-01

    To estimate patient-specific radiation dose and cancer risk for pediatric chest computed tomography (CT) and to evaluate factors affecting dose and risk, including patient size, patient age, and scanning parameters. The institutional review board approved this study and waived informed consent. This study was HIPAA compliant. The study included 30 patients (0-16 years old), for whom full-body computer models were recently created from clinical CT data. A validated Monte Carlo program was used to estimate organ dose from eight chest protocols, representing clinically relevant combinations of bow tie filter, collimation, pitch, and tube potential. Organ dose was used to calculate effective dose and risk index (an index of total cancer incidence risk). The dose and risk estimates before and after normalization by volume-weighted CT dose index (CTDI(vol)) or dose-length product (DLP) were correlated with patient size and age. The effect of each scanning parameter was studied. Organ dose normalized by tube current-time product or CTDI(vol) decreased exponentially with increasing average chest diameter. Effective dose normalized by tube current-time product or DLP decreased exponentially with increasing chest diameter. Chest diameter was a stronger predictor of dose than weight and total scan length. Risk index normalized by tube current-time product or DLP decreased exponentially with both chest diameter and age. When normalized by DLP, effective dose and risk index were independent of collimation, pitch, and tube potential (<10% variation). The correlations of dose and risk with patient size and age can be used to estimate patient-specific dose and risk. They can further guide the design and optimization of pediatric chest CT protocols. http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101900/-/DC1. RSNA, 2011

  3. A fuzzy adaptive network approach to parameter estimation in cases where independent variables come from an exponential distribution

    NASA Astrophysics Data System (ADS)

    Dalkilic, Turkan Erbay; Apaydin, Aysen

    2009-11-01

    In a regression analysis, it is assumed that the observations come from a single class in a data cluster and that the simple functional relationship between the dependent and independent variables can be expressed using the general model Y = f(X) + ε. However, a data cluster may consist of a combination of observations that have different distributions, derived from different clusters. When faced with the problem of estimating a regression model for fuzzy inputs derived from different distributions, the model is termed a 'switching regression model'; here, li indicates the class number of each independent variable and p the number of independent variables [J.R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics 23 (3) (1993) 665-685; M. Michel, Fuzzy clustering and switching regression models using ambiguity and distance rejects, Fuzzy Sets and Systems 122 (2001) 363-399; E.Q. Richard, A new approach to estimating switching regressions, Journal of the American Statistical Association 67 (338) (1972) 306-310]. In this study, adaptive networks have been used to construct a model formed by gathering the obtained submodels. There are methods that suggest the class numbers of independent variables heuristically. Alternatively, a suggested validity criterion for fuzzy clustering is used to define the optimal class number of the independent variables. For the case that the independent variables have an exponential distribution, an algorithm is suggested for defining the unknown parameters of the switching regression model and for obtaining the estimated values after an optimal membership function, suitable for the exponential distribution, has been obtained.

  4. Taming the runaway problem of inflationary landscapes

    NASA Astrophysics Data System (ADS)

    Hall, Lawrence J.; Watari, Taizan; Yanagida, T. T.

    2006-05-01

    A wide variety of vacua, and their cosmological realization, may provide an explanation for the apparently anthropic choices of some parameters of particle physics and cosmology. If the probability on various parameters is weighted by volume, a flat potential for slow-roll inflation is also naturally understood, since the flatter the potential the larger the volume of the subuniverse. However, such inflationary landscapes have a serious problem, predicting an environment that makes it exponentially hard for observers to exist and giving an exponentially small probability for a moderate universe like ours. A general solution to this problem is proposed, and is illustrated in the context of inflaton decay and leptogenesis, leading to an upper bound on the reheating temperature in our subuniverse. In a particular scenario of chaotic inflation and nonthermal leptogenesis, predictions can be made for the size of CP violating phases, the rate of neutrinoless double beta decay and, in the case of theories with gauge-mediated weak-scale supersymmetry, for the fundamental scale of supersymmetry breaking.

  5. An improved cyan fluorescent protein variant useful for FRET.

    PubMed

    Rizzo, Mark A; Springer, Gerald H; Granada, Butch; Piston, David W

    2004-04-01

    Many genetically encoded biosensors use Förster resonance energy transfer (FRET) between fluorescent proteins to report biochemical phenomena in living cells. Most commonly, the enhanced cyan fluorescent protein (ECFP) is used as the donor fluorophore, coupled with one of several yellow fluorescent protein (YFP) variants as the acceptor. ECFP is used despite several spectroscopic disadvantages, namely a low quantum yield, a low extinction coefficient and a fluorescence lifetime that is best fit by a double exponential. To improve the characteristics of ECFP for FRET measurements, we used a site-directed mutagenesis approach to overcome these disadvantages. The resulting variant, which we named Cerulean (ECFP/S72A/Y145A/H148D), has a greatly improved quantum yield, a higher extinction coefficient and a fluorescence lifetime that is best fit by a single exponential. Cerulean is 2.5-fold brighter than ECFP and replacement of ECFP with Cerulean substantially improves the signal-to-noise ratio of a FRET-based sensor for glucokinase activation.

  6. φ meson production in Au + Au and p + p collisions at √s_NN = 200 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, J.; Adler, C.; Aggarwal, M.M.

    2004-06-01

    We report the STAR measurement of φ meson production in Au + Au and p + p collisions at √s_NN = 200 GeV. Using the event-mixing technique, the φ spectra and yields are obtained at midrapidity for five centrality bins in Au+Au collisions and for non-singly-diffractive p+p collisions. It is found that the φ transverse momentum distributions from Au+Au collisions are better fitted with a single-exponential while the p+p spectrum is better described by a double-exponential distribution. The measured nuclear modification factors indicate that φ production in central Au+Au collisions is suppressed relative to peripheral collisions when scaled by the number of binary collisions. The systematics versus centrality and the constant φ/K^- ratio versus beam species, centrality, and collision energy rule out kaon coalescence as the dominant mechanism for φ production.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rout, Dipak; Vijaya, R.; Centre for Lasers and Photonics, Indian Institute of Technology Kanpur, Kanpur 208016

    Well-ordered opaline photonic crystals are grown by the inward growing self-assembly method from Rhodamine B dye-doped polystyrene colloids. Subsequent to self-assembly, the crystals are infiltrated with gold nanoparticles of 40 nm diameter. Measurements of the stopband features and photoluminescence intensity from these crystals are supplemented by fluorescence decay time analysis. The fluorescence decay times from the dye-doped photonic crystals before and after the infiltration are dramatically different from each other. A lowered fluorescence decay time was observed for the gold-infiltrated crystal, along with an enhanced emission intensity. The double-exponential decay of the fluorescence from the dye-doped crystal is converted into a single-exponential decay upon the infiltration of gold nanoparticles, due to the resonant radiative process resulting from the overlap of the surface plasmon resonance with the emission spectrum. The influence of the localized surface plasmon of the gold nanoparticles on the increase in emission intensity and the decrease in decay time of the emitters is established.

  8. Structural Configuration Systems Analysis for Advanced Aircraft Fuselage Concepts

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek; Welstead, Jason R.; Quinlan, Jesse R.; Guynn, Mark D.

    2016-01-01

    Structural configuration analysis of an advanced aircraft fuselage concept is investigated. This concept is characterized by a double-bubble section fuselage with rear mounted engines. Based on lessons learned from structural systems analysis of unconventional aircraft, high-fidelity finite-element models (FEM) are developed for evaluating structural performance of three double-bubble section configurations. Structural sizing and stress analysis are applied for design improvement and weight reduction. Among the three double-bubble configurations, the double-D cross-section fuselage design was found to have a relatively lower structural weight. The structural FEM weights of these three double-bubble fuselage section concepts are also compared with several cylindrical fuselage models. Since these fuselage concepts are different in size, shape and material, the fuselage structural FEM weights are normalized by the corresponding passenger floor area for a relative comparison. This structural systems analysis indicates that an advanced composite double-D section fuselage may have a relative structural weight ratio advantage over a conventional aluminum fuselage. Ten commercial and conceptual aircraft fuselage structural weight estimates, which are empirically derived from the corresponding maximum takeoff gross weight, are also presented and compared with the FEM-based estimates for possible correlation. A conceptual full vehicle FEM model with a double-D fuselage is also developed for preliminary structural analysis and weight estimation.

  9. Estimating acreage by double sampling using LANDSAT data

    NASA Technical Reports Server (NTRS)

    Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)

    1982-01-01

    Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling which included the fictional perfect procedure results in a more cost-effective combination when it is used with the lower-cost/higher-variance representative of the existing procedures.
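
    A generic two-phase (double) sampling sketch, not the LACIE procedures themselves: a cheap but biased classification is run on many segments, an accurate measurement on a small subsample, and a regression estimator combines the two. All numbers are invented.

        import numpy as np

        rng = np.random.default_rng(7)
        truth = rng.uniform(0.2, 0.6, 500)             # true crop proportion
        cheap = truth + rng.normal(0.05, 0.04, 500)    # biased classifier output

        sub = rng.choice(500, size=50, replace=False)  # expensive subsample
        slope = np.polyfit(cheap[sub], truth[sub], 1)[0]
        est = truth[sub].mean() + slope * (cheap.mean() - cheap[sub].mean())
        print(est, truth.mean())   # regression estimator vs true mean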

  10. Tidal Volume Estimation Using the Blanket Fractal Dimension of the Tracheal Sounds Acquired by Smartphone

    PubMed Central

    Reljin, Natasa; Reyes, Bersain A.; Chon, Ki H.

    2015-01-01

    In this paper, we propose the use of blanket fractal dimension (BFD) to estimate the tidal volume from smartphone-acquired tracheal sounds. We collected tracheal sounds with a Samsung Galaxy S4 smartphone, from five (N = 5) healthy volunteers. Each volunteer performed the experiment six times; first to obtain linear and exponential fitting models, and then to fit new data onto the existing models. Thus, the total number of recordings was 30. The estimated volumes were compared to the true values, obtained with a Respitrace system, which was considered as a reference. Since Shannon entropy (SE) is frequently used as a feature in tracheal sound analyses, we estimated the tidal volume from the same sounds by using SE as well. The evaluation of the performed estimation, using BFD and SE methods, was quantified by the normalized root-mean-squared error (NRMSE). The results show that the BFD outperformed the SE (at least twice smaller NRMSE was obtained). The smallest NRMSE error of 15.877% ± 9.246% (mean ± standard deviation) was obtained with the BFD and exponential model. In addition, it was shown that the fitting curves calculated during the first day of experiments could be successfully used for at least the five following days. PMID:25923929

  11. Tidal volume estimation using the blanket fractal dimension of the tracheal sounds acquired by smartphone.

    PubMed

    Reljin, Natasa; Reyes, Bersain A; Chon, Ki H

    2015-04-27

    In this paper, we propose the use of blanket fractal dimension (BFD) to estimate the tidal volume from smartphone-acquired tracheal sounds. We collected tracheal sounds with a Samsung Galaxy S4 smartphone, from five (N = 5) healthy volunteers. Each volunteer performed the experiment six times; first to obtain linear and exponential fitting models, and then to fit new data onto the existing models. Thus, the total number of recordings was 30. The estimated volumes were compared to the true values, obtained with a Respitrace system, which was considered as a reference. Since Shannon entropy (SE) is frequently used as a feature in tracheal sound analyses, we estimated the tidal volume from the same sounds by using SE as well. The evaluation of the performed estimation, using BFD and SE methods, was quantified by the normalized root-mean-squared error (NRMSE). The results show that the BFD outperformed the SE (at least twice smaller NRMSE was obtained). The smallest NRMSE error of 15.877% ± 9.246% (mean ± standard deviation) was obtained with the BFD and exponential model. In addition, it was shown that the fitting curves calculated during the first day of experiments could be successfully used for at least the five following days.

  12. A Novel Method for Age Estimation in Solar-Type Stars Through GALEX FUV Magnitudes

    NASA Astrophysics Data System (ADS)

    Ho, Kelly; Subramonian, Arjun; Smith, Graeme; Shouru Shieh

    2018-01-01

    Utilizing an inverse association known to exist between Galaxy Evolution Explorer (GALEX) far ultraviolet (FUV) magnitudes and the chromospheric activity of F, G, and K dwarfs, we explored a method of age estimation in solar-type stars through GALEX FUV magnitudes. Sample solar-type star data were collected from refereed publications and filtered by B-V and absolute visual magnitude to ensure similarities in temperature and luminosity to the Sun. We determined FUV-B and calculated a residual index Q for all the stars, using the temperature-induced upper bound on FUV-B as the fiducial. Plotting current age estimates for the stars against Q, we discovered a strong and significant association between the variables. By applying a log-linear transformation to the data to produce a strong correlation between Q and loge Age, we confirmed the association between Q and age to be exponential. Thus, least-squares regression was used to generate an exponential model relating Q to age in solar-type stars, which can be used by astronomers. The Q-method of stellar age estimation is simple and more efficient than existing spectroscopic methods and has applications to galactic archaeology and stellar chemical composition analysis.
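
    The log-linear fitting step described above amounts to ordinary least squares on ln(Age) = ln(a) + bQ. A sketch with synthetic Q values and ages (the real calibration uses published stellar ages):

        import numpy as np

        rng = np.random.default_rng(8)
        Q = rng.uniform(0.0, 2.0, 40)
        age = 1.5 * np.exp(1.1 * Q) * np.exp(rng.normal(0, 0.1, 40))   # Gyr

        b, ln_a = np.polyfit(Q, np.log(age), 1)
        print(np.exp(ln_a), b)          # recovers a ~ 1.5, b ~ 1.1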

  13. Constraining estimates of global soil respiration by quantifying sources of variability.

    PubMed

    Jian, Jinshi; Steele, Meredith K; Thomas, R Quinn; Day, Susan D; Hodges, Steven C

    2018-05-10

    Quantifying global soil respiration (R_SG) and its response to temperature change are critical for predicting the turnover of terrestrial carbon stocks and their feedbacks to climate change. Currently, estimates of R_SG range from 68 to 98 Pg C year^-1, causing considerable uncertainty in the global carbon budget. We argue the source of this variability lies in the upscaling assumptions regarding the model format, data timescales, and precipitation component. To quantify the variability and constrain R_SG, we developed R_SG models using Random Forest and exponential models, and used different timescales (daily, monthly, and annual) of soil respiration (R_S) and climate data to predict R_SG. From the resulting R_SG estimates (range = 66.62-100.72 Pg), we calculated the variability associated with each assumption. Among model formats, using monthly R_S data rather than annual data decreased R_SG by 7.43-9.46 Pg; however, R_SG calculated from daily R_S data was only 1.83 Pg lower than the R_SG from monthly data. Using mean annual precipitation and temperature data instead of monthly data caused +4.84 and -4.36 Pg C differences, respectively. If the timescale of R_S data is held constant, R_SG estimated by the first-order exponential model (93.2 Pg) was greater than the Random Forest (78.76 Pg) or second-order exponential (76.18 Pg) estimates. These results highlight the importance of variation at subannual timescales for upscaling to R_SG. The results indicate R_SG is lower than in recent papers and the current benchmark for land models (98 Pg C year^-1), and thus may change the predicted rates of terrestrial carbon turnover and the carbon-to-climate feedback as global temperatures rise. © 2018 John Wiley & Sons Ltd.

  14. Characterizing the reproduction number of epidemics with early subexponential growth dynamics

    PubMed Central

    Viboud, Cécile; Simonsen, Lone; Moghadas, Seyed M.

    2016-01-01

    Early estimates of the transmission potential of emerging and re-emerging infections are increasingly used to inform public health authorities on the level of risk posed by outbreaks. Existing methods to estimate the reproduction number generally assume exponential growth in case incidence in the first few disease generations, before susceptible depletion sets in. In reality, outbreaks can display subexponential (i.e. polynomial) growth in the first few disease generations, owing to clustering in contact patterns, spatial effects, inhomogeneous mixing, reactive behaviour changes or other mechanisms. Here, we introduce the generalized growth model to characterize the early growth profile of outbreaks and estimate the effective reproduction number, with no need for explicit assumptions about the shape of epidemic growth. We demonstrate this phenomenological approach using analytical results and simulations from mechanistic models, and provide validation against a range of empirical disease datasets. Our results suggest that subexponential growth in the early phase of an epidemic is the rule rather than the exception. Mechanistic simulations show that slight modifications to the classical susceptible–infectious–removed model result in subexponential growth, and in turn a rapid decline in the reproduction number within three to five disease generations. For empirical outbreaks, the generalized-growth model consistently outperforms the exponential model for a variety of directly and indirectly transmitted disease datasets (pandemic influenza, measles, smallpox, bubonic plague, cholera, foot-and-mouth disease, HIV/AIDS and Ebola), with model estimates supporting subexponential growth dynamics. The rapid decline in effective reproduction number predicted by analytical results and observed in real and synthetic datasets within three to five disease generations contrasts with the expectation of an invariant reproduction number in epidemics obeying exponential growth. The generalized-growth concept also provides a compelling argument for the unexpected extinction of certain emerging disease outbreaks during the early ascending phase. Overall, our approach promotes a more reliable and data-driven characterization of the early epidemic phase, which is important for accurate estimation of the reproduction number and prediction of disease impact. PMID:27707909
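
    The generalized growth model is commonly written dC/dt = r C(t)^p, with the deceleration parameter p interpolating between constant incidence (p = 0) and exponential growth (p = 1); for p < 1 it admits the closed-form solution used below. Parameter values are illustrative.

        import numpy as np

        def ggm_cumulative(t, r, p, c0):
            """Cumulative case count under the generalized growth model."""
            if np.isclose(p, 1.0):
                return c0 * np.exp(r * t)                   # exponential limit
            return (c0 ** (1 - p) + (1 - p) * r * t) ** (1 / (1 - p))

        t = np.arange(0, 30)
        print(ggm_cumulative(t, r=0.5, p=0.7, c0=1.0))      # subexponential
        print(ggm_cumulative(t, r=0.5, p=1.0, c0=1.0))      # exponential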

  15. CO2 flux determination by closed-chamber methods can be seriously biased by inappropriate application of linear regression

    NASA Astrophysics Data System (ADS)

    Kutzbach, L.; Schneider, J.; Sachs, T.; Giebels, M.; Nykänen, H.; Shurpali, N. J.; Martikainen, P. J.; Alm, J.; Wilmking, M.

    2007-11-01

    Closed (non-steady state) chambers are widely used for quantifying carbon dioxide (CO2) fluxes between soils or low-stature canopies and the atmosphere. It is well recognised that covering a soil or vegetation by a closed chamber inherently disturbs the natural CO2 fluxes by altering the concentration gradients between the soil, the vegetation and the overlying air. Thus, the driving factors of CO2 fluxes are not constant during the closed chamber experiment, and no linear increase or decrease of CO2 concentration over time within the chamber headspace can be expected. Nevertheless, linear regression has been applied for calculating CO2 fluxes in many recent, partly influential, studies. This approach has been justified by keeping the closure time short and assuming the concentration change over time to be in the linear range. Here, we test whether the application of linear regression is really appropriate for estimating CO2 fluxes using closed chambers over short closure times and whether the application of nonlinear regression is necessary. We developed a nonlinear exponential regression model from diffusion and photosynthesis theory. This exponential model was tested with four different datasets of CO2 flux measurements (total number: 1764) conducted at three peatland sites in Finland and a tundra site in Siberia. Thorough analyses of residuals demonstrated that linear regression was frequently not appropriate for the determination of CO2 fluxes by closed-chamber methods, even if closure times were kept short. The developed exponential model was well suited for nonlinear regression of the concentration evolution c(t) in the chamber headspace and estimation of the initial CO2 fluxes at closure time for the majority of experiments. However, a rather large percentage of the exponential regression functions showed curvatures not consistent with the theoretical model, which is considered to be caused by violations of the underlying model assumptions. The effects of turbulence and of pressure disturbances caused by chamber deployment are especially suspected of having caused these unexplained curvatures. CO2 flux estimates by linear regression can be as low as 40% of the flux estimates of exponential regression for closure times of only two minutes. The degree of underestimation increased with increasing CO2 flux strength and was dependent on soil and vegetation conditions, which can disturb not only the quantitative but also the qualitative evaluation of CO2 flux dynamics. The underestimation effect by linear regression was observed to be different for CO2 uptake and release situations, which can lead to stronger bias in the daily, seasonal and annual CO2 balances than in the individual fluxes. To avoid serious bias of CO2 flux estimates based on closed chamber experiments, we suggest further tests using published datasets and recommend the use of nonlinear regression models for future closed chamber studies.
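
    The size of the bias is easy to reproduce with a saturating-exponential headspace model of the general diffusion-theory form c(t) = c_s - (c_s - c_0)exp(-κt), whose initial slope κ(c_s - c_0) gives the undisturbed flux; the exact model form and all numbers below are illustrative, not the authors'.

        import numpy as np
        from scipy.optimize import curve_fit

        def c_exp(t, c0, cs, kappa):
            return cs - (cs - c0) * np.exp(-kappa * t)

        t = np.linspace(0, 120, 25)                  # 2-minute closure, seconds
        rng = np.random.default_rng(4)
        c = c_exp(t, 400.0, 550.0, 0.02) + rng.normal(0, 0.5, t.size)

        (c0, cs, kappa), _ = curve_fit(c_exp, t, c, p0=(400, 600, 0.01))
        flux_exp = kappa * (cs - c0)                 # dc/dt at t = 0
        flux_lin = np.polyfit(t, c, 1)[0]            # linear-regression slope
        print(flux_exp, flux_lin)   # the linear slope underestimates the flux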

  16. Growth and differentiation of human lens epithelial cells in vitro on matrix

    NASA Technical Reports Server (NTRS)

    Blakely, E. A.; Bjornstad, K. A.; Chang, P. Y.; McNamara, M. P.; Chang, E.; Aragon, G.; Lin, S. P.; Lui, G.; Polansky, J. R.

    2000-01-01

    PURPOSE: To characterize the growth and maturation of nonimmortalized human lens epithelial (HLE) cells grown in vitro. METHODS: HLE cells, established from 18-week prenatal lenses, were maintained on bovine corneal endothelial (BCE) extracellular matrix (ECM) in medium supplemented with basic fibroblast growth factor (FGF-2). The identity, growth, and differentiation of the cultures were characterized by karyotyping, cell morphology, and growth kinetics studies, reverse transcription-polymerase chain reaction (RT-PCR), immunofluorescence, and Western blot analysis. RESULTS: HLE cells had a male, human diploid (2N = 46) karyotype. The population-doubling time of exponentially growing cells was 24 hours. After 15 days in culture, cell morphology changed, and lentoid formation was evident. Reverse transcription-polymerase chain reaction (RT-PCR) indicated expression of alphaA- and betaB2-crystallin, fibroblast growth factor receptor 1 (FGFR1), and major intrinsic protein (MIP26) in exponential growth. Western analyses of protein extracts show positive expression of three immunologically distinct classes of crystallin proteins (alphaA-, alphaB-, and betaB2-crystallin) with time in culture. By Western blot analysis, expression of p57(KIP2), a known marker of terminally differentiated fiber cells, was detectable in exponential cultures, and levels increased after confluence. MIP26 and gamma-crystallin protein expression was detected in confluent cultures, by using immunofluorescence, but not in exponentially growing cells. CONCLUSIONS: HLE cells can be maintained for up to 4 months on ECM derived from BCE cells in medium containing FGF-2. With time in culture, the cells demonstrate morphologic characteristics of, and express protein markers for, lens fiber cell differentiation. This in vitro model will be useful for investigations of radiation-induced cataractogenesis and other studies of lens toxicity.

  17. Probability distribution functions for intermittent scrape-off layer plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-03-01

    A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal, which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
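
    A toy version of characteristic-function-based estimation: for exponentially distributed amplitudes the stationary density of such a shot-noise process is a Gamma distribution, whose characteristic function is (1 - iAu)^(-γ). The sketch fits that closed form to the empirical characteristic function of synthetic Gamma samples; it is a simplified stand-in for the full analysis.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(5)
        x = rng.gamma(shape=2.5, scale=1.2, size=50_000)   # synthetic data

        u = np.linspace(0.05, 2.0, 40)
        ecf = np.array([np.exp(1j * ui * x).mean() for ui in u])

        def resid(params):
            g, A = params
            model = (1 - 1j * A * u) ** (-g)
            return np.concatenate([(model - ecf).real, (model - ecf).imag])

        fit = least_squares(resid, x0=[1.0, 1.0])
        print(fit.x)   # close to (2.5, 1.2)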

  18. A method for the estimation of dual transmissivities from slug tests

    NASA Astrophysics Data System (ADS)

    Wolny, Filip; Marciniak, Marek; Kaczmarek, Mariusz

    2018-03-01

    Aquifer homogeneity is usually assumed when interpreting the results of pumping and slug tests, although aquifers are essentially heterogeneous. The aim of this study is to present a method of determining the transmissivities of dual-permeability water-bearing formations based on slug tests such as the pressure-induced permeability test. A bi-exponential rate-of-rise curve is typically observed during many of these tests conducted in heterogeneous formations. The work involved analyzing curves deviating from the exponential rise recorded at the Belchatow Lignite Mine in central Poland, where a significant number of permeability tests have been conducted. In most cases, bi-exponential movement was observed in piezometers with a screen installed in layered sediments, each with a different hydraulic conductivity, or in fissured rock. The possibility to identify the flow properties of these geological formations was analyzed. For each piezometer installed in such formations, a set of two transmissivity values was calculated piecewise based on the interpretation algorithm of the pressure-induced permeability test—one value for the first (steeper) part of the obtained rate-of-rise curve, and a second value for the latter part of the curve. The results of transmissivity estimation for each piezometer are shown. The discussion presents the limitations of the interpretational method and suggests future modeling plans.

  19. Estimation of the light field inside photosynthetic microorganism cultures through Mittag-Leffler functions at depleted light conditions

    NASA Astrophysics Data System (ADS)

    Fuente, David; Lizama, Carlos; Urchueguía, Javier F.; Conejero, J. Alberto

    2018-01-01

    Light attenuation within suspensions of photosynthetic microorganisms has been widely described by the Lambert-Beer equation. However, at depths where most of the light has been absorbed by the cells, light decay deviates from exponential behaviour and shows a lower attenuation than a purely exponential fall. This discrepancy can be modelled through the Mittag-Leffler function, extending the Lambert-Beer law via a tuning parameter α that takes the attenuation process into account. In this work, we describe a fractional Lambert-Beer law to estimate light attenuation within cultures of the model organism Synechocystis sp. PCC 6803. Indeed, we benchmark the measured light field inside cultures of two different Synechocystis strains, namely the wild-type and the antenna mutant strain called Olive, at five different cell densities, against our in silico results. The Mittag-Leffler hyper-parameter α that best fits the data is 0.995, close to the exponential case. One of the most striking results to emerge from this work is that, unlike prior literature on the subject, it provides experimental evidence on the validity of fractional calculus for determining the light field. We show that by applying the fractional Lambert-Beer law to describe light attenuation, we are able to properly model light decay in suspensions of photosynthetic microorganisms.
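
    The Mittag-Leffler function can be evaluated from its power series E_α(z) = Σ_k z^k / Γ(αk + 1) for moderate arguments; a common fractional generalization of the Lambert-Beer law then replaces exp(-kz) by E_α(-(kz)^α). Whether this is the paper's exact parametrisation is an assumption of the sketch.

        import numpy as np
        from scipy.special import gamma as Gamma

        def mittag_leffler(z, alpha, n_terms=120):
            """Truncated series for E_alpha(z); fine for moderate |z|."""
            k = np.arange(n_terms)
            return np.sum(np.power.outer(z, k) / Gamma(alpha * k + 1), axis=-1)

        depth = np.linspace(0, 5, 6)
        k_att = 1.0
        # alpha = 1 recovers exp(-k*z); alpha slightly below 1 decays slower.
        print(mittag_leffler(-(k_att * depth) ** 1.0, 1.0))
        print(mittag_leffler(-(k_att * depth) ** 0.995, 0.995))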

  20. Demand forecasting of electricity in Indonesia with limited historical data

    NASA Astrophysics Data System (ADS)

    Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif

    2018-03-01

    Demand forecasting of electricity is an important activity for electricity providers, who need a picture of future demand. Electricity demand can be predicted using time series models. In this paper, the double moving average model, Holt’s exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
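    For concreteness, here is a minimal sketch of Holt's (double) exponential smoothing, one of the three candidate methods; the demand series and smoothing constants are made up, and note the paper found the grey model GM(1,1), not Holt's method, to perform best on its data.

    ```python
    import numpy as np

    def holt(y, alpha=0.5, beta=0.3, horizon=3):
        # Holt's linear (double) exponential smoothing: level + trend,
        # suited to short series with a persistent trend.
        level, trend = y[0], y[1] - y[0]
        for obs in y[1:]:
            prev_level = level
            level = alpha * obs + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
        return np.array([level + h * trend for h in range(1, horizon + 1)])

    # Illustrative annual electricity demand (TWh); values are made up.
    demand = np.array([150.0, 158.1, 169.8, 182.2, 187.4, 198.6])
    print(holt(demand))   # three-year-ahead forecasts
    ```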

  1. Some properties of the Catalan-Qi function related to the Catalan numbers.

    PubMed

    Qi, Feng; Mahmoud, Mansour; Shi, Xiao-Ting; Liu, Fang-Fang

    2016-01-01

    In the paper, the authors find some properties of the Catalan numbers, the Catalan function, and the Catalan-Qi function which is a generalization of the Catalan numbers. Concretely speaking, the authors present a new expression, asymptotic expansions, integral representations, logarithmic convexity, complete monotonicity, minimality, logarithmically complete monotonicity, a generating function, and inequalities of the Catalan numbers, the Catalan function, and the Catalan-Qi function. As by-products, an exponential expansion and a double inequality for the ratio of two gamma functions are derived.

  2. Is a matrix exponential specification suitable for the modeling of spatial correlation structures?

    PubMed Central

    Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha

    2018-01-01

    This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials, comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375
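    The paper's Bayesian implementation is not reproduced here, but a concentrated-likelihood sketch illustrates why MESS is computationally attractive: with a zero-diagonal weight matrix W, tr(W) = 0, so the Jacobian |e^(alpha*W)| = e^(alpha*tr(W)) = 1 and maximum likelihood reduces to minimizing an ordinary least-squares residual sum over alpha. All data below are synthetic.

    ```python
    import numpy as np
    from scipy.linalg import expm
    from scipy.optimize import minimize_scalar

    # MESS model: e^{alpha W} y = X beta + eps.
    rng = np.random.default_rng(0)
    n = 50
    W = rng.random((n, n)); np.fill_diagonal(W, 0.0)
    W /= W.sum(axis=1, keepdims=True)          # row-standardized weights
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    beta_true, alpha_true = np.array([1.0, 2.0]), -0.7
    y = np.linalg.solve(expm(alpha_true * W), X @ beta_true + 0.1 * rng.normal(size=n))

    def sse(alpha):
        # For fixed alpha, beta is an OLS problem on the transformed response.
        Sy = expm(alpha * W) @ y
        beta, *_ = np.linalg.lstsq(X, Sy, rcond=None)
        return ((Sy - X @ beta) ** 2).sum()

    alpha_hat = minimize_scalar(sse, bounds=(-2, 2), method="bounded").x
    print("alpha estimate:", alpha_hat)   # should be near -0.7
    ```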

  3. Post-outburst X-Ray Flux and Timing Evolution of Swift J1822.3-1606

    NASA Astrophysics Data System (ADS)

    Scholz, P.; Ng, C.-Y.; Livingstone, M. A.; Kaspi, V. M.; Cumming, A.; Archibald, R. F.

    2012-12-01

    Swift J1822.3-1606 was discovered on 2011 July 14 by the Swift Burst Alert Telescope following the detection of several bursts. The source was found to have a period of 8.4377 s and was identified as a magnetar. Here we present a phase-connected timing analysis and the evolution of the flux and spectral properties using Rossi X-ray Timing Explorer, Swift, and Chandra observations. We measure a spin frequency of 0.1185154343(8) s^-1 and a frequency derivative of -4.3 ± 0.3 × 10^-15 s^-2 at MJD 55761.0, in a timing analysis that includes significant non-zero second and third frequency derivatives that we attribute to timing noise. This corresponds to an estimated spin-down inferred dipole magnetic field of B ~ 5 × 10^13 G, consistent with previous estimates though still possibly affected by unmodeled noise. We find that the post-outburst 1-10 keV flux evolution can be characterized by a double-exponential decay with decay timescales of 15.5 ± 0.5 and 177 ± 14 days. We also fit the light curve with a crustal cooling model, which suggests that the cooling results from heat injection into the outer crust. We find that the hardness-flux correlation observed in magnetar outbursts also characterizes the outburst of Swift J1822.3-1606. We compare the properties of Swift J1822.3-1606 with those of other magnetars and their outbursts.
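    A double-exponential decay of this kind is readily fit by nonlinear least squares. The sketch below generates synthetic flux data from the reported timescales (15.5 and 177 days) and recovers them; the quiescent level, amplitudes, and noise are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def double_exp(t, f_q, a1, tau1, a2, tau2):
        # Post-outburst flux: quiescent level plus two exponential decay terms.
        return f_q + a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

    t = np.linspace(0, 400, 80)   # days since outburst
    rng = np.random.default_rng(2)
    flux = double_exp(t, 0.1, 1.0, 15.5, 0.5, 177.0) * (1 + 0.03 * rng.normal(size=t.size))

    p0 = [0.1, 1.0, 10.0, 0.5, 100.0]
    p, _ = curve_fit(double_exp, t, flux, p0=p0)
    print("decay timescales (days):", sorted([p[2], p[4]]))
    ```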

  4. Discrete sudden perturbation theory for inelastic scattering. I. Quantum and semiclassical treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cross, R.J.

    1985-12-01

    A double perturbation theory is constructed to treat rotationally and vibrationally inelastic scattering. It uses both the elastic scattering from the spherically averaged potential and the infinite-order sudden (IOS) approximation as the unperturbed solutions. First, a standard perturbation expansion is done to express the radial wave functions in terms of the elastic wave functions. The resulting coupled equations are transformed to the discrete-variable representation where the IOS equations are diagonal. Then, the IOS solutions are removed from the equations, which are solved by an exponential perturbation approximation. The results for Ar+N2 are very much more accurate than the IOS and somewhat more accurate than a straight first-order exponential perturbation theory. The theory is then converted into a semiclassical, time-dependent form by using the WKB approximation. The result is an integral of the potential times a slowly oscillating factor over the classical trajectory. A method of interpolating the result is given so that the calculation is done at the average velocity for a given transition. With this procedure, the semiclassical version of the theory is more accurate than the quantum version and very much faster. Calculations on Ar+N2 show the theory to be much more accurate than the infinite-order sudden (IOS) approximation and the exponential time-dependent perturbation theory.

  5. Double-survey estimates of bald eagle populations in Oregon

    USGS Publications Warehouse

    Anthony, R.G.; Garrett, Monte G.; Isaacs, F.B.

    1999-01-01

    The literature on abundance of birds of prey is almost devoid of population estimates with statistical rigor. Therefore, we surveyed bald eagle (Haliaeetus leucocephalus) populations on the Crooked and lower Columbia rivers of Oregon and used the double-survey method to estimate populations and sighting probabilities for different survey methods (aerial, boat, vehicle) and bald eagle ages (adults vs. subadults). Sighting probabilities were consistently 20%. The results revealed variable and negative bias (percent relative bias = -9 to -70%) of direct counts and emphasized the importance of estimating populations where some measure of precision and ability to conduct inference tests are available. We recommend use of the double-survey method to estimate abundance of bald eagle populations and other raptors in open habitats.

  6. Parameter estimation by decoherence in the double-slit experiment

    NASA Astrophysics Data System (ADS)

    Matsumura, Akira; Ikeda, Taishi; Kukita, Shingo

    2018-06-01

    We discuss a parameter estimation problem using quantum decoherence in the double-slit interferometer. We consider a particle coupled to a massive scalar field after the particle passes through the double slit, and solve the dynamics non-perturbatively in the coupling by the WKB approximation. This allows us to analyze an estimation problem which cannot be treated by the master-equation approach used in quantum-probe research. In this model, the scalar field reduces the interference fringes of the particle, and the fringe pattern depends on the field mass and coupling. To evaluate the contrast and the estimation precision obtained from the pattern, we introduce the interferometric visibility and the Fisher information matrix of the field mass and coupling. For the fringe pattern observed on the distant screen, we derive a simple relation between the visibility and the Fisher matrix. Also, focusing on the estimation precision of the mass, we find that the Fisher information characterizes the wave-particle duality in the double-slit interferometer.

  7. Comparison of estimation techniques for a forest inventory in which double sampling for stratification is used

    Treesearch

    Michael S. Williams

    2001-01-01

    A number of different estimators can be used when forest inventory plots cover two or more distinctly different condition classes. In this article the properties of two approximate Horvitz- Thompson (HT) estimators, a ratio of means (RM), and a mean of ratios (MR) estimator are explored in the framework of double sampling for stratification. Relevant theoretical...

  8. A modified exponential behavioral economic demand model to better describe consumption data.

    PubMed

    Koffarnus, Mikhail N; Franck, Christopher T; Stein, Jeffrey S; Bickel, Warren K

    2015-12-01

    Behavioral economic demand analyses that quantify the relationship between the consumption of a commodity and its price have proven useful in studying the reinforcing efficacy of many commodities, including drugs of abuse. An exponential equation proposed by Hursh and Silberberg (2008) has proven useful in quantifying the dissociable components of demand intensity and demand elasticity, but is limited as an analysis technique by the inability to correctly analyze consumption values of zero. We examined an exponentiated version of this equation that retains all the beneficial features of the original Hursh and Silberberg equation, but can accommodate consumption values of zero and improves its fit to the data. In Experiment 1, we compared the modified equation with the unmodified equation under different treatments of zero values in cigarette consumption data collected online from 272 participants. We found that the unmodified equation produces different results depending on how zeros are treated, while the exponentiated version incorporates zeros into the analysis, accounts for more variance, and is better able to estimate actual unconstrained consumption as reported by participants. In Experiment 2, we simulated 1,000 datasets with demand parameters known a priori and compared the equation fits. Results indicated that the exponentiated equation was better able to replicate the true values from which the test data were simulated. We conclude that an exponentiated version of the Hursh and Silberberg equation provides better fits to the data, is able to fit all consumption values including zero, and more accurately produces true parameter values. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
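    As a sketch of the key property of the exponentiated form (consumption enters in linear rather than log units, so zeros are admissible), the code below fits Q = Q0 * 10^(k*(exp(-alpha*Q0*C) - 1)) to made-up consumption data. Fixing the span parameter k instead of fitting it is one common choice, not necessarily the paper's.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def exponentiated_demand(c, q0, alpha, k=2.0):
        # Exponentiated demand equation: consumption in linear (not log) units,
        # so zero-consumption observations are admissible data points.
        return q0 * 10 ** (k * (np.exp(-alpha * q0 * c) - 1))

    price = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
    consumption = np.array([20.0, 19.0, 18.0, 15.0, 9.0, 3.0, 1.0, 0.0])  # made up

    p, _ = curve_fit(exponentiated_demand, price, consumption, p0=[20.0, 0.01])
    q0_hat, alpha_hat = p
    print(f"demand intensity Q0={q0_hat:.1f}, elasticity alpha={alpha_hat:.4f}")
    ```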

  9. Pressure resistance of cold-shocked Escherichia coli O157:H7 in ground beef, beef gravy and peptone water.

    PubMed

    Baccus-Taylor, G S H; Falloon, O C; Henry, N

    2015-06-01

    (i) To study the effects of cold shock on Escherichia coli O157:H7 cells. (ii) To determine if cold-shocked E. coli O157:H7 cells at stationary and exponential phases are more pressure-resistant than their non-cold-shocked counterparts. (iii) To investigate the baro-protective role of growth media (0·1% peptone water, beef gravy and ground beef). Quantitative estimates of lethality and sublethal injury were made using the differential plating method. There were no significant differences (P > 0·05) in the number of cells killed; cold-shocked or non-cold-shocked. Cells grown in ground beef (stationary and exponential phases) experienced lowest death compared with peptone water and beef gravy. Cold-shock treatment increased the sublethal injury to cells cultured in peptone water (stationary and exponential phases) and ground beef (exponential phase), but decreased the sublethal injury to cells in beef gravy (stationary phase). Cold shock did not confer greater resistance to stationary or exponential phase cells pressurized in peptone water, beef gravy or ground beef. Ground beef had the greatest baro-protective effect. Real food systems should be used in establishing food safety parameters for high-pressure treatments; micro-organisms are less resistant in model food systems, the use of which may underestimate the organisms' resistance. © 2015 The Society for Applied Microbiology.

  10. Profiles of lead in urban dust and the effect of the distance to multi-industry in an old heavy industry city in China.

    PubMed

    Yu, Yang; Li, Yingxia; Li, Ben; Shen, Zhenyao; Stenstrom, Michael K

    2017-03-01

    Lead (Pb) concentration in urban dust is often higher than background concentrations and can result in a wide range of health risks to local communities. To understand Pb distribution in urban dust and how multi-industrial activity affects Pb concentration, 21 sampling sites within the heavy industry city of Jilin, China, were analyzed for Pb concentration. Pb concentrations of all 21 urban dust samples from the Jilin City Center were higher than the background concentration for soil in Jilin Province. The analyses show that distance to industry is an important parameter determining health risks associated with Pb in urban dust. The Pb concentration showed an exponential decrease with increasing distance from industry. Both maximum likelihood estimation and Bayesian analysis were used to estimate the exponential relationship between Pb concentration and distance to multi-industry areas. We found that Bayesian analysis was a better method with less uncertainty for estimating Pb dust concentrations based on their distance to multi-industry, and this approach is recommended for further study. Copyright © 2016. Published by Elsevier Inc.
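    A minimal maximum-likelihood sketch of the exponential distance-decay fit, assuming Gaussian errors and made-up concentrations; the Bayesian alternative the authors favor would place priors on the same parameters and sample a posterior instead of optimizing.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Maximum likelihood fit of Pb(d) = a*exp(-b*d) + c; data are made up.
    d = np.array([0.2, 0.5, 1.0, 2.0, 3.0, 5.0, 8.0])               # km to industry
    pb = np.array([410.0, 330.0, 240.0, 150.0, 110.0, 75.0, 60.0])  # mg/kg

    def neg_log_lik(p):
        # Gaussian negative log-likelihood (constant terms dropped).
        a, b, c, log_sigma = p
        mu = a * np.exp(-b * d) + c
        sigma = np.exp(log_sigma)
        return 0.5 * np.sum(((pb - mu) / sigma) ** 2) + d.size * log_sigma

    fit = minimize(neg_log_lik, x0=[300.0, 0.5, 50.0, 3.0], method="Nelder-Mead")
    print("a, b, c:", fit.x[:3])
    ```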

  11. Effect of temperature on life history parameters of adult Culicoides sonorensis (Diptera: Ceratopogonidae) in relation to geographic origin and vectorial capacity for bluetongue virus.

    PubMed

    Lysyk, T J; Danyk, T

    2007-09-01

    The effect of temperature on survival, oviposition, gonotrophic development, and a life history factor of vectorial capacity was examined in adult Culicoides sonorensis (Wirth & Jones) (Diptera: Ceratopogonidae) that originated from two geographic locations. Flies originating from the United States (Colorado) had slightly reduced survival after a bloodmeal compared with wild flies collected in southern Alberta (AB), Canada. Survival of AB flies declined in a curvilinear manner with temperature, whereas survival of U.S. flies showed a linear response to temperature. The survivorship curve of the AB flies more closely followed a Weibull distribution than an exponential, indicating survival was age-dependent. Survivorship of the U.S. flies followed an exponential distribution. Females from both sources laid similar numbers of eggs throughout their life. The first eggs were laid by females from both sources at 31.9 degree-days above 9.3°C (DD9.3). Dissections of blood-fed flies reared at various temperatures indicated that flies from both sources were 90% gravid at 32 DD9.3. Relationships among temperature and life history components of vectorial capacity were similar among flies from the two sources and indicated that vectorial capacity would be approximately 1.8-2.6-fold greater in a southern U.S. climate compared with southwestern Canada due solely to the effects of temperature on the life history of C. sonorensis. Using life history estimates derived from the Weibull model had little effect on estimating vectorial capacity, whereas using estimates derived from the exponential model slightly overestimated vectorial capacity.

  12. Burden of Type 2 Diabetes in Mexico: Past, Current and Future Prevalence and Incidence Rates

    PubMed Central

    Meza, Rafael; Barrientos-Gutierrez, Tonatiuh; Rojas-Martinez, Rosalba; Reynoso-Noverón, Nancy; Palacio-Mejia, Lina Sofia; Lazcano-Ponce, Eduardo; Hernández-Ávila, Mauricio

    2015-01-01

    Introduction: Mexico diabetes prevalence has increased dramatically in recent years. However, no national incidence estimates exist, hampering the assessment of diabetes trends and precluding the development of burden of disease analyses to inform public health policy decision-making. Here we provide evidence regarding current magnitude of diabetes in Mexico and its future trends. Methods: We used data from the Mexico National Health and Nutrition Survey, and age-period-cohort models to estimate prevalence and incidence of self-reported diagnosed diabetes by age, sex, calendar-year (1960–2012), and birth-cohort (1920–1980). We project future rates under three alternative incidence scenarios using demographic projections of the Mexican population from 2010–2050 and a Multi-cohort Diabetes Markov Model. Results: Adult (ages 20+) diagnosed diabetes prevalence in Mexico increased from 7% to 8.9% from 2006 to 2012. Diabetes prevalence increases with age, peaking around ages 65–68 to then decrease. Age-specific incidence follows similar patterns, but peaks around ages 57–59. We estimate that diagnosed diabetes incidence increased exponentially during 1960–2012, roughly doubling every 10 years. Projected rates under three age-specific incidence scenarios suggest diabetes prevalence among adults (ages 20+) may reach 13.7–22.5% by 2050, affecting 15–25 million individuals, with a lifetime risk of 1 in 3 to 1 in 2. Conclusions: Diabetes prevalence in Mexico will continue to increase even if current incidence rates remain unchanged. Continued implementation of policies to reduce obesity rates, increase physical activity, and improve population diet, in tandem with diabetes surveillance and other risk control measures is paramount to substantially reduce the burden of diabetes in Mexico. PMID:26546108

  13. Burden of type 2 diabetes in Mexico: past, current and future prevalence and incidence rates.

    PubMed

    Meza, Rafael; Barrientos-Gutierrez, Tonatiuh; Rojas-Martinez, Rosalba; Reynoso-Noverón, Nancy; Palacio-Mejia, Lina Sofia; Lazcano-Ponce, Eduardo; Hernández-Ávila, Mauricio

    2015-12-01

    Mexico diabetes prevalence has increased dramatically in recent years. However, no national incidence estimates exist, hampering the assessment of diabetes trends and precluding the development of burden of disease analyses to inform public health policy decision-making. Here we provide evidence regarding current magnitude of diabetes in Mexico and its future trends. We used data from the Mexico National Health and Nutrition Survey, and age-period-cohort models to estimate prevalence and incidence of self-reported diagnosed diabetes by age, sex, calendar-year (1960-2012), and birth-cohort (1920-1980). We project future rates under three alternative incidence scenarios using demographic projections of the Mexican population from 2010-2050 and a Multi-cohort Diabetes Markov Model. Adult (ages 20+) diagnosed diabetes prevalence in Mexico increased from 7% to 8.9% from 2006 to 2012. Diabetes prevalence increases with age, peaking around ages 65-68 to then decrease. Age-specific incidence follows similar patterns, but peaks around ages 57-59. We estimate that diagnosed diabetes incidence increased exponentially during 1960-2012, roughly doubling every 10 years. Projected rates under three age-specific incidence scenarios suggest diabetes prevalence among adults (ages 20+) may reach 13.7-22.5% by 2050, affecting 15-25 million individuals, with a lifetime risk of 1 in 3 to 1 in 2. Diabetes prevalence in Mexico will continue to increase even if current incidence rates remain unchanged. Continued implementation of policies to reduce obesity rates, increase physical activity, and improve population diet, in tandem with diabetes surveillance and other risk control measures is paramount to substantially reduce the burden of diabetes in Mexico. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Effect of ethanol variation on the internal environment of sol-gel bulk and thin films with aging.

    PubMed

    Gupta, R; Mozumdar, S; Chaudhury, N K

    2005-10-15

    Sol-gel derived bulk and thin films were prepared from different compositions at low pH (approximately 2.0) containing varying concentrations of ethanol from 15 to 60% at a constant water (H(2)O)/tetraethyl-orthosilicate (TEOS) ratio (R=4). Fluorescence microscopic and spectroscopic measurements on the fluorescent probe Hoechst 33258 (H258) entrapped in these compositions were carried out at different days of storage to monitor the effects of ethanol concentration on the internal environment of the sol-gel materials. Fluorescence microscopic observations on sol-gel thin films prepared by the dip coating technique depicted uniform and cracked surfaces at withdrawal speeds of 1 cm/min (high speed) and 0.1 cm/min (low speed), respectively, which did not change during aging. Fluorescence spectral measurements showed an emission maximum of H258 at approximately 535 nm in fresh sols at all concentrations of ethanol, with a slight blue shift to 512 nm during aging in bulk. No such spectral shift was observed in sol-gel thin films coated at high speed, whereas thin films coated at low speed clearly showed an additional band at approximately 404 nm at 45 and 60% ethanol after about one month of storage. Analysis of the fluorescence lifetime data indicated a single exponential decay (1.6-1.8 ns) in fresh sol; from the third day onwards, a double exponential decay with a short (tau(1)) and a long (tau(2)) component was invariably observed in sol-gel bulk, with a dominant tau(1) at approximately 1.2 ns at all concentrations of ethanol. A double exponential decay consisting of a short component (tau(1)) at approximately 0.2 ns and a long component (tau(2)) at approximately 3.5 ns was observed at all ethanol concentrations in both fresh and aged sol-gel thin films. Further, distribution analysis of the lifetimes of H258 showed two mean lifetimes with increased width in aged bulk and thin films. These results are likely to have strong implications in designing the internal environment for applications in biosensors.

  15. Estimating the decline in excess risk of cerebrovascular disease following quitting smoking--a systematic review based on the negative exponential model.

    PubMed

    Lee, Peter N; Fry, John S; Thornton, Alison J

    2014-02-01

    We attempted to quantify the decline in stroke risk following quitting using the negative exponential model, with methodology previously employed for IHD. We identified 22 blocks of RRs (from 13 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We tried to estimate the half-life (H, time since quit when the excess risk becomes half that for a continuing smoker) for each block. The method failed to converge or produced very variable estimates of H in nine blocks with a current smoker RR <1.40. Rejecting these, and combining blocks by amount smoked in one study where problems arose in model-fitting, the final analyses used 11 blocks. Goodness-of-fit was adequate for each block, the combined estimate of H being 4.78(95%CI 2.17-10.50) years. However, considerable heterogeneity existed, unexplained by any factor studied, with the random-effects estimate 3.08(1.32-7.16). Sensitivity analyses allowing for reverse causation or differing assumed times for the final quitting period gave similar results. The estimates of H are similar for stroke and IHD, and the individual estimates similarly heterogeneous. Fitting the model is harder for stroke, due to its weaker association with smoking. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
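    The negative exponential model used here can be written RR(t) = 1 + (RR_s - 1) * exp(-t * ln 2 / H), so that the excess risk halves every H years. The sketch below recovers H from illustrative relative risks with the continuing-smoker RR fixed at 2.0; the actual analyses fit pseudo-numbers of cases and controls rather than RRs directly.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rr_after_quit(t, H, rr_smoker=2.0):
        # Negative exponential model: excess risk (RR - 1) halves every H years.
        return 1.0 + (rr_smoker - 1.0) * np.exp(-np.log(2.0) * t / H)

    years_quit = np.array([1.0, 3.0, 7.0, 15.0, 30.0])
    rr_obs = np.array([1.85, 1.65, 1.35, 1.12, 1.03])   # illustrative RRs

    H_hat, _ = curve_fit(rr_after_quit, years_quit, rr_obs, p0=[5.0])
    print(f"estimated half-life H = {H_hat[0]:.1f} years")
    ```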

  16. Very low threshold-current temperature sensitivity in constricted double-heterojunction AlGaAs lasers

    NASA Technical Reports Server (NTRS)

    Botez, D.; Connolly, J. C.; Gilbert, D. B.; Ettenberg, M.

    1981-01-01

    The temperature dependence of threshold currents in constricted double-heterojunction diode lasers with strong lateral mode confinement is found to be significantly milder than for other types of lasers. The threshold-current relative variations with ambient temperature are typically two to three times less than for other devices of CW-operation capability. Over the interval 10-70 C the threshold currents fit the empirical exponential law exp[(T2-T1)/T0], with T0 values in the 240-375 C range in pulsed operation, and in the 200-310 C range in CW operation. The external differential quantum efficiency and the mode far-field pattern near threshold are virtually invariant with temperature. The possible causes of high-T0 behavior are analyzed, and a new phenomenon - temperature-dependent current focusing - is presented to explain the results.

  17. Applying elliptic curve cryptography to a chaotic synchronisation system: neural-network-based approach

    NASA Astrophysics Data System (ADS)

    Hsiao, Feng-Hsiag

    2017-10-01

    In order to obtain double encryption via elliptic curve cryptography (ECC) and chaotic synchronisation, this study presents a design methodology for neural-network (NN)-based secure communications in multiple time-delay chaotic systems. ECC is an asymmetric encryption scheme whose strength is based on the difficulty of solving the elliptic curve discrete logarithm problem, which is a much harder problem than factoring integers. Because it is much harder, fewer bits suffice to provide the same level of security. To enhance the strength of the cryptosystem, we conduct double encryption that combines chaotic synchronisation with ECC. Using an improved genetic algorithm, a fuzzy controller is synthesised to realise the exponential synchronisation and achieve optimal H∞ performance by minimising the disturbance attenuation level. Finally, a numerical example with simulations is given to demonstrate the effectiveness of the proposed approach.

  18. Impact of double counting and transfer bias on estimated rates and outcomes of acute myocardial infarction.

    PubMed

    Westfall, J M; McGloin, J

    2001-05-01

    Ischemic heart disease is the leading cause of death in the United States. Recent studies report inconsistent findings on the changes in the incidence of hospitalizations for ischemic heart disease. These reports have relied primarily on hospital discharge data. Preliminary data suggest that a significant percentage of patients suffering acute myocardial infarction (MI) in rural communities are transferred to urban centers for care. Patients transferred to a second hospital may be counted twice for one episode of ischemic heart disease. To describe the impact of double counting and transfer bias on the estimation of incidence rates and outcomes of ischemic heart disease, specifically acute MI, in the United States. Analysis of state hospital discharge data from Kansas, Colorado (State Inpatient Database [SID]), Nebraska, Arizona, New Jersey, Michigan, Pennsylvania, and Illinois (SID) for the years 1995 to 1997. A matching algorithm was developed for hospital discharges to determine patients counted twice for one episode of ischemic heart disease. Validation of our matching algorithm. Patients reported to have suffered ischemic heart disease (ICD9 codes 410-414, 786.5). Number of patients counted twice for one episode of acute MI. It is estimated that double count rates ranged from 10% to 15% for all states and increased over the 3 years. Moderate sized rural counties had the highest estimated double count rates at 15% to 20%, with a few counties having estimated double count rates as high as 35% to 50%. Older patients and females were less likely to be double counted (P <0.05). Double counting patients has resulted in a significant overestimation in the incidence rate for hospitalization for acute MI. Correction of this double counting reveals a significantly lower incidence rate and a higher in-hospital mortality rate for acute MI. Transferred patients differ significantly from nontransferred patients, introducing significant bias into MI outcome studies. Double counting and transfer bias should be considered when conducting and interpreting research on ischemic heart disease, particularly in rural regions.

  19. Why Was Kelvin's Estimate of the Earth's Age Wrong?

    ERIC Educational Resources Information Center

    Lovatt, Ian; Syed, M. Qasim

    2014-01-01

    This is a companion to our previous paper in which we give a published example, based primarily on Perry's work, of a graph of ln "y" versus "t" when "y" is an exponential function of "t". This work led us to the idea that Lord Kelvin's (William Thomson's) estimate of the Earth's age was…

  20. Evidence for a scale-limited low-frequency earthquake source process

    NASA Astrophysics Data System (ADS)

    Chestler, S. R.; Creager, K. C.

    2017-04-01

    We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10^10 to 1.9 × 10^12 N m (Mw = 0.7-2.1). While regular earthquakes follow a power law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit decrease in Mw), we find a b value of 6 for large LFEs and <1 for small LFEs. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10^11 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.

  1. Asymptotic stability estimates near an equilibrium point

    NASA Astrophysics Data System (ADS)

    Dumas, H. Scott; Meyer, Kenneth R.; Palacián, Jesús F.; Yanguas, Patricia

    2017-07-01

    We use the error bounds for adiabatic invariants found in the work of Chartier, Murua and Sanz-Serna [3] to bound the solutions of a Hamiltonian system near an equilibrium over exponentially long times. Our estimates depend only on the linearized system and not on the higher order terms as in KAM theory, nor do we require any steepness or convexity conditions as in Nekhoroshev theory. We require that the equilibrium point where our estimate applies satisfy a type of formal stability called Lie stability.

  2. Analysis and Modeling of Echolocation Signals Emitted by Mediterranean Bottlenose Dolphins

    NASA Astrophysics Data System (ADS)

    Greco, Maria; Gini, Fulvio

    2006-12-01

    We analyzed the echolocation sounds emitted by Mediterranean bottlenose dolphins. We extracted the click trains by visual inspection of the data files recorded along the coast of Tuscany in collaboration with the CETUS Research Center. We modeled the extracted sonar clicks as Gaussian or exponential multicomponent signals, estimated the characteristic parameters, and compared the data with the reconstructed signals based on the estimates. Detailed estimation and data-fitting results are presented in the paper.

  3. Well hydraulics in pumping tests with exponentially decayed rates of abstraction in confined aquifers

    NASA Astrophysics Data System (ADS)

    Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen

    2017-05-01

    Actual field pumping tests often involve variable pumping rates which cannot be handled by the classical constant-rate or constant-head test models, and often require a convolution process to interpret the test data. In this study, we proposed a semi-analytical model considering an exponentially decreasing pumping rate, starting at a certain (higher) rate and eventually stabilizing at a certain (lower) rate, for cases with or without wellbore storage. A striking new feature of the pumping test with an exponentially decayed rate is that the drawdowns will decrease over a certain period of time during the intermediate pumping stage, which has never been seen before in constant-rate or constant-head pumping tests. It was found that the drawdown-time curve associated with an exponentially decayed pumping rate function was bounded by two asymptotic curves of the constant-rate tests with rates equal to the starting and stabilizing rates, respectively. The wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on such characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters by using a genetic algorithm.
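    For a variable pumping rate, drawdown follows by superposition of the confined-aquifer impulse response (the time derivative of the Theis solution). A minimal sketch, neglecting wellbore storage and using assumed aquifer parameters:

    ```python
    import numpy as np
    from scipy.integrate import quad

    T, S, r = 1e-3, 1e-4, 10.0       # transmissivity (m^2/s), storativity, distance (m)
    Q0, Qf, lam = 2e-3, 1e-3, 1e-4   # start/stabilized rates (m^3/s), decay (1/s)

    def Q(t):
        # Pumping rate decaying exponentially from Q0 toward the stabilized Qf.
        return Qf + (Q0 - Qf) * np.exp(-lam * t)

    def kernel(t):
        # Confined-aquifer impulse response at distance r (Theis kernel).
        if t <= 0.0:
            return 0.0
        return np.exp(-r * r * S / (4.0 * T * t)) / (4.0 * np.pi * T * t)

    def drawdown(t):
        # Superposition integral s(t) = int_0^t Q(tau) k(t - tau) dtau.
        val, _ = quad(lambda tau: Q(tau) * kernel(t - tau), 0.0, t, limit=200)
        return val

    for t in (600.0, 3600.0, 86400.0):   # 10 min, 1 h, 1 day
        print(f"t = {t:>7.0f} s, s(t) = {drawdown(t):.3f} m")
    ```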

  4. Improvements to Shortwave Absorption in the GFDL General Circulation Model Radiation Code

    NASA Astrophysics Data System (ADS)

    Freidenreich, S.

    2015-12-01

    The multiple-band shortwave radiation parameterization used in the GFDL general circulation models is being revised to better simulate the disposition of the solar flux in comparison with line-by-line+doubling-adding reference calculations based on the HITRAN 2012 catalog. For clear skies, a notable deficiency in the older formulation is an underestimate of atmospheric absorption. The two main reasons for this are the neglect of H2O absorption at wavenumbers < 2500 cm^-1 and of the O2 continuum. Further contributions to this underestimate come from neglecting the effects of CH4, N2O and stratospheric H2O absorption. These issues are addressed in the revised formulation and result in the globally averaged shortwave absorption increasing from 74 to 78 W m^-2. The number of spectral bands considered remains the same (18), but the number of pseudomonochromatic intervals (based mainly on the exponential-sum-fit technique) for the determination of H2O absorption is increased from 38 to 74, allowing for more accuracy in its simulation. Also, CO2 absorption is now determined by the exponential-sum-fit technique, replacing an algebraic absorptivity expression in the older parameterization; this improves the simulation of the heating in the stratosphere. Improvements to the treatment of multiple scattering are currently being tested. This involves replacing the current algorithm, the two-stream delta-Eddington, with a four-stream algorithm. Initial results show that in most, but not all, cases these produce better agreement with the reference doubling-adding results.

  5. Alternative definition of excitation amplitudes in multi-reference state-specific coupled cluster

    NASA Astrophysics Data System (ADS)

    Garniron, Yann; Giner, Emmanuel; Malrieu, Jean-Paul; Scemama, Anthony

    2017-04-01

    A central difficulty of state-specific Multi-Reference Coupled Cluster (MR-CC) in the multi-exponential Jeziorski-Monkhorst formalism concerns the definition of the amplitudes of the single and double excitation operators appearing in the exponential wave operators. If the reference space is a complete active space (CAS), the number of these amplitudes is larger than the number of singly and doubly excited determinants on which one may project the eigenequation, and one must impose additional conditions. The present work first defines a state-specific reference-independent operator $\tilde{T}^m$ which, acting on the CAS component of the wave function $|\Psi_0^m\rangle$, maximizes the overlap between $(1+\tilde{T}^m)|\Psi_0^m\rangle$ and the eigenvector of the CAS-SD (Singles and Doubles) Configuration Interaction (CI) matrix $|\Psi_{\mathrm{CAS-SD}}^m\rangle$. This operator may be used to generate approximate coefficients of the triples and quadruples, and a dressing of the CAS-SD CI matrix, according to the intermediate Hamiltonian formalism. The process may be iterated to convergence. As a refinement towards a strict coupled cluster formalism, one may exploit reference-independent amplitudes provided by $(1+\tilde{T}^m)|\Psi_0^m\rangle$ to define a reference-dependent operator $\hat{T}^m$ by fitting the eigenvector of the (dressed) CAS-SD CI matrix. The two variants, which are internally uncontracted, give rather similar results. The new MR-CC version has been tested on the ground state potential energy curves of 6 molecules (up to triple-bond breaking) and two excited states. The non-parallelism error with respect to the full-CI curves is of the order of 1 $mE_h$.

  6. Complex Dynamic Development of Poliovirus Membranous Replication Complexes

    PubMed Central

    Nair, Vinod; Hansen, Bryan T.; Hoyt, Forrest H.; Fischer, Elizabeth R.; Ehrenfeld, Ellie

    2012-01-01

    Replication of all positive-strand RNA viruses is intimately associated with membranes. Here we utilize electron tomography and other methods to investigate the remodeling of membranes in poliovirus-infected cells. We found that the viral replication structures previously described as “vesicles” are in fact convoluted, branching chambers with complex and dynamic morphology. They are likely to originate from cis-Golgi membranes and are represented during the early stages of infection by single-walled connecting and branching tubular compartments. These early viral organelles gradually transform into double-membrane structures by extension of membranous walls and/or collapsing of the luminal cavity of the single-membrane structures. As the double-membrane regions develop, they enclose cytoplasmic material. At this stage, a continuous membranous structure may have double- and single-walled membrane morphology at adjacent cross-sections. In the late stages of the replication cycle, the structures are represented mostly by double-membrane vesicles. Viral replication proteins, double-stranded RNA species, and actively replicating RNA are associated with both double- and single-membrane structures. However, the exponential phase of viral RNA synthesis occurs when single-membrane formations are predominant in the cell. It has been shown previously that replication complexes of some other positive-strand RNA viruses form on membrane invaginations, which result from negative membrane curvature. Our data show that the remodeling of cellular membranes in poliovirus-infected cells produces structures with positive curvature of membranes. Thus, it is likely that there is a fundamental divergence in the requirements for the supporting cellular membrane-shaping machinery among different groups of positive-strand RNA viruses. PMID:22072780

  7. Third-order Douglas-Kroll Relativistic Coupled-Cluster Theory through Connected Single, Double, Triple, and Quadruple Substitutions: Applications to Diatomic and Triatomic Hydrides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, So; Yanai, Takeshi; De Jong, Wibe A.

    Coupled-cluster methods including through and up to the connected single, double, triple, and quadruple substitutions (CCSD, CCSDT, and CCSDTQ) have been automatically derived and implemented for sequential and parallel executions for use in conjunction with a one-component third-order Douglas-Kroll (DK3) approximation for relativistic corrections. A combination of the converging electron-correlation methods, the accurate relativistic reference wave functions, and the use of systematic basis sets tailored to the relativistic approximation has been shown to predict the experimental singlet-triplet separations within 0.02 eV (0.5 kcal/mol) for five triatomic hydrides (CH2, NH2+, SiH2, PH2+, and AsH2+), the experimental bond lengths within 0.002 angstroms, rotational constants within 0.02 cm^-1, vibration-rotation constants within 0.01 cm^-1, centrifugal distortion constants within 2%, harmonic vibration frequencies within 9 cm^-1 (0.4%), anharmonic vibrational constants within 2 cm^-1, and dissociation energies within 0.03 eV (0.8 kcal/mol) for twenty diatomic hydrides (BH, CH, NH, OH, FH, AlH, SiH, PH, SH, ClH, GaH, GeH, AsH, SeH, BrH, InH, SnH, SbH, TeH, and IH) containing main-group elements across the second through fifth periods of the periodic table. In these calculations, spin-orbit effects on dissociation energies, which were assumed to be additive, were estimated from the measured spin-orbit coupling constants of atoms and diatomic molecules, and an electronic energy in the complete-basis-set, complete-electron-correlation limit has been extrapolated by a formula based in turn on the exponential-Gaussian extrapolation formula of the basis-set dependence.

  8. Virtual Observatory and Distributed Data Mining

    NASA Astrophysics Data System (ADS)

    Borne, Kirk D.

    2012-03-01

    New modes of discovery are enabled by the growth of data and computational resources (i.e., cyberinfrastructure) in the sciences. This cyberinfrastructure includes structured databases, virtual observatories (distributed data, as described in Section 20.2.1 of this chapter), high-performance computing (petascale machines), distributed computing (e.g., the Grid, the Cloud, and peer-to-peer networks), intelligent search and discovery tools, and innovative visualization environments. Data streams from experiments, sensors, and simulations are increasingly complex and growing in volume. This is true in most sciences, including astronomy, climate simulations, Earth observing systems, remote sensing data collections, and sensor networks. At the same time, we see an emerging confluence of new technologies and approaches to science, most clearly visible in the growing synergism of the four modes of scientific discovery: sensors-modeling-computing-data (Eastman et al. 2005). This has been driven by numerous developments, including the information explosion, development of large-array sensors, acceleration in high-performance computing (HPC) power, advances in algorithms, and efficient modeling techniques. Among these, the most extreme is the growth in new data. Specifically, the acquisition of data in all scientific disciplines is rapidly accelerating and causing a data glut (Bell et al. 2007). It has been estimated that data volumes double every year—for example, the NCSA (National Center for Supercomputing Applications) reported that their users cumulatively generated one petabyte of data over the first 19 years of NCSA operation, but they then generated their next one petabyte in the next year alone, and the data production has been growing by almost 100% each year after that (Butler 2008). The NCSA example is just one of many demonstrations of the exponential (annual data-doubling) growth in scientific data collections. In general, this putative data-doubling is an inevitable result of several compounding factors: the proliferation of data-generating devices, sensors, projects, and enterprises; the 18-month doubling of the digital capacity of these microprocessor-based sensors and devices (commonly referred to as "Moore’s law"); the move to digital for nearly all forms of information; the increase in human-generated data (both unstructured information on the web and structured data from experiments, models, and simulation); and the ever-expanding capability of higher density media to hold greater volumes of data (i.e., data production expands to fill the available storage space). These factors are consequently producing an exponential data growth rate, which will soon (if not already) become an insurmountable technical challenge even with the great advances in computation and algorithms. This technical challenge is compounded by the ever-increasing geographic dispersion of important data sources—the data collections are not stored uniformly at a single location, or with a single data model, or in uniform formats and modalities (e.g., images, databases, structured and unstructured files, and XML data sets)—the data are in fact large, distributed, heterogeneous, and complex. The greatest scientific research challenge with these massive distributed data collections is consequently extracting all of the rich information and knowledge content contained therein, thus requiring new approaches to scientific research. 
This emerging data-intensive and data-oriented approach to scientific research is sometimes called discovery informatics or X-informatics (where X can be any science, such as bio, geo, astro, chem, eco, or anything; Agresti 2003; Gray 2003; Borne 2010). This data-oriented approach to science is now recognized by some (e.g., Mahootian and Eastman 2009; Hey et al. 2009) as the fourth paradigm of research, following (historically) experiment/observation, modeling/analysis, and computational science.

  9. Nonexponential Decoherence and Subdiffusion in Atom-Optics Kicked Rotor.

    PubMed

    Sarkar, Sumit; Paul, Sanku; Vishwakarma, Chetan; Kumar, Sunil; Verma, Gunjan; Sainath, M; Rapol, Umakant D; Santhanam, M S

    2017-04-28

    Quantum systems lose coherence upon interaction with the environment and tend towards classical states. Quantum coherence is known to exponentially decay in time so that macroscopic quantum superpositions are generally unsustainable. In this work, slower than exponential decay of coherences is experimentally realized in an atom-optics kicked rotor system subjected to nonstationary Lévy noise in the applied kick sequence. The slower coherence decay manifests in the form of quantum subdiffusion that can be controlled through the Lévy exponent. The experimental results are in good agreement with the analytical estimates and numerical simulations for the mean energy growth and momentum profiles of an atom-optics kicked rotor.

  10. Exponential Models of Legislative Turnover. [and] The Dynamics of Political Mobilization, I: A Model of the Mobilization Process, II: Deductive Consequences and Empirical Application of the Model. Applications of Calculus to American Politics. [and] Public Support for Presidents. Applications of Algebra to American Politics. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Units 296-300.

    ERIC Educational Resources Information Center

    Casstevens, Thomas W.; And Others

    This document consists of five units which all view applications of mathematics to American politics. The first three view calculus applications, the last two deal with applications of algebra. The first module is geared to teach a student how to: 1) compute estimates of the value of the parameters in negative exponential models; and draw…

  11. Crime prediction modeling

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.

  12. Assessment of online monitoring strategies for measuring N2O emissions from full-scale wastewater treatment systems.

    PubMed

    Marques, Ricardo; Rodriguez-Caballero, A; Oehmen, Adrian; Pijuan, Maite

    2016-08-01

    Clark-type nitrous oxide (N2O) sensors are routinely used to measure dissolved N2O concentrations in wastewater treatment plants (WWTPs), but have never before been applied to assess gas-phase N2O emissions in full-scale WWTPs. In this study, a full-scale N2O gas sensor was tested and validated for online gas measurements, and assessed with respect to its linearity, temperature dependence, signal saturation and drift prior to full-scale application. The sensor was linear at the concentrations tested (0-422.3, 0-50 and 0-10 ppmv N2O) and had a linear response up to 2750 ppmv N2O. The exponential dependence of the sensor signal on temperature was described and predicted using a double exponential equation, while drift did not significantly influence the signal. The N2O gas sensor was used for online N2O monitoring in a full-scale sequencing batch reactor (SBR) treating domestic wastewater, and results were compared with those obtained by a commercial online gas analyser. Emissions were successfully described by the sensor, which was even more accurate than the commercial analyser at N2O concentrations above 500 ppmv. Data from this gas N2O sensor were also used to validate two models for predicting N2O emissions from dissolved N2O measurements, one based on the oxygen transfer rate and the other on the superficial velocity of the gas bubble. Using the first model, predictions of N2O emissions agreed to within 98.7% of those measured by the gas sensor, while 87.0% agreement was obtained with the second model. This is the first study showing a reliable estimation of gas emissions based on dissolved N2O online data in a full-scale wastewater treatment facility. Copyright © 2016 Elsevier Ltd. All rights reserved.
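    A sketch of how such a double exponential temperature relation can be used to normalize readings to a reference temperature; the calibration points, functional-form coefficients, and the 20°C reference are all made up for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def temp_response(T, a, b, c, d):
        # Double exponential temperature dependence of the sensor signal
        # at a fixed dissolved-N2O level (functional form per the abstract).
        return a * np.exp(b * T) + c * np.exp(d * T)

    T_cal = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])    # deg C
    sig_cal = temp_response(T_cal, 0.4, 0.02, 0.2, 0.08)     # synthetic calibration

    p, _ = curve_fit(temp_response, T_cal, sig_cal, p0=[0.3, 0.03, 0.3, 0.06],
                     maxfev=20000)
    # Normalize a reading taken at 25 degC back to the 20 degC reference:
    print("correction factor:", temp_response(20.0, *p) / temp_response(25.0, *p))
    ```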

  13. Statistical properties of mean stand biomass estimators in a LIDAR-based double sampling forest survey design.

    Treesearch

    H.E. Anderson; J. Breidenbach

    2007-01-01

    Airborne laser scanning (LIDAR) can be a valuable tool in double-sampling forest survey designs. LIDAR-derived forest structure metrics are often highly correlated with important forest inventory variables, such as mean stand biomass, and LIDAR-based synthetic regression estimators have the potential to be highly efficient compared to single-stage estimators, which...

  14. Spread Spectrum Signal Characteristic Estimation Using Exponential Averaging and an AD-HOC Chip rate Estimator

    DTIC Science & Technology

    2007-03-01

    The fast Fourier transform (FFT) accumulation method and the strip spectral correlation algorithm subdivide the support region in the bi-frequency plane; the FFT accumulation method subdivides it into diamond shapes, while the strip spectral correlation algorithm subdivides the region into strips. Each strip covers a number of the FFT accumulation...

  15. Estimates of internal-dose equivalent from inhalation and ingestion of selected radionuclides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunning, D.E.

    1982-01-01

    This report presents internal radiation dose conversion factors for radionuclides of interest in environmental assessments of nuclear fuel cycles. This volume provides an updated summary of estimates of committed dose equivalent for radionuclides considered in three previous Oak Ridge National Laboratory (ORNL) reports. Intakes by inhalation and ingestion are considered. The International Commission on Radiological Protection (ICRP) Task Group Lung Model has been used to simulate the deposition and retention of particulate matter in the respiratory tract. Results corresponding to activity median aerodynamic diameters (AMAD) of 0.3, 1.0, and 5.0 μm are given. The gastrointestinal (GI) tract has been represented by a four-segment catenary model with exponential transfer of radioactivity from one segment to the next. Retention of radionuclides in systemic organs is characterized by linear combinations of decaying exponential functions, recommended in ICRP Publication 30. The first-year annual dose rate, maximum annual dose rate, and fifty-year dose commitment per microcurie intake of each radionuclide are given for selected target organs and the effective dose equivalent. These estimates include contributions from specified source organs plus the systemic activity residing in the rest of the body; cross irradiation due to penetrating radiations has been incorporated into these estimates. 15 references.

  16. Structured pedigree information for distributed fusion systems

    NASA Astrophysics Data System (ADS)

    Arambel, Pablo O.

    2008-04-01

    One of the most critical challenges in distributed data fusion is the avoidance of information double counting (also called "data incest" or "rumor propagation"). This occurs when a node in a network incorporates information into an estimate - e.g. the position of an object - and the estimate is injected into the network. Other nodes fuse this estimate with their own estimates, and continue to propagate estimates through the network. When the first node receives a fused estimate from the network, it does not know if it already contains its own contributions or not. Since the correlation between its own estimate and the estimate received from the network is not known, the node can not fuse the estimates in an optimal way. If it assumes that both estimates are independent from each other, it unknowingly double counts the information that has already being used to obtain the two estimates. This leads to overoptimistic error covariance matrices. If the double-counting is not kept under control, it may lead to serious performance degradation. Double counting can be avoided by propagating uniquely tagged raw measurements; however, that forces each node to process all the measurements and precludes the propagation of derived information. Another approach is to fuse the information using the Covariance Intersection (CI) equations, which maintain consistent estimates irrespective of the cross-correlation among estimates. However, CI does not exploit pedigree information of any kind. In this paper we present an approach that propagates multiple covariance matrices, one for each uncorrelated source in the network. This is a way to compress the pedigree information and avoids the need to propagate raw measurements. The approach uses a generalized version of the Split CI to fuse different estimates with appropriate weights to guarantee the consistency of the estimates.
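    For reference, a minimal sketch of the plain Covariance Intersection rule mentioned above (not the generalized Split CI the paper builds on), with the weight omega chosen to minimize the trace of the fused covariance:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    def covariance_intersection(x1, P1, x2, P2):
        # Fuse two estimates with unknown cross-correlation. CI guarantees a
        # consistent fused covariance for any actual correlation, at the cost
        # of some conservatism; omega is chosen to minimize the fused trace.
        P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)

        def fused_trace(w):
            return np.trace(np.linalg.inv(w * P1i + (1 - w) * P2i))

        w = minimize_scalar(fused_trace, bounds=(0.0, 1.0), method="bounded").x
        P = np.linalg.inv(w * P1i + (1 - w) * P2i)
        x = P @ (w * P1i @ x1 + (1 - w) * P2i @ x2)
        return x, P, w

    x1, P1 = np.array([1.0, 2.0]), np.diag([1.0, 4.0])
    x2, P2 = np.array([1.5, 1.8]), np.diag([4.0, 1.0])
    print(covariance_intersection(x1, P1, x2, P2))
    ```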

  17. Estimating the decline in excess risk of chronic obstructive pulmonary disease following quitting smoking - a systematic review based on the negative exponential model.

    PubMed

    Lee, Peter N; Fry, John S; Forey, Barbara A

    2014-03-01

    We quantified the decline in COPD risk following quitting using the negative exponential model, as previously carried out for other smoking-related diseases. We identified 14 blocks of RRs (from 11 studies) comparing current smokers, former smokers (by time quit) and never smokers, some studies providing sex-specific blocks. Corresponding pseudo-numbers of cases and controls/at risk formed the data for model-fitting. We estimated the half-life (H, time since quit when the excess risk becomes half that for a continuing smoker) for each block, except for one where no decline with quitting was evident, and H was not estimable. For the remaining 13 blocks, goodness-of-fit to the model was generally adequate, the combined estimate of H being 13.32 (95% CI 11.86-14.96) years. There was no heterogeneity in H, overall or by various studied sources. Sensitivity analyses allowing for reverse causation or different assumed times for the final quitting period little affected the results. The model summarizes quitting data well. The estimate of 13.32years is substantially larger than recent estimates of 4.40years for ischaemic heart disease and 4.78years for stroke, and also larger than the 9.93years for lung cancer. Heterogeneity was unimportant for COPD, unlike for the other three diseases. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Double sampling to estimate density and population trends in birds

    USGS Publications Warehouse

    Bart, Jonathan; Earnst, Susan L.

    2002-01-01

    We present a method for estimating density of nesting birds based on double sampling. The approach involves surveying a large sample of plots using a rapid method such as uncorrected point counts, variable circular plot counts, or the recently suggested double-observer method. A subsample of those plots is also surveyed using intensive methods to determine actual density. The ratio of the mean count on those plots (using the rapid method) to the mean actual density (as determined by the intensive searches) is used to adjust results from the rapid method. The approach works well when results from the rapid method are highly correlated with actual density. We illustrate the method with three years of shorebird surveys from the tundra in northern Alaska. In the rapid method, surveyors covered ~10 ha h^-1 and surveyed each plot a single time. The intensive surveys involved three thorough searches, required ~3 h ha^-1, and took 20% of the study effort. Surveyors using the rapid method detected an average of 79% of birds present. That detection ratio was used to convert the index obtained in the rapid method into an essentially unbiased estimate of density. Trends estimated from several years of data would also be essentially unbiased. Other advantages of double sampling are that (1) the rapid method can be changed as new methods become available, (2) domains can be compared even if detection rates differ, (3) total population size can be estimated, and (4) valuable ancillary information (e.g. nest success) can be obtained on intensive plots with little additional effort. We suggest that double sampling be used to test the assumption that rapid methods, such as variable circular plot and double-observer methods, yield density estimates that are essentially unbiased. The feasibility of implementing double sampling in a range of habitats needs to be evaluated.
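    The ratio adjustment at the core of the method is simple to state in code. Plot counts below are made up; the subsample's mean rapid count divided by its mean intensive density gives the detection ratio used to scale the full-sample mean (the study reported a ratio near 0.79):

    ```python
    import numpy as np

    # Double-sampling ratio adjustment: rapid counts on many plots, intensive
    # (assumed true) densities on a subsample, all per unit area. Values made up.
    rapid_all = np.array([3, 5, 2, 4, 6, 1, 3, 4, 5, 2], dtype=float)  # all plots
    rapid_sub = np.array([3, 5, 2, 4], dtype=float)   # rapid counts on subsample
    true_sub = np.array([4, 6, 3, 5], dtype=float)    # intensive-search densities

    detection_ratio = rapid_sub.mean() / true_sub.mean()
    density_hat = rapid_all.mean() / detection_ratio
    print(f"detection ratio = {detection_ratio:.2f}, "
          f"adjusted density = {density_hat:.2f}")
    ```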

  19. Translational resistivity/conductivity of coding sequences during exponential growth of Escherichia coli.

    PubMed

    Takai, Kazuyuki

    2017-01-21

    Codon adaptation index (CAI) has been widely used for prediction of expression of recombinant genes in Escherichia coli and other organisms. However, CAI has no mechanistic basis that rationalizes its application to estimation of translational efficiency. Here, I propose a model with which to consider how codon usage is related to the level of expression during exponential growth of bacteria. In this model, translation of a gene is considered as an analog of electric current, and an analog of electric resistance corresponding to each gene is considered. "Translational resistance" is dependent on the steady-state concentration and the sequence of the mRNA species, and "translational resistivity" is dependent only on the mRNA sequence. The latter is the sum of two parts: one is the resistivity for the elongation reaction (coding sequence resistivity), and the other comes from all of the other steps of the decoding reaction. This electric circuit model clearly shows that some conditions should be met for the codon composition of a coding sequence to correlate well with its expression level. On the other hand, I calculated the relative frequency of each of the 61 sense codon triplets translated during exponential growth of E. coli from a proteomic dataset covering over 2600 proteins. A tentative method for estimating relative coding sequence resistivity based on these data is presented. Copyright © 2016. Published by Elsevier Ltd.

  20. Anomalous T2 relaxation in normal and degraded cartilage.

    PubMed

    Reiter, David A; Magin, Richard L; Li, Weiguo; Trujillo, Juan J; Pilar Velasco, M; Spencer, Richard G

    2016-09-01

    To compare the ordinary monoexponential model with three anomalous relaxation models-the stretched Mittag-Leffler, stretched exponential, and biexponential functions-using both simulated and experimental cartilage relaxation data. Monte Carlo simulations were used to examine both the ability to identify a given model under high signal-to-noise ratio (SNR) conditions and the accuracy and precision of parameter estimates under the more modest SNR that would be encountered clinically. Experimental transverse relaxation data were analyzed from normal and enzymatically degraded cartilage samples under high SNR and rapid echo sampling to compare each model. Both simulation and experimental results showed improvement in signal representation with the anomalous relaxation models. The stretched exponential model consistently showed the lowest mean squared error in experimental data and closely represents the signal decay over multiple decades of the decay time (e.g., 1-10 ms, 10-100 ms, and >100 ms). The stretched exponential parameter αse showed an inverse correlation with biochemically derived cartilage proteoglycan content. Experimental results obtained at high field suggest potential application of αse as a measure of matrix integrity. Simulations reflecting more clinical imaging conditions indicate the ability to robustly estimate αse and distinguish between normal and degraded tissue, highlighting its potential as a biomarker for human studies. Magn Reson Med 76:953-962, 2016. © 2015 Wiley Periodicals, Inc.
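
    As a rough illustration of the stretched exponential model favored here, the sketch below fits S(TE) = S0 exp(-(TE/tau)^alpha) to synthetic decay data (assumed values throughout; alpha plays the role of the abstract's αse, and alpha = 1 recovers the monoexponential model):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def stretched_exp(te, s0, tau, alpha):
        # S(TE) = S0 * exp(-(TE / tau)**alpha)
        return s0 * np.exp(-(te / tau) ** alpha)

    te = np.linspace(1, 200, 64)  # echo times in ms (assumed sampling)
    rng = np.random.default_rng(0)
    signal = stretched_exp(te, 1.0, 30.0, 0.7) + rng.normal(0, 0.01, te.size)

    popt, _ = curve_fit(stretched_exp, te, signal, p0=(1.0, 20.0, 1.0),
                        bounds=([0, 0.1, 0.1], [2, 300, 2]))
    print("S0, tau, alpha estimate:", popt)
    ```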

  1. Comparison of least squares and exponential sine sweep methods for Parallel Hammerstein Models estimation

    NASA Astrophysics Data System (ADS)

    Rebillat, Marc; Schoukens, Maarten

    2018-05-01

    Linearity is a common assumption for many real-life systems, but in many cases the nonlinear behavior of systems cannot be ignored and must be modeled and estimated. Among the various existing classes of nonlinear models, Parallel Hammerstein Models (PHM) are interesting as they are at the same time easy to interpret and to estimate. One way to estimate PHM relies on the fact that the estimation problem is linear in the parameters and thus that classical least squares (LS) estimation algorithms can be used. In that area, this article introduces a regularized LS estimation algorithm inspired by some of the recently developed regularized impulse response estimation techniques. Another means of estimating PHM consists in using parametric or non-parametric exponential sine sweep (ESS) based methods. These methods (LS and ESS) are founded on radically different mathematical backgrounds but are expected to tackle the same issue. A methodology is proposed here to compare them with respect to (i) their accuracy, (ii) their computational cost, and (iii) their robustness to noise. Tests are performed on simulated systems for several values of the methods' respective parameters and of the signal-to-noise ratio. Results show that, for a given set of data points, the ESS method is less demanding in computational resources than the LS method but that it is also less accurate. Furthermore, the LS method needs parameters to be set in advance whereas the ESS method is not subject to conditioning issues and can be fully non-parametric. In summary, for a given set of data points, the ESS method can provide a first, automatic, and quick overview of a nonlinear system that can guide more computationally demanding and precise methods, such as the regularized LS one proposed here.
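
    The LS route exploits linearity in the parameters: stacking delayed powers of the input into a regressor matrix turns PHM estimation into ordinary (or ridge-regularized) least squares. The sketch below is an assumed minimal setup, with plain ridge regularization standing in for the kernel-based regularization the article develops:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n, L, degree = 2000, 16, 3          # samples, branch memory, polynomial order
    x = rng.normal(0, 1, n)

    # Simulated PHM: branch i is an FIR filter applied to the i-th power of x.
    true_h = [rng.normal(0, 1, L) * np.exp(-0.3 * np.arange(L)) for _ in range(degree)]
    y = sum(np.convolve(x ** (i + 1), true_h[i])[:n] for i in range(degree))
    y += rng.normal(0, 0.01, n)

    def lagged(u, L):
        # Columns are u delayed by 0..L-1 samples (Toeplitz regressor block).
        return np.stack([np.concatenate([np.zeros(j), u[:u.size - j]])
                         for j in range(L)], axis=1)

    Phi = np.hstack([lagged(x ** (i + 1), L) for i in range(degree)])
    h_ls = np.linalg.lstsq(Phi, y, rcond=None)[0]            # plain LS
    lam = 1e-2                                               # ridge stand-in
    h_reg = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
    ```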

  2. Correlation between the change in the kinetics of the ribosomal RNA rrnB P2 promoter and the transition from lag to exponential phase with Pseudomonas fluorescens.

    PubMed

    McKellar, Robin C

    2008-01-15

    Developing accurate mathematical models to describe the pre-exponential lag phase in food-borne pathogens presents a considerable challenge to food microbiologists. While the growth rate is influenced by current environmental conditions, the lag phase is additionally affected by the history of the inoculum. A deeper understanding of the physiological changes taking place during the lag phase would improve the accuracy of models, and in earlier studies a strain of Pseudomonas fluorescens containing the Tn7-luxCDABE gene cassette regulated by the rRNA promoter rrnB P2 was used to measure the influence of starvation, growth temperature and sub-lethal heating on promoter expression and subsequent growth. The present study expands the models developed earlier to include a model which describes the change from exponential to linear increase in promoter expression with time when the exponential phase of growth commences. A two-phase linear model with Poisson weighting was used to estimate the lag (LPDLin) and the rate (RLin) for this linear increase in bioluminescence. The Spearman rank correlation coefficient (r=0.830) between the LPDLin and the growth lag phase (LPDOD) was extremely significant (P
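
    A two-phase linear model of the kind described can be fit with standard nonlinear least squares; the sketch below uses synthetic data and a crude Poisson-style weighting (sigma proportional to the square root of the signal), both assumptions for illustration:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def two_phase(t, y0, lag, rate):
        # Flat baseline up to the lag, then a linear increase at `rate`.
        return np.where(t <= lag, y0, y0 + rate * (t - lag))

    t = np.linspace(0, 10, 60)  # hours (assumed)
    rng = np.random.default_rng(1)
    y = two_phase(t, 2.0, 3.5, 0.8) + rng.normal(0, 0.05, t.size)

    sigma = np.sqrt(np.clip(y, 0.05, None))  # Poisson-style weights
    (y0, lag, rate), _ = curve_fit(two_phase, t, y, p0=(1.0, 2.0, 1.0), sigma=sigma)
    print(f"estimated lag = {lag:.2f} h, rate = {rate:.2f}")
    ```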

  3. Analytical model of coincidence resolving time in TOF-PET

    NASA Astrophysics Data System (ADS)

    Wieczorek, H.; Thon, A.; Dey, T.; Khanin, V.; Rodnyi, P.

    2016-06-01

    The coincidence resolving time (CRT) of scintillation detectors is the parameter determining noise reduction in time-of-flight PET. We derive an analytical CRT model based on the statistical distribution of photons for two different prototype scintillators. For the first one, characterized by single exponential decay, CRT is proportional to the decay time and inversely proportional to the number of photons, with a square root dependence on the trigger level. For the second scintillator prototype, characterized by exponential rise and decay, CRT is proportional to the square root of the product of rise time and decay time divided by the doubled number of photons, and it is nearly independent of the trigger level. This theory is verified by measurements of scintillation time constants, light yield and CRT on scintillator sticks. Trapping effects are taken into account by defining an effective decay time. We show that in terms of signal-to-noise ratio, CRT is as important as patient dose, imaging time or PET system sensitivity. The noise reduction effect of better timing resolution is verified and visualized by Monte Carlo simulation of a NEMA image quality phantom.
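
    The two scaling laws stated in the abstract can be written as one-line functions. The proportionality constants are not given there, so the sketch below omits them (returns are in arbitrary units unless calibrated), and the example numbers are assumed:

    ```python
    import math

    def crt_single_decay(tau_decay, n_photons, trigger_level):
        """CRT ~ tau_decay / N with a square-root trigger-level dependence
        (single-exponential-decay scintillator; proportionality constant omitted)."""
        return tau_decay / n_photons * math.sqrt(trigger_level)

    def crt_rise_decay(tau_rise, tau_decay, n_photons):
        """CRT ~ sqrt(tau_rise * tau_decay / (2 N)), nearly trigger-independent
        (scintillator with exponential rise and decay)."""
        return math.sqrt(tau_rise * tau_decay / (2.0 * n_photons))

    # Assumed example: 0.09 ns rise, 40 ns decay, 20000 detected photons.
    print(crt_rise_decay(0.09, 40.0, 20000))
    ```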

  4. Constant growth rate can be supported by decreasing energy flux and increasing aerobic glycolysis.

    PubMed

    Slavov, Nikolai; Budnik, Bogdan A; Schwab, David; Airoldi, Edoardo M; van Oudenaarden, Alexander

    2014-05-08

    Fermenting glucose in the presence of enough oxygen to support respiration, known as aerobic glycolysis, is believed to maximize growth rate. We observed increasing aerobic glycolysis during exponential growth, suggesting additional physiological roles for aerobic glycolysis. We investigated such roles in yeast batch cultures by quantifying O2 consumption, CO2 production, amino acids, mRNAs, proteins, posttranslational modifications, and stress sensitivity in the course of nine doublings at constant rate. During this course, the cells support a constant biomass-production rate with decreasing rates of respiration and ATP production but also decrease their stress resistance. As the respiration rate decreases, so do the levels of enzymes catalyzing rate-determining reactions of the tricarboxylic-acid cycle (providing NADH for respiration) and of mitochondrial folate-mediated NADPH production (required for oxidative defense). The findings demonstrate that exponential growth can represent not a single metabolic/physiological state but a continuum of changing states and that aerobic glycolysis can reduce the energy demands associated with respiratory metabolism and stress survival. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Finite-time containment control of perturbed multi-agent systems based on sliding-mode control

    NASA Astrophysics Data System (ADS)

    Yu, Di; Ji, Xiang Yang

    2018-01-01

    Aiming at a faster convergence rate, this paper investigates the finite-time containment control problem for second-order multi-agent systems with norm-bounded non-linear perturbation. When the topology among the followers is strongly connected, the nonsingular fast terminal sliding-mode error is defined, a corresponding discontinuous control protocol is designed, and the appropriate value range of the control parameter is obtained by applying finite-time stability analysis, so that the followers converge to and move along the desired trajectories within the convex hull formed by the leaders in finite time. Furthermore, on the basis of the sliding-mode error defined, corresponding distributed continuous control protocols are investigated with a fast exponential reaching law and a double exponential reaching law, so as to make the followers move to small neighbourhoods of their desired locations and stay within the dynamic convex hull formed by the leaders in finite time, achieving practical finite-time containment control. Meanwhile, we develop the faster control scheme by comparing the convergence rates of these two reaching laws. Simulation examples are given to verify the correctness of the theoretical results.

  6. Picosecond absorption anisotropy of polymethine and squarylium dyes in liquid and polymeric media

    NASA Astrophysics Data System (ADS)

    Przhonska, Olga V.; Hagan, David J.; Novikov, Evgueni; Lepkowicz, Richard; Van Stryland, Eric W.; Bondar, Mikhail V.; Slominsky, Yuriy L.; Kachkovski, Alexei D.

    2001-11-01

    Time-resolved excitation-probe polarization measurements are performed for polymethine and squarylium dyes in ethanol and an elastopolymer of polyurethane acrylate (PUA). These molecules exhibit strong excited-state absorption in the visible, which results in reverse saturable absorption (RSA). In pump-probe experiments, we observe a strong angular dependence of the RSA decay kinetics upon variation of the angle between pump and probe polarizations. The difference in absorption anisotropy kinetics in ethanol and PUA is detected and analyzed. Anisotropy decay curves in ethanol follow a single exponential decay leading to complete depolarization of the excited state. We also observe complete depolarization in PUA, in which case the anisotropy decay follows a double exponential behavior. Possible rotations in the PUA polymeric matrix are connected with the existence of local microcavities of free volume. We believe that the fast decay component is connected with the rotation of molecular fragments and the slower decay component is connected with the rotation of entire molecules in local microcavities, which is possible because of the elasticity of the polymeric material.

  7. Systematic strategies for the third industrial accident prevention plan in Korea.

    PubMed

    Kang, Young-sig; Yang, Sung-hwan; Kim, Tae-gu; Kim, Day-sung

    2012-01-01

    To minimize industrial accidents, it is critical to evaluate a firm's priorities for prevention factors and strategies, since such evaluation provides decisive information for preventing industrial accidents and maintaining safety management. Therefore, this paper proposes the evaluation of priorities through statistical testing of prevention factors with a cause analysis in a cause-and-effect model. A priority matrix criterion is proposed to apply the ranking and to ensure the objectivity of the questionnaire results. This paper used the regression analysis (RA) method, the exponential smoothing method (ESM), the double exponential smoothing method (DESM), the autoregressive integrated moving average (ARIMA) model and the proposed analytical function method (PAFM) to analyze trends in accident data that will lead to an accurate prediction. This paper standardized the questionnaire results of workers and managers in manufacturing and construction companies with fewer than 300 employees, located in the central Korean metropolitan areas where fatal accidents have occurred. Finally, a strategy was provided to construct safety management for the third industrial accident prevention plan, together with a forecasting method for occupational accident rates and fatality rates for occupational accidents per 10,000 people.
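
    Of the forecasting methods listed, the double exponential smoothing method (DESM) is simple enough to show in full. The sketch below implements Brown's variant on an assumed series of accident rates (illustrative numbers, not the Korean data):

    ```python
    def brown_des(series, alpha=0.3):
        """Brown's double exponential smoothing: two cascaded single-exponential
        passes give level and trend, combined into one-step-ahead forecasts."""
        s1 = s2 = series[0]
        forecasts = []
        for x in series:
            s1 = alpha * x + (1 - alpha) * s1
            s2 = alpha * s1 + (1 - alpha) * s2
            level = 2 * s1 - s2
            trend = alpha / (1 - alpha) * (s1 - s2)
            forecasts.append(level + trend)  # forecast for the next period
        return forecasts

    rates = [1.02, 0.98, 0.95, 0.97, 0.91, 0.88, 0.86]  # assumed accident rates
    print(brown_des(rates))
    ```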

  8. Doubly differential cross sections for galactic heavy-ion fragmentation

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Norbury, John W.; Khandelwal, Govind S.; Townsend, Lawrence W.

    1987-01-01

    An abrasion-ablation T-matrix formulation is applied to the calculation of double-differential cross sections in projectile fragmentation of 2.1 GeV/nucleon O-16 on Be-9 and 86 MeV/nucleon C-12 on C-12 and Ag-108. An exponential parameterization of the ablation T-matrix is used and the total width of the intermediate states is taken as a parameter. Values of the total width fitted to experimental results are used to predict the lifetime of the ablation stage and indicate a decay time on the order of 10^-19 s.

  9. Exponential Approximations Using Fourier Series Partial Sums

    NASA Technical Reports Server (NTRS)

    Banerjee, Nana S.; Geer, James F.

    1997-01-01

    The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of the Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the k-th derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.

  10. Double Your Major, Double Your Return?

    ERIC Educational Resources Information Center

    Del Rossi, Alison F.; Hersch, Joni

    2008-01-01

    We use the 2003 National Survey of College Graduates to provide the first estimates of the effect on earnings of having a double major. Overall, double majoring increases earnings by 2.3% relative to having a single major among college graduates without graduate degrees. Most of the gains from having a double major come from choosing fields across…

  11. The discrete Laplace exponential family and estimation of Y-STR haplotype frequencies.

    PubMed

    Andersen, Mikkel Meyer; Eriksen, Poul Svante; Morling, Niels

    2013-07-21

    Estimating haplotype frequencies is important in e.g. forensic genetics, where the frequencies are needed to calculate the likelihood ratio for the evidential weight of a DNA profile found at a crime scene. Estimation is naturally based on a population model, motivating the investigation of the Fisher-Wright model of evolution for haploid lineage DNA markers. An exponential family (a class of probability distributions that is well understood in probability theory such that inference is easily made by using existing software) called the 'discrete Laplace distribution' is described. We illustrate how well the discrete Laplace distribution approximates a more complicated distribution that arises by investigating the well-known population genetic Fisher-Wright model of evolution by a single-step mutation process. It was shown how the discrete Laplace distribution can be used to estimate haplotype frequencies for haploid lineage DNA markers (such as Y-chromosomal short tandem repeats), which in turn can be used to assess the evidential weight of a DNA profile found at a crime scene. This was done by making inference in a mixture of multivariate, marginally independent, discrete Laplace distributions using the EM algorithm to estimate the probabilities of membership of a set of unobserved subpopulations. The discrete Laplace distribution can be used to estimate haplotype frequencies with lower prediction error than other existing estimators. Furthermore, the calculations could be performed on a normal computer. This method was implemented in the freely available open source software R that is supported on Linux, MacOS and MS Windows. Copyright © 2013 Elsevier Ltd. All rights reserved.
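
    For intuition, one common parameterization of the discrete Laplace distribution is P(X = k) = ((1 - p) / (1 + p)) p^|k| for integer k. The snippet below shows just this pmf (the paper's full method fits a mixture of such distributions with the EM algorithm, which is not reproduced here; p = 0.3 is an assumed value):

    ```python
    def discrete_laplace_pmf(k, p):
        """P(X = k) = (1 - p) / (1 + p) * p**abs(k), k any integer, 0 < p < 1.
        The pmf sums to one over all integers."""
        assert 0 < p < 1
        return (1 - p) / (1 + p) * p ** abs(k)

    # Probability of an allele lying k repeat units from the subpopulation mode:
    print([round(discrete_laplace_pmf(k, 0.3), 4) for k in range(-3, 4)])
    ```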

  12. Double-observer line transect surveys with Markov-modulated Poisson process models for animal availability.

    PubMed

    Borchers, D L; Langrock, R

    2015-12-01

    We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too. © 2015 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.

  13. Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.

    PubMed

    Jin, Ick Hoon; Yuan, Ying; Liang, Faming

    2013-10-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as a MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.

  14. Growth and mortality of larval Myctophum affine (Myctophidae, Teleostei).

    PubMed

    Namiki, C; Katsuragawa, M; Zani-Teixeira, M L

    2015-04-01

    The growth and mortality rates of Myctophum affine larvae were analysed based on samples collected during the austral summer and winter of 2002 from south-eastern Brazilian waters. The larvae ranged in size from 2·75 to 14·00 mm standard length (L_S). Daily increment counts from 82 sagittal otoliths showed that the age of M. affine ranged from 2 to 28 days. Three models were applied to estimate the growth rate: linear regression, exponential model and Laird-Gompertz model. The exponential model best fitted the data, and L_0 values from the exponential and Laird-Gompertz models were close to the smallest larva reported in the literature (c. 2·5 mm L_S). The average growth rate (0·33 mm day^-1) was intermediate among lanternfishes. The mortality rate (12%) during the larval period was below average compared with other marine fish species but similar to some epipelagic fishes that occur in the area. © 2015 The Fisheries Society of the British Isles.

  15. Exponential evolution: implications for intelligent extraterrestrial life.

    PubMed

    Russell, D A

    1983-01-01

    Some measures of biologic complexity, including maximal levels of brain development, are exponential functions of time through intervals of 10^6 to 10^9 yrs. Biological interactions apparently stimulate evolution but physical conditions determine the time required to achieve a given level of complexity. Trends in brain evolution suggest that other organisms could attain human levels within approximately 10^7 yrs. The number (N) and longevity (L) terms in appropriate modifications of the Drake Equation, together with trends in the evolution of biological complexity on Earth, could provide rough estimates of the prevalence of life forms at specified levels of complexity within the Galaxy. If life occurs throughout the cosmos, exponential evolutionary processes imply that higher intelligence will soon (10^9 yrs) become more prevalent than it now is. Changes in the physical universe become less rapid as time increases from the Big Bang. Changes in biological complexity may be most rapid at such later times. This lends a unique and symmetrical importance to early and late universal times.

  16. Quasiclassical treatment of the Auger effect in slow ion-atom collisions

    NASA Astrophysics Data System (ADS)

    Frémont, F.

    2017-09-01

    A quasiclassical model based on the resolution of Hamilton's equations of motion is used to obtain evidence for Auger electron emission following double-electron capture in 150-keV Ne^10+ + He collisions. Electron-electron interaction is taken into account during the collision by using a pure Coulombic potential. To make sure that the helium target is stable before the collision, phenomenological potentials for the electron-nucleus interactions that simulate the Heisenberg principle are included in addition to the Coulombic potential. First, single- and double-electron captures are determined and compared with previous experiments and theories. Then, the integration time evolution is calculated for autoionizing and nonautoionizing double capture. In contrast with single capture, the number of electrons originating from autoionization slowly increases with integration time. A fit of the calculated cross sections by means of an exponential function indicates that the average lifetime is 4.4 × 10^-3 a.u., in very good agreement with the average lifetime deduced from experiments and a classical model introduced to calculate individual angular momentum distributions. The present calculation demonstrates the ability of classical models to treat the Auger effect, which is a pure quantum effect.

  17. Polynomial Similarity Transformation Theory: A smooth interpolation between coupled cluster doubles and projected BCS applied to the reduced BCS Hamiltonian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degroote, M.; Henderson, T. M.; Zhao, J.

    We present a similarity transformation theory based on a polynomial form of a particle-hole pair excitation operator. In the weakly correlated limit, this polynomial becomes an exponential, leading to coupled cluster doubles. In the opposite strongly correlated limit, the polynomial becomes an extended Bessel expansion and yields the projected BCS wavefunction. In between, we interpolate using a single parameter. The effective Hamiltonian is non-hermitian and this Polynomial Similarity Transformation Theory follows the philosophy of traditional coupled cluster, left projecting the transformed Hamiltonian onto subspaces of the Hilbert space in which the wave function variance is forced to be zero. Similarly, the interpolation parameter is obtained through minimizing the next residual in the projective hierarchy. We rationalize and demonstrate how and why coupled cluster doubles is ill suited to the strongly correlated limit whereas the Bessel expansion remains well behaved. The model provides accurate wave functions with energy errors that in its best variant are smaller than 1% across all interaction strengths. The numerical cost is polynomial in system size and the theory can be straightforwardly applied to any realistic Hamiltonian.

  18. Laser-induced periodic surface structures on 6H-SiC single crystals using temporally delayed femtosecond laser double-pulse trains

    NASA Astrophysics Data System (ADS)

    Song, Juan; Tao, Wenjun; Song, Hui; Gong, Min; Ma, Guohong; Dai, Ye; Zhao, Quanzhong; Qiu, Jianrong

    2016-04-01

    In this paper, a time-delay-adjustable double-pulse train with 800-nm wavelength, 200-fs pulse duration and a repetition rate of 1 kHz, produced by a collinear two-beam optical system like a Mach-Zehnder interferometer, was employed for irradiation of 6H-SiC crystal. The dependence of the induced structures on the time delay of the double-pulse train in the parallel-polarization configuration was studied. The results show that as the time delay of the collinear parallel-polarization dual-pulse train increases, the induced near-subwavelength ripples (NSWRs) turn from an irregular rippled pattern into a regularly periodic pattern and their grooves deepen markedly. The characteristic timescale for this transition is about 6.24 ps. In addition, the NSWR areas were found to decay exponentially for time delays from 0 to 1.24 ps and then slowly increase for time delays from 1.24 to 14.24 ps. Analysis shows that the multiphoton ionization effect, the grating-assisted surface plasmon coupling effect, and the timely intervention of the second pulse during a certain physical stage experienced by 6H-SiC excited by the first pulse may contribute to the transition of the morphology details.

  19. Generating functions for weighted Hurwitz numbers

    NASA Astrophysics Data System (ADS)

    Guay-Paquet, Mathieu; Harnad, J.

    2017-08-01

    Double Hurwitz numbers enumerating weighted n-sheeted branched coverings of the Riemann sphere or, equivalently, weighted paths in the Cayley graph of S_n generated by transpositions are determined by an associated weight generating function. A uniquely determined 1-parameter family of 2D Toda τ-functions of hypergeometric type is shown to consist of generating functions for such weighted Hurwitz numbers. Four classical cases are detailed, in which the weighting is uniform: Okounkov's double Hurwitz numbers, for which the ramification is simple at all but two specified branch points; the case of Belyi curves, with three branch points, two with specified profiles; the general case, with a specified number of branch points, two with fixed profiles, the rest constrained only by the genus; and the signed enumeration case, with sign determined by the parity of the number of branch points. Using the exponentiated quantum dilogarithm function as a weight generator, three new types of weighted enumerations are introduced. These determine quantum Hurwitz numbers depending on a deformation parameter q. By suitable interpretation of q, the statistical mechanics of quantum weighted branched covers may be related to that of bosonic gases. The standard double Hurwitz numbers are recovered in the classical limit.

  20. Transient modeling in simulation of hospital operations for emergency response.

    PubMed

    Paul, Jomon Aliyas; George, Santhosh K; Yi, Pengfei; Lin, Li

    2006-01-01

    Rapid estimates of hospital capacity after an event that may cause a disaster can assist disaster-relief efforts. Due to the dynamics of hospitals following such an event, it is necessary to accurately model the behavior of the system. A transient modeling approach using simulation and exponential functions is presented, along with its applications in an earthquake situation. The parameters of the exponential model are regressed using outputs from designed simulation experiments. The developed model is capable of representing transient patient waiting times during a disaster. Most importantly, the modeling approach allows real-time capacity estimation of hospitals of various sizes and capabilities. Further, this research analyzes the effects of priority-based routing of patients within the hospital on patient waiting times under various patient mixes. The model guides the patients based on the severity of injuries and queues the patients requiring critical care depending on their remaining survivability time. The model also accounts for the impact of prehospital transport time on patient waiting time.

  1. Prevalence of Dermanyssus gallinae (Mesostigmata: Dermanyssidae) in industrial poultry farms in North-East Tunisia

    PubMed Central

    Gharbi, Mohamed; Sakly, Nadhem; Darghouth, Mohamed Aziz

    2013-01-01

    Dermanyssus gallinae (Mesostigmata: Dermanyssidae), a mite of poultry, represents the most important ectoparasite of egg-laying poultry in several countries. We estimated the prevalence of D. gallinae infestation in 38 industrial poultry farms (28 egg-laying and 10 reproductive hen farms) in the governorate of Nabeul (North-East Tunisia). Traps were placed in two locations of each farm during 24 h in August. The overall prevalence at the farms was estimated to be 34%. A total number of 329 D. gallinae were collected, giving an intensity of 0.0028 and an abundance of 0.0015. Infestation intensity and abundance were significantly higher in egg production farms than in reproductive farms. There was no correlation between the intensity of infestation and temperature. An exponential correlation was observed between the birds' age and infestation intensity. We recommend a systematic survey of poultry farms during the whole breeding period. Prompt treatment is recommended to avoid the exponential increase of the mite population. PMID:24160169

  3. Patient- and cohort-specific dose and risk estimation for abdominopelvic CT: a study based on 100 patients

    NASA Astrophysics Data System (ADS)

    Tian, Xiaoyu; Li, Xiang; Segars, W. Paul; Frush, Donald P.; Samei, Ehsan

    2012-03-01

    The purpose of this work was twofold: (a) to estimate patient- and cohort-specific radiation dose and cancer risk index for abdominopelvic computed tomography (CT) scans; (b) to evaluate the effects of patient anatomical characteristics (size, age, and gender) and CT scanner model on dose and risk conversion coefficients. The study included 100 patient models (42 pediatric models, 58 adult models) and multi-detector array CT scanners from two commercial manufacturers (LightSpeed VCT, GE Healthcare; SOMATOM Definition Flash, Siemens Healthcare). A previously validated Monte Carlo program was used to simulate organ dose for each patient model and each scanner, from which DLP-normalized effective dose (k factor) and DLP-normalized risk index (q factor) values were derived. The k factor showed an exponential decrease with increasing patient size. For a given gender, the q factor showed an exponential decrease with both increasing patient size and increasing patient age. The discrepancies in k and q factors across scanners were on average 8% and 15%, respectively. This study demonstrates the feasibility of estimating patient-specific organ dose and cohort-specific effective dose and risk index in abdominopelvic CT requiring only the knowledge of patient size, gender, and age.

  4. Estimation of the Ratio of Scale Parameters in the Two Sample Problem with Arbitrary Right Censorship.

    DTIC Science & Technology

    1980-06-01

    A two-sample version of the Cramér-von Mises statistic for right-censored data ... estimator for exponential distributions. KEY WORDS: Cramér-von Mises distance; Kaplan-Meier estimators; Right censorship; Scale parameter; Hodges and ... Suppose that two positive random variables ... differ in distribution only by their scale parameters. That is, there exists a positive

  5. Dustfall Effect on Hyperspectral Inversion of Chlorophyll Content - a Laboratory Experiment

    NASA Astrophysics Data System (ADS)

    Chen, Yuteng; Ma, Baodong; Li, Xuexin; Zhang, Song; Wu, Lixin

    2018-04-01

    Dust pollution is serious in many areas of China. It is of great significance to estimate the chlorophyll content of vegetation accurately by hyperspectral remote sensing for assessing vegetation growth status and monitoring the ecological environment in dusty areas. Using selected vegetation indices, including the Medium Resolution Imaging Spectrometer Terrestrial Chlorophyll Index (MTCI), the Double Difference Index (DD) and the Red Edge Position Index (REP), chlorophyll inversion models were built to study the accuracy of hyperspectral inversion of chlorophyll content based on a laboratory experiment. The results show that: (1) the REP exponential model has the most stable accuracy for inversion of chlorophyll content in a dusty environment. When the dustfall amount is less than 80 g/m², the inversion accuracy based on REP is stable with the variation of dustfall amount; when it is greater than 80 g/m², the inversion accuracy fluctuates slightly. (2) The inversion accuracy of DD is the worst among the three models. (3) The MTCI logarithm model has high inversion accuracy when the dustfall amount is less than 80 g/m²; when it is greater than 80 g/m², the inversion accuracy decreases regularly and the inversion accuracy of the modified MTCI (mMTCI) increases significantly. The results provide an experimental basis and theoretical reference for hyperspectral remote sensing inversion of chlorophyll content.

  6. Global Perceived Stress Predicts Cognitive Change among Older Adults

    PubMed Central

    Munoz, Elizabeth; Sliwinski, Martin J.; Scott, Stacey B.; Hofer, Scott

    2015-01-01

    Research on stress and cognitive aging has primarily focused on examining the effects of biological and psychosocial indicators of stress, with little attention given to the association between perceived stress and cognitive aging. We examined the longitudinal association between global perceived stress (GPS) and cognitive change among 116 older adults (M_age = 80, SD = 6.40, range: 67-96) in a repeated measurement burst design. Bursts of six daily cognitive assessments were repeated every six months over a two-year period, with self-reported GPS assessed at the start of every burst. Using a double-exponential learning model, two parameters were estimated: 1) asymptotic level (peak performance), and 2) asymptotic change (the rate at which peak performance changed across bursts). We hypothesized that greater GPS would predict slowed performance in tasks of attention, working memory, and speed of processing and that increases in GPS across time would predict cognitive slowing. Results from latent growth curve analyses were consistent with our first hypothesis and indicated that level of GPS predicted cognitive slowing across time. Changes in GPS did not predict cognitive slowing. This study extends previous findings by demonstrating a prospective association between level of GPS and cognitive slowing across a two-year period, highlighting the role of psychological stress as a risk factor for poor cognitive function. PMID:26121285
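
    The exact parameterization used in the study is not given in the abstract; one plausible reading of a double-exponential learning model is a fast plus a slow exponential component approaching an asymptote, as sketched below on synthetic response times (all values assumed):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def double_exp_learning(trial, asymptote, b1, r1, b2, r2):
        # Response time falls toward `asymptote` as the sum of a fast and a
        # slow exponential component across repeated assessments.
        return asymptote + b1 * np.exp(-r1 * trial) + b2 * np.exp(-r2 * trial)

    trials = np.arange(1, 37)  # e.g., repeated daily assessments (assumed)
    rng = np.random.default_rng(2)
    rt = double_exp_learning(trials, 650, 220, 0.9, 90, 0.05) \
         + rng.normal(0, 8, trials.size)

    popt, _ = curve_fit(double_exp_learning, trials, rt,
                        p0=(600, 200, 1.0, 100, 0.1), maxfev=10000)
    print("asymptotic level (peak performance), ms:", round(popt[0], 1))
    ```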

  7. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However in shales, substantial hydrogen content is associated with solid and fluid signals and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in material, medical and food sciences.

  8. Reliable Viscosity Calculation from Equilibrium Molecular Dynamics Simulations: A Time Decomposition Method.

    PubMed

    Zhang, Yong; Otani, Akihito; Maginn, Edward J

    2015-08-11

    Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time t_cut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and a relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
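
    The abstract specifies a double-exponential fit to the averaged running integral, weighted by the spread across trajectories. Below is a sketch of that recipe with synthetic stand-in data (the exact parameterization of the fitting function is an assumption):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def double_exp_integral(t, A, alpha, tau1, tau2):
        # Running-integral model; its t -> infinity limit, the viscosity,
        # is A * (alpha * tau1 + (1 - alpha) * tau2).
        return (A * alpha * tau1 * (1 - np.exp(-t / tau1))
                + A * (1 - alpha) * tau2 * (1 - np.exp(-t / tau2)))

    t = np.linspace(0.01, 20.0, 400)  # ps (assumed grid)
    rng = np.random.default_rng(3)
    true = double_exp_integral(t, 1.0, 0.4, 0.6, 6.0)
    # Stand-in for 30 independent trajectories; the spread grows with time,
    # as it does for real Green-Kubo running integrals.
    eta_runs = true + rng.normal(0, 0.02, (30, t.size)) * np.sqrt(t)

    mean_eta = eta_runs.mean(axis=0)
    std_eta = eta_runs.std(axis=0, ddof=1)
    popt, _ = curve_fit(double_exp_integral, t, mean_eta, sigma=std_eta,
                        p0=(1.0, 0.5, 0.5, 5.0), maxfev=20000)
    A, alpha, tau1, tau2 = popt
    print("viscosity estimate:", A * (alpha * tau1 + (1 - alpha) * tau2))
    ```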

  9. A statistical approach for generating synthetic tip stress data from limited CPT soundings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basalams, M.K.

    CPT tip stress data obtained from a uranium mill tailings impoundment are treated as time series. A statistical class of models that was developed to model time series is explored to investigate its applicability in modeling the tip stress series. These models were developed by Box and Jenkins (1970) and are known as Autoregressive Moving Average (ARMA) models. This research demonstrates how to apply the ARMA models to tip stress series. Generation of synthetic tip stress series that preserve the main statistical characteristics of the measured series is also investigated. Multiple regression analysis is used to model the regional variation of the ARMA model parameters as well as the regional variation of the mean and the standard deviation of the measured tip stress series. The reliability of the generated series is investigated from a geotechnical point of view as well as from a statistical point of view. Estimates of the total settlement using the measured and the generated series subjected to the same loading condition are performed. The variation of the friction angle with depth of the impoundment materials is also investigated. This research shows that these series can be modeled by the Box and Jenkins ARMA models. A third-degree autoregressive model AR(3) is selected to represent these series. A theoretical double exponential density function is fitted to the AR(3) model residuals. Synthetic tip stress series are generated at nearby locations. The generated series are shown to be reliable in estimating the total settlement and the friction angle variation with depth for this particular site.
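
    Generating a synthetic series of the kind described takes only a few lines: an AR(3) recursion driven by double-exponential (Laplace) innovations, rescaled to site statistics. All numbers below are assumed for illustration, not the fitted values from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    phi = np.array([0.6, 0.2, 0.1])  # assumed AR(3) coefficients (stationary)
    n, burn = 500, 100

    # Double-exponential (Laplace) innovations, per the fitted residual density.
    eps = rng.laplace(loc=0.0, scale=1.0, size=n + burn)

    x = np.zeros(n + burn)
    for i in range(3, n + burn):
        x[i] = phi @ x[i - 3:i][::-1] + eps[i]  # x[i-1], x[i-2], x[i-3] terms

    # Rescale to assumed regional mean/std of the measured tip stress series.
    tip_mean, tip_std = 12.0, 4.0  # MPa, illustrative
    z = x[burn:]
    synthetic = tip_mean + tip_std * (z - z.mean()) / z.std()
    ```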

  10. CO2 flux determination by closed-chamber methods can be seriously biased by inappropriate application of linear regression

    NASA Astrophysics Data System (ADS)

    Kutzbach, L.; Schneider, J.; Sachs, T.; Giebels, M.; Nykänen, H.; Shurpali, N. J.; Martikainen, P. J.; Alm, J.; Wilmking, M.

    2007-07-01

    Closed (non-steady state) chambers are widely used for quantifying carbon dioxide (CO2) fluxes between soils or low-stature canopies and the atmosphere. It is well recognised that covering a soil or vegetation by a closed chamber inherently disturbs the natural CO2 fluxes by altering the concentration gradients between the soil, the vegetation and the overlying air. Thus, the driving factors of CO2 fluxes are not constant during the closed chamber experiment, and no linear increase or decrease of CO2 concentration over time within the chamber headspace can be expected. Nevertheless, linear regression has been applied for calculating CO2 fluxes in many recent, partly influential, studies. This approach was justified by keeping the closure time short and assuming the concentration change over time to be in the linear range. Here, we test whether the application of linear regression is really appropriate for estimating CO2 fluxes using closed chambers over short closure times and whether the application of nonlinear regression is necessary. We developed a nonlinear exponential regression model from diffusion and photosynthesis theory. This exponential model was tested with four different datasets of CO2 flux measurements (total number: 1764) conducted at three peatland sites in Finland and a tundra site in Siberia. The flux measurements were performed using transparent chambers on vegetated surfaces and opaque chambers on bare peat surfaces. Thorough analyses of residuals demonstrated that linear regression was frequently not appropriate for the determination of CO2 fluxes by closed-chamber methods, even if closure times were kept short. The developed exponential model was well suited for nonlinear regression of the concentration evolution over time, c(t), in the chamber headspace and for estimation of the initial CO2 fluxes at closure time for the majority of experiments. CO2 flux estimates by linear regression can be as low as 40% of the flux estimates of exponential regression for closure times of only two minutes, and even lower for longer closure times. The degree of underestimation increased with increasing CO2 flux strength and is dependent on soil and vegetation conditions, which can disturb not only the quantitative but also the qualitative evaluation of CO2 flux dynamics. The underestimation effect of linear regression was observed to differ between CO2 uptake and release situations, which can lead to stronger bias in the daily, seasonal and annual CO2 balances than in the individual fluxes. To avoid serious bias of CO2 flux estimates based on closed chamber experiments, we suggest further tests using published datasets and recommend the use of nonlinear regression models for future closed chamber studies.
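
    The difference between the two regression choices is easy to demonstrate. Below, synthetic headspace concentrations follow a saturating exponential c(t) = c_s + (c_0 - c_s) exp(-k t) (one simple form consistent with the diffusion argument; all numbers assumed), and the initial flux implied by the exponential fit is compared with a straight-line slope:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def chamber_exp(t, c0, cs, k):
        # Headspace concentration relaxes toward a saturation value cs.
        return cs + (c0 - cs) * np.exp(-k * t)

    t = np.linspace(0, 120, 25)  # s, a two-minute closure (assumed)
    rng = np.random.default_rng(5)
    c = chamber_exp(t, 380.0, 520.0, 0.01) + rng.normal(0, 0.5, t.size)  # ppm

    (c0, cs, k), _ = curve_fit(chamber_exp, t, c, p0=(c[0], c[-1], 0.005))
    slope_exp = k * (cs - c0)           # initial slope dc/dt at closure, ppm/s
    slope_lin = np.polyfit(t, c, 1)[0]  # naive linear-regression slope
    print(f"exponential: {slope_exp:.2f} ppm/s, linear: {slope_lin:.2f} ppm/s")
    ```

    Even over this short closure the linear slope comes out well below the exponential initial slope, illustrating the underestimation the study quantifies.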

  11. Fourth order exponential time differencing method with local discontinuous Galerkin approximation for coupled nonlinear Schrodinger equations

    DOE PAGES

    Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong

    2015-01-23

    In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and we prove error estimates for the semi-discrete methods applied to the linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.

  12. Ultrascale Visualization of Climate Data

    NASA Technical Reports Server (NTRS)

    Williams, Dean N.; Bremer, Timo; Doutriaux, Charles; Patchett, John; Williams, Sean; Shipman, Galen; Miller, Ross; Pugmire, David R.; Smith, Brian; Steed, Chad

    2013-01-01

    Fueled by exponential increases in the computational and storage capabilities of high-performance computing platforms, climate simulations are evolving toward higher numerical fidelity, complexity, volume, and dimensionality. These technological breakthroughs are coming at a time of exponential growth in climate data, with estimates of hundreds of exabytes by 2020. To meet the challenges and exploit the opportunities that such explosive growth affords, a consortium of four national laboratories, two universities, a government agency, and two private companies formed to explore the next wave in climate science. Working in close collaboration with domain experts, the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) project aims to provide high-level solutions to a variety of climate data analysis and visualization problems.

  13. A fully redundant double difference algorithm for obtaining minimum variance estimates from GPS observations

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.

    1986-01-01

    In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
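
    A toy numpy sketch of why full sampling yields colored statistics (four satellites, unit single-difference variance, all assumed): every double difference shares the reference satellite, so the differenced-error covariance is non-diagonal, and its inverse is the weighting matrix the algorithm requires.

    ```python
    import numpy as np

    n_sat = 4
    # Double-difference operator: each satellite differenced against satellite 0.
    D = np.zeros((n_sat - 1, n_sat))
    D[:, 0] = -1.0
    D[np.arange(n_sat - 1), np.arange(1, n_sat)] = 1.0

    sigma2 = 1.0
    cov_sd = sigma2 * np.eye(n_sat)  # white between-receiver differences
    cov_dd = D @ cov_sd @ D.T        # 2*sigma2 on the diagonal, sigma2 off it

    W = np.linalg.inv(cov_dd)        # least-squares weight matrix
    print(cov_dd)
    ```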

  14. Broadband Spectral Modeling of the Extreme Gigahertz-peaked Spectrum Radio Source PKS B0008-421

    NASA Astrophysics Data System (ADS)

    Callingham, J. R.; Gaensler, B. M.; Ekers, R. D.; Tingay, S. J.; Wayth, R. B.; Morgan, J.; Bernardi, G.; Bell, M. E.; Bhat, R.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Deshpande, A. A.; Ewall-Wice, A.; Feng, L.; Greenhill, L. J.; Hazelton, B. J.; Hindson, L.; Hurley-Walker, N.; Jacobs, D. C.; Johnston-Hollitt, M.; Kaplan, D. L.; Kudrayvtseva, N.; Lenc, E.; Lonsdale, C. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Oberoi, D.; Offringa, A. R.; Ord, S. M.; Pindor, B.; Prabu, T.; Procopio, P.; Riding, J.; Srivani, K. S.; Subrahmanyan, R.; Udaya Shankar, N.; Webster, R. L.; Williams, A.; Williams, C. L.

    2015-08-01

    We present broadband observations and spectral modeling of PKS B0008-421 and identify it as an extreme gigahertz-peaked spectrum (GPS) source. PKS B0008-421 is characterized by the steepest known spectral slope below the turnover, close to the theoretical limit of synchrotron self-absorption, and the smallest known spectral width of any GPS source. Spectral coverage of the source spans from 0.118 to 22 GHz, which includes data from the Murchison Widefield Array and the wide bandpass receivers on the Australia Telescope Compact Array. We have implemented a Bayesian inference model fitting routine to fit the data with internal free-free absorption (FFA), single- and double-component FFA in an external homogeneous medium, FFA in an external inhomogeneous medium, or single- and double-component synchrotron self-absorption models, all with and without a high-frequency exponential break. We find that without the inclusion of a high-frequency break these models cannot accurately fit the data, with significant deviations above and below the peak in the radio spectrum. The addition of a high-frequency break provides acceptable spectral fits for the inhomogeneous FFA and double-component synchrotron self-absorption models, with the inhomogeneous FFA model statistically favored. The requirement of a high-frequency spectral break implies that the source has ceased injecting fresh particles. Additional support for the inhomogeneous FFA model as being responsible for the turnover in the spectrum is given by the consistency between the physical parameters derived from the model fit and the implications of the exponential spectral break, such as the necessity of the source being surrounded by a dense ambient medium to maintain the peak frequency near the gigahertz region. This implies that PKS B0008-421 should display an internal H I column density greater than 10^20 cm^-2. The discovery of PKS B0008-421 suggests that the next generation of low radio frequency surveys could reveal a large population of GPS sources that have ceased activity, and that a portion of the ultra-steep-spectrum source population could be composed of these GPS sources in a relic phase.

  15. Non-invasive thermal IR detection of breast tumor development in vivo

    NASA Astrophysics Data System (ADS)

    Case, Jason R.; Young, Madison A.; Dréau, D.; Trammell, Susan R.

    2015-03-01

    Lumpectomy coupled with radiation therapy and/or chemotherapy comprises the treatment of breast cancer for many patients. We are developing an enhanced thermal IR imaging technique that can be used in real-time to guide tissue excision during a lumpectomy. This novel enhanced thermal imaging method is a combination of IR imaging (8-10 μm) and selective heating of blood (~0.5 °C) relative to surrounding water-rich tissue using LED sources at low powers. Post-acquisition processing of these images highlights temporal changes in temperature and is sensitive to the presence of vascular structures. In this study, fluorescent and enhanced thermal imaging modalities were used to estimate breast cancer tumor volumes as a function of time in 19 murine subjects over a 30-day study period. Tumor volumes calculated from fluorescent imaging follow an exponential growth curve for the first 22 days of the study. Cell necrosis affected the tumor volume estimates based on the fluorescent images after Day 22. The tumor volumes estimated from enhanced thermal imaging show exponential growth over the entire study period. A strong correlation was found between tumor volumes estimated using fluorescent imaging and the enhanced IR images, indicating that enhanced thermal imaging is capable of monitoring tumor growth. Further, the enhanced IR images reveal a corona of bright emission along the edges of the tumor masses. This novel IR technique could be used to estimate tumor margins in real-time during surgical procedures.

  16. On the Nature and History of Blue Amorphous Galaxies

    NASA Astrophysics Data System (ADS)

    Marlowe, Amanda True

    1998-07-01

    Dwarf galaxies play an important role in our understanding of galaxy formation and evolution. We have embarked on a systematic study of 12 blue amorphous galaxies (BAGs) whose properties suggest that they are dwarf galaxies in a starburst or post-burst state. It seems likely that BAGs are related to other 'starburst' dwarfs such as blue compact dwarfs (BCDs) and HII galaxies. The BAGs in our sample, however, are considerably closer than the BCDs and HII galaxies in other samples, and therefore easier to study. These galaxies may offer important insights into dwarf galaxy evolution. In an effort to clarify the role of BAGs in evolutionary scenarios for dwarf galaxies, we present and analyze Hα and UBVI data for our sample. BAGs, like BCDs and HII galaxies, have surface brightness profiles that are exponential in the outer regions but show a predominantly blue central excess, suggesting a young burst in an older, redder galaxy. Seven of the galaxies have the bubble or filamentary Hα morphology and double-peaked emission lines that are the signature of superbubbles or superwind activity. These galaxies are typically the ones with the strongest central excesses. The starbursting regions are young events compared to the older underlying galaxy, which follows an exponential surface brightness law. Not all of the galaxies develop superwinds: the appearance of superwinds is most sensitive to the concentration and rate of star formation in the starbursting core. The underlying exponential galaxies are very similar to those found in BCDs and HII galaxies, though the 'burst' colors are slightly redder than those found in HII galaxies. BAGs are structurally similar to BCDs and HII galaxies. How BAGs fit into the dwarf galaxy evolutionary debate is less clear. While some compact dIs have properties similar to those of the underlying exponential galaxy in our sample, issues such as mass loss from superwinds, the impact of the starbursting core on the underlying galaxy, and fading complicate the search for BAG progenitor and evolved or faded BAG galaxy classes.

  17. pyGFC - A Python Extension to the C++ Geodesy Foundation Classes

    DTIC Science & Technology

    2008-09-01

    imperative for a successful emulation of a dynamic MANET as intended. To achieve this objective, the same algorithm and its implementation for...solution has two options: (1) selecting, implementing, and integrating an appropriate algorithm into the tool and in the MANE system, or (2) using an...GFCCoord *estimate_southeast_coords(double, double); double lat; double lon; double alt; private: CEarth *earth; char * cstr

  18. Cross-bispectrum computation and variance estimation

    NASA Technical Reports Server (NTRS)

    Lii, K. S.; Helland, K. N.

    1981-01-01

    A method for the estimation of cross-bispectra of discrete real time series is developed. The asymptotic variance properties of the bispectrum are reviewed, and a method for the direct estimation of bispectral variance is given. The symmetry properties are described which minimize the computations necessary to obtain a complete estimate of the cross-bispectrum in the right-half-plane. A procedure is given for computing the cross-bispectrum by subdividing the domain into rectangular averaging regions which help reduce the variance of the estimates and allow easy application of the symmetry relationships to minimize the computational effort. As an example of the procedure, the cross-bispectrum of a numerically generated, exponentially distributed time series is computed and compared with theory.

  19. An increased estimate of the merger rate of double neutron stars from observations of a highly relativistic system.

    PubMed

    Burgay, M; D'Amico, N; Possenti, A; Manchester, R N; Lyne, A G; Joshi, B C; McLaughlin, M A; Kramer, M; Sarkissian, J M; Camilo, F; Kalogera, V; Kim, C; Lorimer, D R

    2003-12-04

    The merger of close binary systems containing two neutron stars should produce a burst of gravitational waves, as predicted by the theory of general relativity. A reliable estimate of the double-neutron-star merger rate in the Galaxy is crucial in order to predict whether current gravitational-wave detectors will be successful in detecting such bursts. Present estimates of this rate are rather low, because we know of only a few double-neutron-star binaries with merger times less than the age of the Universe. Here we report the discovery of a 22-ms pulsar, PSR J0737-3039, which is a member of a highly relativistic double-neutron-star binary with an orbital period of 2.4 hours. This system will merge in about 85 Myr, a time much shorter than for any other known neutron-star binary. Together with the relatively low radio luminosity of PSR J0737-3039, this timescale implies an order-of-magnitude increase in the predicted merger rate for double-neutron-star systems in our Galaxy (and in the rest of the Universe).

  20. Mean Excess Function as a method of identifying sub-exponential tails: Application to extreme daily rainfall

    NASA Astrophysics Data System (ADS)

    Nerantzaki, Sofia; Papalexiou, Simon Michael

    2017-04-01

    Identifying precisely the distribution tail of a geophysical variable is tough, or even impossible. First, the tail is the part of the distribution for which the least empirical information is available; second, a universally accepted definition of tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution, as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can be generally categorized into two major families, i.e., sub-exponentials (heavy-tailed) and hyper-exponentials (light-tailed). This study aims to update the Mean Excess Function (MEF), providing a useful tool to assess which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and results in a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. The validation of the method using Monte Carlo techniques on four theoretical distributions covering major tail cases (Pareto type II, Log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records from all over the world with sample sizes over 100 years were examined, revealing that heavy-tailed distributions describe rainfall extremes more accurately.
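
    A rough illustration of the MEF diagnostic (a sketch, not the authors' code): for exponentially tailed data the empirical mean excess e(u) = E[X − u | X > u] is flat in u, while a rising trend suggests a sub-exponential tail. The sample below is synthetic Pareto data:

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.pareto(3.0, 5000) + 1.0          # synthetic heavy-tailed sample

        thresholds = np.quantile(x, np.linspace(0.05, 0.95, 40))
        mef = np.array([(x[x > u] - u).mean() for u in thresholds])

        slope = np.polyfit(thresholds, mef, 1)[0]
        print(f"MEF regression slope = {slope:.3f} (near 0 would suggest an exponential tail)")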

  1. Accuracy and equivalence testing of crown ratio models and assessment of their impact on diameter growth and basal area increment predictions of two variants of the Forest Vegetation Simulator

    Treesearch

    Laura P. Leites; Andrew P. Robinson; Nicholas L. Crookston

    2009-01-01

    Diameter growth (DG) equations in many existing forest growth and yield models use tree crown ratio (CR) as a predictor variable. Where CR is not measured, it is estimated from other measured variables. We evaluated CR estimation accuracy for the models in two Forest Vegetation Simulator variants: the exponential and the logistic CR models used in the North...

  2. Sparse Recovery via Differential Inclusions

    DTIC Science & Technology

    2014-07-01

    2242. [Wai09] Martin J. Wainwright, Sharp thresholds for high-dimensional and noisy sparsity recovery using l1-constrained quadratic programming...solution, (1.11) β_t = 0 if t < 1/y, and β_t = y(1 − e^(−κ(t−1/y))) otherwise, which converges to the unbiased Bregman ISS estimator exponentially fast. Let us ...are not given the support set S, so the following two properties are used to evaluate the performance of an estimator β̂. 1. Model selection

  3. Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality

    NASA Technical Reports Server (NTRS)

    Acikmese, Ahmet Behcet; Corless, Martin

    2004-01-01

    We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These results unify earlier observer results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error converges exponentially to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. Results are illustrated by application to a simple model of an underwater vehicle.

  4. Estimating the variance and integral scale of the transmissivity field using head residual increments

    USGS Publications Warehouse

    Zheng, Li; Silliman, Stephen E.

    2000-01-01

    A modification of previously published solutions regarding the spatial variation of hydraulic heads is discussed whereby the semivariogram of increments of head residuals (termed head residual increments, HRIs) is related to the variance and integral scale of the transmissivity field. A first‐order solution is developed for the case of a transmissivity field which is isotropic and whose second‐order behavior can be characterized by an exponential covariance structure. The estimates of the variance σY2 and the integral scale λ of the log transmissivity field are then obtained via fitting a theoretical semivariogram for the HRI to its sample semivariogram. This approach is applied to head data sampled from a series of two‐dimensional, simulated aquifers with isotropic, exponential covariance structures and varying degrees of heterogeneity (σY2 = 0.25, 0.5, 1.0, 2.0, and 5.0). The results show that this method provided reliable estimates for both λ and σY2 in aquifers with values of σY2 up to 2.0, but the errors in those estimates were higher for σY2 equal to 5.0. It is also demonstrated through numerical experiments and theoretical arguments that the head residual increments will provide a sample semivariogram with a lower variance than will the use of the head residuals without calculation of increments.

  5. Second cancer risk after 3D-CRT, IMRT and VMAT for breast cancer.

    PubMed

    Abo-Madyan, Yasser; Aziz, Muhammad Hammad; Aly, Moamen M O M; Schneider, Frank; Sperk, Elena; Clausen, Sven; Giordano, Frank A; Herskind, Carsten; Steil, Volker; Wenz, Frederik; Glatting, Gerhard

    2014-03-01

    Second cancer risk after breast conserving therapy is becoming more important due to improved long-term survival rates. In this study, we estimate the risks for developing a solid second cancer after radiotherapy of breast cancer using the concept of organ equivalent dose (OED). Computed tomography scans of 10 representative breast cancer patients were selected for this study. Three-dimensional conformal radiotherapy (3D-CRT), tangential intensity modulated radiotherapy (t-IMRT), multibeam intensity modulated radiotherapy (m-IMRT), and volumetric modulated arc therapy (VMAT) were planned to deliver a total dose of 50 Gy in 2 Gy fractions. Differential dose volume histograms (dDVHs) were created and the OEDs calculated. Second cancer risks of the ipsilateral lung, contralateral lung, and contralateral breast were estimated using linear, linear-exponential, and plateau models for second cancer risk. Compared to 3D-CRT, cumulative excess absolute risks (EAR) for t-IMRT, m-IMRT, and VMAT were increased by 2 ± 15%, 131 ± 85%, and 123 ± 66% for the linear-exponential risk model; 9 ± 22%, 82 ± 96%, and 71 ± 82% for the linear model; and 3 ± 14%, 123 ± 78%, and 113 ± 61% for the plateau model, respectively. Second cancer risk after 3D-CRT or t-IMRT is lower than for m-IMRT or VMAT by about 34% for the linear model and 50% for the linear-exponential and plateau models. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
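
    The three risk models named above can be made concrete with the organ-equivalent-dose definitions commonly used in the second-cancer literature; the sketch below assumes those standard forms, and the dDVH shape and the parameters alpha and delta are placeholders, not values from this study:

        import numpy as np

        def oed_linear(v, d):
            return np.average(d, weights=v)                    # reduces to mean organ dose

        def oed_linear_exponential(v, d, alpha=0.1):
            return np.average(d * np.exp(-alpha * d), weights=v)

        def oed_plateau(v, d, delta=0.1):
            return np.average((1.0 - np.exp(-delta * d)) / delta, weights=v)

        # placeholder differential DVH: dose bins (Gy) and organ-volume fractions
        dose = np.linspace(0.25, 49.75, 100)
        vol = np.exp(-dose / 10.0)
        vol /= vol.sum()

        for f in (oed_linear, oed_linear_exponential, oed_plateau):
            print(f"{f.__name__}: {f(vol, dose):.2f} Gy")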

  6. Biased phylodynamic inferences from analysing clusters of viral sequences

    PubMed Central

    Xiang, Fei; Frost, Simon D. W.

    2017-01-01

    Phylogenetic methods are being increasingly used to help understand the transmission dynamics of measurably evolving viruses, including HIV. Clusters of highly similar sequences are often observed, which appear to follow a ‘power law’ behaviour, with a small number of very large clusters. These clusters may help to identify subpopulations in an epidemic, and inform where intervention strategies should be implemented. However, clustering of samples does not necessarily imply the presence of a subpopulation with high transmission rates, as groups of closely related viruses can also occur due to non-epidemiological effects such as over-sampling. It is important to ensure that observed phylogenetic clustering reflects true heterogeneity in the transmitting population, and is not being driven by non-epidemiological effects. We quantify the effect of using a falsely identified ‘transmission cluster’ of sequences to estimate phylodynamic parameters, including the effective population size and exponential growth rate, under several demographic scenarios. Our simulation studies show that taking the maximum-size cluster to re-estimate parameters from trees simulated under a randomly mixing, constant-population-size coalescent process systematically underestimates the overall effective population size. In addition, the transmission cluster wrongly resembles an exponential or logistic growth model 99% of the time. We also illustrate the consequences of false clusters in exponentially growing coalescent and birth-death trees, where again, the growth rate is skewed upwards. This has clear implications for identifying clusters in large viral databases, where a false cluster could result in wasted intervention resources. PMID:28852573

  7. Dramatic changes in muscle contractile and structural properties after 2 botulinum toxin injections.

    PubMed

    Minamoto, Viviane B; Suzuki, Kentaro P; Bremner, Shannon N; Lieber, Richard L; Ward, Samuel R

    2015-10-01

    Botulinum toxin is frequently administered serially to maintain therapeutic muscle paralysis, but the effects of repeated doses on muscle function are largely unknown. This study characterized the muscle response to 2 onabotulinum toxin (BoNT) injections separated by 3 months. Animal subjects received a single toxin injection (n = 8), 2 BoNT injections separated by 3 months (n = 14), or 1 BoNT and 1 saline injection separated by 3 months (n = 8). The functional effect of 2 serial injections was exponentially greater than the effect of a single injection. While both groups treated with a single BoNT injection had decreased torque in the injected leg by approximately 50% relative to contralateral legs, the double BoNT injected group had decreased torque by over 95% relative to the preinjection level. Both single and double BoNT injections produced clear signs of fiber-type grouping. These experiments demonstrate a disproportionately greater effect of repeated BoNT injections. © 2015 Wiley Periodicals, Inc.

  8. Univariate and Bivariate Loglinear Models for Discrete Test Score Distributions.

    ERIC Educational Resources Information Center

    Holland, Paul W.; Thayer, Dorothy T.

    2000-01-01

    Applied the theory of exponential families of distributions to the problem of fitting the univariate histograms and discrete bivariate frequency distributions that often arise in the analysis of test scores. Considers efficient computation of the maximum likelihood estimates of the parameters using Newton's Method and computationally efficient…

  9. Practical extension of a Lake States tree height model

    Treesearch

    Don C. Bragg

    2008-01-01

    By adapting data from national and state champion lists and the predictions of an existing height model, an exponential function was developed to improve tree height estimation. As a case study, comparisons between the original and redesigned model were made with eastern white pine (Pinus strobus L.). For example, the heights...

  10. Baldcypress Height-Diameter Equations and Their Prediction Confidence Intervals

    Treesearch

    Bernard R. Parresol

    1992-01-01

    Height-diameter relationships are an important component in yield estimation, stand description, and damage appraisals. A nonlinear exponential function used extensively in the northwest United States was chosen for bald cypress (Taxodium distichum (L.) Rich.). Homogeneity and normality of residuals were examined, and the function as well as the...

  11. Split exponential track length estimator for Monte-Carlo simulations of small-animal radiation therapy

    NASA Astrophysics Data System (ADS)

    Smekens, F.; Létang, J. M.; Noblet, C.; Chiavassa, S.; Delpon, G.; Freud, N.; Rit, S.; Sarrut, D.

    2014-12-01

    We propose the split exponential track length estimator (seTLE), a new kerma-based method combining the exponential variant of the TLE and a splitting strategy to speed up Monte Carlo (MC) dose computation for low energy photon beams. The splitting strategy is applied to both the primary and the secondary emitted photons, triggered by either the MC events generator for primaries or the photon interactions generator for secondaries. Split photons are replaced by virtual particles for fast dose calculation using the exponential TLE. Virtual particles are propagated by ray-tracing in voxelized volumes and by conventional MC navigation elsewhere. Hence, the contribution of volumes such as collimators, treatment couch and holding devices can be taken into account in the dose calculation. We evaluated and analysed the seTLE method for two realistic small animal radiotherapy treatment plans. The effect of the kerma approximation, i.e. the complete deactivation of electron transport, was investigated. The efficiency of seTLE against splitting multiplicities was also studied. A benchmark with analog MC and TLE was carried out in terms of dose convergence and efficiency. The results showed that the deactivation of electrons impacts the dose at the water/bone interface in high dose regions. The maximum and mean dose differences normalized to the dose at the isocenter were 14% and 2%, respectively. Optimal splitting multiplicities were found to be around 300. In all situations, discrepancies in integral dose were below 0.5% and 99.8% of the voxels fulfilled a 1%/0.3 mm gamma index criterion. Efficiency gains of seTLE varied from 3.2 × 10⁵ to 7.7 × 10⁵ compared to analog MC and from 13 to 15 compared to conventional TLE. In conclusion, seTLE provides results similar to the TLE while increasing the efficiency by a factor between 13 and 15, which makes it particularly well-suited to typical small animal radiation therapy applications.

  12. Quantitative differentiation of breast lesions at 3T diffusion-weighted imaging (DWI) using the ratio of distributed diffusion coefficient (DDC).

    PubMed

    Ertas, Gokhan; Onaygil, Can; Akin, Yasin; Kaya, Handan; Aribal, Erkin

    2016-12-01

    To investigate the accuracy of diffusion coefficients and diffusion coefficient ratios of breast lesions and of glandular breast tissue from mono- and stretched-exponential models for quantitative diagnosis in diffusion-weighted magnetic resonance imaging (MRI). We analyzed 170 pathologically confirmed lesions (85 benign and 85 malignant) imaged using a 3.0T MR scanner. Small regions of interest (ROIs) focusing on the highest signal intensity for lesions and also for glandular tissue of the contralateral breast were obtained. Apparent diffusion coefficient (ADC) and distributed diffusion coefficient (DDC) were estimated by performing nonlinear fittings using mono- and stretched-exponential models, respectively. Coefficient ratios were calculated by dividing the lesion coefficient by the glandular tissue coefficient. The stretched-exponential model provides significantly better fits than the mono-exponential model (P < 0.001): 65% of the better fits for glandular tissue and 71% of the better fits for lesions. High correlation was found between the models in diffusion coefficients (0.99-0.81) and coefficient ratios (0.94). The highest diagnostic accuracy was found for the DDC ratio (area under the curve [AUC] = 0.93) when compared with lesion DDC, ADC ratio, and lesion ADC (AUC = 0.91, 0.90, 0.90), but with no statistically significant difference (P > 0.05). At optimal thresholds, the DDC ratio achieves 93% sensitivity, 80% specificity, and 87% overall diagnostic accuracy, while the ADC ratio leads to 89% sensitivity, 78% specificity, and 83% overall diagnostic accuracy. The stretched-exponential model fits better with signal intensity measurements from both lesion and glandular tissue ROIs. Although the DDC ratio estimated by using the model shows a higher diagnostic accuracy than the ADC ratio, lesion DDC, and ADC, the difference is not statistically significant. J. Magn. Reson. Imaging 2016;44:1633-1641. © 2016 International Society for Magnetic Resonance in Medicine.
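
    A minimal sketch of the two signal models compared above, S(b) = S0·exp(−b·ADC) and S(b) = S0·exp(−(b·DDC)^α), fitted by nonlinear least squares; the b-values and signals are illustrative, and in the study the DDC ratio would then be the lesion DDC divided by the glandular-tissue DDC:

        import numpy as np
        from scipy.optimize import curve_fit

        def mono_exp(b, s0, adc):
            return s0 * np.exp(-b * adc)

        def stretched_exp(b, s0, ddc, alpha):
            return s0 * np.exp(-np.power(b * ddc, alpha))

        b = np.array([0, 250, 500, 750, 1000], dtype=float)     # s/mm^2
        sig = np.array([1.00, 0.78, 0.64, 0.55, 0.48])          # normalized signal (illustrative)

        (_, adc), _ = curve_fit(mono_exp, b, sig, p0=(1.0, 1e-3))
        (_, ddc, alpha), _ = curve_fit(stretched_exp, b, sig, p0=(1.0, 1e-3, 0.9),
                                       bounds=([0, 0, 0.1], [2, 0.1, 1.0]))
        print(f"ADC = {adc:.2e} mm^2/s, DDC = {ddc:.2e} mm^2/s, alpha = {alpha:.2f}")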

  13. Intra-Individual Response Variability Assessed by Ex-Gaussian Analysis may be a New Endophenotype for Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Henríquez-Henríquez, Marcela Patricia; Billeke, Pablo; Henríquez, Hugo; Zamorano, Francisco Javier; Rothhammer, Francisco; Aboitiz, Francisco

    2014-01-01

    Intra-individual variability of response times (RTisv) is considered a potential endophenotype for attention-deficit/hyperactivity disorder (ADHD). Traditional methods for estimating RTisv lose information regarding the distribution of response times (RTs) along the task, with eventual effects on statistical power. Ex-Gaussian analysis captures the dynamic nature of RTisv, estimating normal and exponential components of the RT distribution, with specific phenomenological correlates. Here, we applied ex-Gaussian analysis to explore whether intra-individual variability of RTs agrees with the criteria proposed by Gottesman and Gould for endophenotypes. Specifically, we evaluated whether the normal and/or exponential components of RTs may (a) present the stair-like distribution expected for endophenotypes (ADHD > siblings > typically developing children (TD) without familial history of ADHD) and (b) represent a phenotypic correlate for previously described genetic risk variants. This is a pilot study including 55 subjects (20 ADHD-discordant sibling pairs and 15 TD children), all aged between 8 and 13 years. Participants performed a visual Go/Nogo task with 10% Nogo probability. Ex-Gaussian distributions were fitted to individual RT data and compared among the three samples. In order to test whether intra-individual variability may represent a correlate for previously described genetic risk variants, VNTRs at DRD4 and SLC6A3 were identified in all sibling pairs following standard protocols. Groups were compared by adjusting independent general linear models for the exponential and normal components from the ex-Gaussian analysis. Identified trends were confirmed by the non-parametric Jonckheere-Terpstra test. Stair-like distributions were observed for μ (p = 0.036) and σ (p = 0.009). An additional "DRD4-genotype" × "clinical status" interaction was present for τ (p = 0.014), reflecting a possible severity factor. Thus, normal and exponential RTisv components are suitable as ADHD endophenotypes.
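
    A sketch of an ex-Gaussian fit of the kind described, using SciPy's exponnorm (an exponentially modified Gaussian); the synthetic reaction times and parameter values are illustrative only:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        rt = rng.normal(400, 40, 800) + rng.exponential(120, 800)   # synthetic RTs (ms)

        K, loc, scale = stats.exponnorm.fit(rt)
        mu, sigma, tau = loc, scale, K * scale   # map SciPy's (K, loc, scale) to (mu, sigma, tau)
        print(f"mu = {mu:.0f} ms, sigma = {sigma:.0f} ms, tau = {tau:.0f} ms")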

  14. Using simple environmental variables to estimate below-ground productivity in grasslands

    USGS Publications Warehouse

    Gill, R.A.; Kelly, R.H.; Parton, W.J.; Day, K.A.; Jackson, R.B.; Morgan, J.A.; Scurlock, J.M.O.; Tieszen, L.L.; Castle, J.V.; Ojima, D.S.; Zhang, X.S.

    2002-01-01

    In many temperate and annual grasslands, above-ground net primary productivity (NPP) can be estimated by measuring peak above-ground biomass. Estimates of below-ground net primary productivity and, consequently, total net primary productivity, are more difficult. We addressed one of the three main objectives of the Global Primary Productivity Data Initiative for grassland systems: to develop simple models or algorithms to estimate missing components of total system NPP. Any estimate of below-ground NPP (BNPP) requires an accounting of total root biomass, the percentage of living biomass and the annual turnover of live roots. We derived a relationship using above-ground peak biomass and mean annual temperature as predictors of below-ground biomass (r² = 0.54; P = 0.01). The proportion of live material was 0.6, based on published values. We used three different functions to describe root turnover: a constant, a direct function of above-ground biomass, or a positive exponential relationship with mean annual temperature. We tested the various models against a large database of global grassland NPP; the constant-turnover and direct-function models were approximately equally descriptive (r² = 0.31 and 0.37), while the exponential function had a stronger correlation with the measured values (r² = 0.40) and a better fit than the other two models at the productive end of the BNPP gradient. When applied to extensive data we assembled from two grassland sites with reliable estimates of total NPP, the direct function was most effective, especially at lower-productivity sites. We provide some caveats for its use in systems that lie at the extremes of the grassland gradient and stress that there are large uncertainties associated with measured and modelled estimates of BNPP.

  15. Simplified formula for mean cycle-slip time of phase-locked loops with steady-state phase error.

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1972-01-01

    Previous work shows that the mean time from lock to a slipped cycle of a phase-locked loop is given by a certain double integral. Accurate numerical evaluation of this formula for the second-order loop is extremely vexing because the difference between exponentially large quantities is involved. The presented article demonstrates a method in which a much-reduced precision program can be used to obtain the mean first-cycle slip time for a loop of arbitrary degree tracking at a specified SNR and steady-state phase error. It also presents a simple approximate formula that is asymptotically tight at higher loop SNR.

  16. Feasibility of protein turnover studies in prototroph Saccharomyces cerevisiae strains.

    PubMed

    Martin-Perez, Miguel; Villén, Judit

    2015-04-07

    Quantitative proteomics studies of yeast that use metabolic labeling with amino acids rely on auxotrophic mutations of one or more genes in the amino acid biosynthesis pathways. These mutations affect yeast metabolism and preclude the study of some biological processes. Overcoming this limitation, it has recently been shown that proteins in a prototrophic yeast strain can also be metabolically labeled with heavy amino acids. However, the temporal profiles of label incorporation under the different phases of the prototroph's growth have not been examined. Labeling trajectories are important in the study of protein turnover and dynamics, in which label incorporation into proteins is monitored across many time points. Here we monitored protein labeling trajectories for 48 h after a pulse with heavy lysine in a prototrophic yeast strain and compared them with those of a lysine-auxotrophic yeast. Labeling was successful in the prototroph during the exponential growth phase but not in stationary phase. Furthermore, we were able to determine the half-lives of more than 1700 proteins during the exponential phase of growth with high accuracy and reproducibility. We found a median half-life of 2 h in both strains, which corresponds to the cellular doubling time. Nucleolar and ribosomal proteins showed short half-lives, whereas mitochondrial proteins and other energy production enzymes presented longer half-lives. Except for some proteins involved in lysine biosynthesis, we observed a high correlation in protein half-lives between the prototroph and auxotroph strains. Overall, our results demonstrate the feasibility of using prototrophs for proteomic turnover studies and provide a reliable data set of protein half-lives in exponentially growing yeast.

  17. fRMSDPred: Predicting Local RMSD Between Structural Fragments Using Sequence Information

    DTIC Science & Technology

    2007-04-04

    machine learning approaches for estimating the RMSD value of a pair of protein fragments. These estimated fragment-level RMSD values can be used to construct the alignment, assess the quality of an alignment, and identify high-quality alignment segments. We present algorithms to solve this fragment-level RMSD prediction problem using a supervised learning framework based on support vector regression and classification that incorporates protein profiles, predicted secondary structure, effective information encoding schemes, and novel second-order pairwise exponential kernel

  18. A simple model for the estimation of rain-induced attenuation along earth-space paths at millimeter wavelengths

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Dishman, W. K.

    1982-01-01

    A simple attenuation model (SAM) is presented for estimating rain-induced attenuation along an earth-space path. The rain model uses an effective spatial rain distribution which is uniform for low rain rates and which has an exponentially shaped horizontal rain profile for high rain rates. When compared to other models, the SAM performed well in the important region of low percentages of time, and had the lowest percent standard deviation of all percent time values tested.

  19. Modeling the lag period and exponential growth of Listeria monocytogenes under conditions of fluctuating temperature and water activity values.

    PubMed

    Muñoz-Cuevas, Marina; Fernández, Pablo S; George, Susan; Pin, Carmen

    2010-05-01

    The dynamic model for the growth of a bacterial population described by Baranyi and Roberts (J. Baranyi and T. A. Roberts, Int. J. Food Microbiol. 23:277-294, 1994) was applied to model the lag period and exponential growth of Listeria monocytogenes under conditions of fluctuating temperature and water activity (aw) values. To model the duration of the lag phase, the dependence of the parameter h0, which quantifies the amount of work done during the lag period, on the previous and current environmental conditions was determined experimentally. This parameter depended not only on the magnitude of the change between the previous and current environmental conditions but also on the current growth conditions. In an exponentially growing population, any change in the environment requiring a certain amount of work to adapt to the new conditions initiated a lag period that lasted until that work was finished. Observations for several scenarios in which exponential growth was halted by a sudden change in the temperature and/or aw were in good agreement with predictions. When a population already in a lag period was subjected to environmental fluctuations, the system was reset with a new lag phase. The work to be done during the new lag phase was estimated to be the workload due to the environmental change plus the unfinished workload from the uncompleted previous lag phase.
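
    A sketch of the Baranyi-Roberts growth model referenced above, under the common formulation in which a physiological state q(t) generates the lag and h0 = ln(1 + 1/q0) quantifies the work to be done during the lag phase; all parameter values are illustrative:

        import numpy as np
        from scipy.integrate import solve_ivp

        mu_max, ln_xmax, h0 = 0.8, np.log(1e9), 2.0   # illustrative parameters
        q0 = 1.0 / (np.exp(h0) - 1.0)                 # from h0 = ln(1 + 1/q0)

        def baranyi(t, y):
            ln_x, q = y
            alpha = q / (1.0 + q)                                  # adjustment function
            dln_x = mu_max * alpha * (1.0 - np.exp(ln_x - ln_xmax))
            return [dln_x, mu_max * q]                             # q grows at mu_max

        sol = solve_ivp(baranyi, (0.0, 24.0), [np.log(1e3), q0], dense_output=True)
        hours = np.linspace(0.0, 24.0, 7)
        print(np.round(sol.sol(hours)[0] / np.log(10), 2))         # log10 counts; lag ~ h0/mu_max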

  20. Exponential Family Functional data analysis via a low-rank model.

    PubMed

    Li, Gen; Huang, Jianhua Z; Shen, Haipeng

    2018-05-08

    In many applications, non-Gaussian data such as binary or count data are observed over a continuous domain, and there exists a smooth underlying structure describing such data. We develop a new functional data method to deal with this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution, and the matrix of the canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only the standard one-way functional data, but also two-way (or bivariate) functional data. In addition, we introduce a new cross-validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to a real application of the UK mortality study, where data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern. © 2018, The International Biometric Society.

  1. Phytoplankton productivity in relation to light intensity: A simple equation

    USGS Publications Warehouse

    Peterson, D.H.; Perry, M.J.; Bencala, K.E.; Talbot, M.C.

    1987-01-01

    A simple exponential equation is used to describe photosynthetic rate as a function of light intensity for a variety of unicellular algae and higher plants, where photosynthesis is proportional to (1 − e^(−αI)). The parameter α (= Ik⁻¹) is derived by a simultaneous curve-fitting method, where I is incident quantum-flux density. The exponential equation is tested against a wide range of data and is found to adequately describe P vs. I curves. The errors associated with photosynthetic parameters are calculated. A simplified statistical model (Poisson) of photon capture provides a biophysical basis for the equation and for its ability to fit a range of light intensities. The exponential equation provides a non-subjective simultaneous curve-fitting estimate for photosynthetic efficiency (a) which is less ambiguous than subjective methods: subjective methods assume that a linear region of the P vs. I curve is readily identifiable. Photosynthetic parameters α and a are used widely in aquatic studies to define photosynthesis at low quantum flux. These parameters are particularly important in estuarine environments where high suspended-material concentrations and high diffuse-light extinction coefficients are commonly encountered. © 1987.
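
    A sketch of the one-parameter exponential fit described, P = Pmax(1 − e^(−αI)); the light and photosynthesis values below are invented for illustration:

        import numpy as np
        from scipy.optimize import curve_fit

        def p_vs_i(I, p_max, alpha):
            return p_max * (1.0 - np.exp(-alpha * I))

        I = np.array([0, 25, 50, 100, 200, 400, 800], dtype=float)   # quantum-flux density
        P = np.array([0.0, 1.9, 3.4, 5.6, 7.6, 8.8, 9.1])            # photosynthetic rate

        (p_max, alpha), _ = curve_fit(p_vs_i, I, P, p0=(10.0, 0.01))
        print(f"Pmax = {p_max:.2f}, alpha = {alpha:.4f}, Ik = 1/alpha = {1.0/alpha:.0f}")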

  2. A simple approach to measure transmissibility and forecast incidence.

    PubMed

    Nouvellet, Pierre; Cori, Anne; Garske, Tini; Blake, Isobel M; Dorigatti, Ilaria; Hinsley, Wes; Jombart, Thibaut; Mills, Harriet L; Nedjati-Gilani, Gemma; Van Kerkhove, Maria D; Fraser, Christophe; Donnelly, Christl A; Ferguson, Neil M; Riley, Steven

    2018-03-01

    Outbreaks of novel pathogens such as SARS, pandemic influenza and Ebola require substantial investments in reactive interventions, with consequent implementation plans sometimes revised on a weekly basis. Therefore, short-term forecasts of incidence are often of high priority. In light of the recent Ebola epidemic in West Africa, a forecasting exercise was convened by a network of infectious disease modellers. The challenge was to forecast unseen "future" simulated data for four different scenarios at five different time points. Using a method similar to that applied during the recent Ebola epidemic, we estimated current levels of transmissibility over variable time windows chosen in an ad hoc way. The current estimated transmissibility was then used to forecast near-future incidence. We performed well within the challenge and often produced accurate forecasts. A retrospective analysis showed that our subjective method for deciding on the window of time with which to estimate transmissibility often resulted in the optimal choice. However, when near-future trends deviated substantially from exponential patterns, the accuracy of our forecasts was reduced. This exercise highlights the urgent need for infectious disease modellers to develop more robust descriptions of processes - other than the widespread depletion of susceptible individuals - that produce non-exponential patterns of incidence. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
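
    A sketch of the windowed renewal-equation approach described above: transmissibility R is estimated over a recent, ad hoc time window and then used to project incidence forward. The incidence series, serial-interval weights, and window length are all assumptions for illustration:

        import numpy as np

        incidence = [2, 3, 5, 8, 11, 17, 24, 38, 55, 80]   # cases per time step (illustrative)
        w = np.array([0.2, 0.5, 0.2, 0.1])                 # serial-interval pmf (assumed)

        def pressure(inc, t):
            past = np.asarray(inc[max(0, t - len(w)):t][::-1], dtype=float)
            return float(past @ w[:len(past)])             # sum_s I_{t-s} * w_s

        window = 4                                          # ad hoc estimation window
        ts = range(len(incidence) - window, len(incidence))
        R = sum(incidence[t] for t in ts) / sum(pressure(incidence, t) for t in ts)

        forecast = list(incidence)
        for _ in range(5):                                  # project 5 steps ahead
            forecast.append(R * pressure(forecast, len(forecast)))
        print(f"R = {R:.2f}, forecast:", [round(v, 1) for v in forecast[-5:]])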

  3. Stochastic Individual-Based Modeling of Bacterial Growth and Division Using Flow Cytometry.

    PubMed

    García, Míriam R; Vázquez, José A; Teixeira, Isabel G; Alonso, Antonio A

    2017-01-01

    A realistic description of the variability in bacterial growth and division is critical to produce reliable predictions of safety risks along the food chain. Individual-based modeling of bacteria provides the theoretical framework to deal with this variability, but it requires information about the individual behavior of bacteria inside populations. In this work, we overcome this problem by estimating the individual behavior of bacteria from population statistics obtained with flow cytometry. For this objective, a stochastic individual-based modeling framework is defined based on standard assumptions during division and exponential growth. The unknown single-cell parameters required for running the individual-based modeling simulations, such as the cell size growth rate, are estimated from the flow cytometry data. Instead of using the individual-based model directly, we make use of a modified Fokker-Planck equation. This single equation simulates the population statistics as a function of the unknown single-cell parameters. We test the validity of the approach by modeling the growth and division of Pediococcus acidilactici within the exponential phase. The estimations reveal the statistics of cell growth and division using only data from flow cytometry at a given time. From the relationship between the mother and daughter volumes, we also predict that P. acidilactici divide into two successive parallel planes.

  4. Model-free and analytical EAP reconstruction via spherical polar Fourier diffusion MRI.

    PubMed

    Cheng, Jian; Ghosh, Aurobrata; Jiang, Tianzi; Deriche, Rachid

    2010-01-01

    How to estimate the diffusion Ensemble Average Propagator (EAP) from the DWI signals in q-space is an open problem in the diffusion MRI field. Many methods have been proposed to estimate the Orientation Distribution Function (ODF), which is used to describe fiber direction. However, the ODF is just one of the features of the EAP. Compared with the ODF, the EAP has the full information about the diffusion process, reflecting the complex tissue micro-structure. Diffusion Orientation Transform (DOT) and Diffusion Spectrum Imaging (DSI) are two important methods to estimate the EAP from the signal. However, DOT is based on a mono-exponential assumption, and DSI needs many samples and very large b-values. In this paper, we propose Spherical Polar Fourier Imaging (SPFI), a novel model-free, fast, robust, analytical EAP reconstruction method that requires almost no assumptions about the data and relatively few samples. SPFI naturally combines DWI signals with different b-values. It is an analytical linear transformation from the q-space signal to the EAP profile represented by Spherical Harmonics (SH). We validated the proposed method on synthetic data, phantom data, and real data. It works well in all experiments, especially for data with low SNR, low anisotropy, and non-exponential decay.

  5. DOUBLE POWER LAWS IN THE EVENT-INTEGRATED SOLAR ENERGETIC PARTICLE SPECTRUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Lulu; Zhang, Ming; Rassoul, Hamid K., E-mail: lzhao@fit.edu

    2016-04-10

    A double power law or a power law with exponential rollover at a few to tens of MeV nucleon⁻¹ of the event-integrated differential spectra has been reported in many solar energetic particle (SEP) events. The rollover energies per nucleon of different elements correlate with a particle's charge-to-mass ratio (Q/A). The probable causes are suggested as residing in shock finite lifetimes, shock finite sizes, shock geometry, and an adiabatic cooling effect. In this work, we conduct a numerical simulation to investigate a particle's transport process in the inner heliosphere. We solve the focused transport equation using a time-backward Markov stochastic approach. The convection, magnetic focusing, adiabatic cooling effect, and pitch-angle scattering are included. The effects that the interplanetary turbulence imposes on the shape of the resulting SEP spectra are examined. By assuming a pure power-law differential spectrum at the Sun, a perfect double-power-law feature with a break energy ranging from 10 to 120 MeV nucleon⁻¹ is obtained at 1 au. We found that the double power law of the differential energy spectrum is a robust result of SEP interplanetary propagation. It works for many assumptions of interplanetary turbulence spectra that give various forms of momentum dependence of a particle's mean free path. The different spectral shapes at the low-energy and high-energy ends are not just a transition from convection-dominated propagation to diffusion-dominated propagation.

  6. Finite-time singularity signature of hyperinflation

    NASA Astrophysics Data System (ADS)

    Sornette, D.; Takayasu, H.; Zhou, W.-X.

    2003-07-01

    We present a novel analysis extending the recent work of Mizuno et al. (Physica A 308 (2002) 411) on the hyperinflations of Germany (1920/1/1-1923/11/1), Hungary (1945/4/30-1946/7/15), Brazil (1969-1994), Israel (1969-1985), Nicaragua (1969-1991), Peru (1969-1990) and Bolivia (1969-1985). On the basis of a generalization of Cagan's model of inflation based on the mechanism of “inflationary expectation” of positive feedbacks between the realized growth rate and people's expected growth rate, we find that hyperinflations can be characterized by a power law singularity culminating at a critical time tc. Mizuno et al.'s double-exponential function can be seen as a discrete time-step approximation of our more general non-linear ODE formulation of the price dynamics, which exhibits a finite-time singular behavior. This extension of Cagan's model, which makes natural the appearance of a critical time tc, has the advantage of providing a well-defined end of the clearly unsustainable hyperinflation regime. We find an excellent and reliable agreement between theory and data for Germany, Hungary, Peru and Bolivia. For Brazil, Israel and Nicaragua, the super-exponential growth appears to be already significantly contaminated by a cross-over to a stationary regime.

  7. Reciprocal capacitance transients?

    NASA Astrophysics Data System (ADS)

    Gfroerer, Tim; Simov, Peter; Wanlass, Mark

    2007-03-01

    When the reverse bias across a semiconductor diode is changed, charge carriers move to accommodate the appropriate depletion thickness, producing a simultaneous change in the device capacitance. Transient capacitance measurements can reveal inhibited carrier motion due to trapping, where the depth of the trap can be evaluated using the temperature-dependent escape rate. However, when we employ this technique on a GaAs0.72P0.28 n+/p diode (which is a candidate for incorporation in multi-junction solar cells), we observe a highly non-exponential response under a broad range of experimental conditions. Double exponential functions give good fits, but lead to non-physical results: the deduced rates depend on the observation time window, and the fast and slow rates, which presumably correspond to deep and shallow levels, have identical activation energies. Meanwhile, we have discovered a universal linear relationship between the inverse of the capacitance and time. An Arrhenius plot of the slope of the reciprocal transient yields an activation energy of approximately 0.4 eV, independent of the observation window and other experimental conditions. The reciprocal behavior leads us to hypothesize that hopping, rather than escape into high-mobility bands, may govern the transport of trapped holes in this system.

  8. Control of Growth Rate by Initial Substrate Concentration at Values Below Maximum Rate

    PubMed Central

    Gaudy, Anthony F.; Obayashi, Alan; Gaudy, Elizabeth T.

    1971-01-01

    The hyperbolic relationship between specific growth rate, μ, and substrate concentration, proposed by Monod and used since as the basis for the theory of steady-state growth in continuous-flow systems, was tested experimentally in batch cultures. Use of a Flavobacterium sp. exhibiting a high saturation constant for growth in glucose minimal medium allowed direct measurement of growth rate and substrate concentration throughout the growth cycle in medium containing a rate-limiting initial concentration of glucose. Specific growth rates were also measured for a wide range of initial glucose concentrations. A plot of specific growth rate versus initial substrate concentration was found to fit the hyperbolic equation. However, the instantaneous relationship between specific growth rate and substrate concentration during growth, which is stated by the equation, was not observed. Well defined exponential growth phases were developed at initial substrate concentrations below that required for support of the maximum exponential growth rate and a constant doubling time was maintained until 50% of the substrate had been used. It is suggested that the external substrate concentration initially present “sets” the specific growth rate by establishing a steady-state internal concentration of substrate, possibly through control of the number of permeation sites. PMID:5137579
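
    A sketch of the Monod fit tested above, μ = μmax·S/(Ks + S), applied to invented initial-substrate/growth-rate pairs (not the Flavobacterium data):

        import numpy as np
        from scipy.optimize import curve_fit

        def monod(s, mu_max, ks):
            return mu_max * s / (ks + s)

        s0 = np.array([5, 10, 25, 50, 100, 250, 500, 1000], dtype=float)   # mg/L glucose
        mu = np.array([0.08, 0.14, 0.26, 0.37, 0.47, 0.56, 0.60, 0.62])    # 1/h

        (mu_max, ks), _ = curve_fit(monod, s0, mu, p0=(0.7, 100.0))
        print(f"mu_max = {mu_max:.2f} 1/h, Ks = {ks:.0f} mg/L")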

  9. Exponential growth and selection in self-replicating materials from DNA origami rafts

    NASA Astrophysics Data System (ADS)

    He, Xiaojin; Sha, Ruojie; Zhuo, Rebecca; Mi, Yongli; Chaikin, Paul M.; Seeman, Nadrian C.

    2017-10-01

    Self-replication and evolution under selective pressure are inherent phenomena in life, but few artificial systems exhibit these phenomena. We have designed a system of DNA origami rafts that exponentially replicates a seed pattern, doubling the copies in each diurnal-like cycle of temperature and ultraviolet illumination, producing more than 7 million copies in 24 cycles. We demonstrate environmental selection in growing populations by incorporating pH-sensitive binding in two subpopulations. In one species, pH-sensitive triplex DNA bonds enable parent-daughter templating, while in the second species, triplex binding inhibits the formation of duplex DNA templating. At pH 5.3, the replication rate of species I is ~1.3-1.4 times faster than that of species II. At pH 7.8, the replication rates are reversed. When mixed together in the same vial, the progeny of species I take over the system at pH 5.3; similarly, at pH 7.8, the progeny of species II take over the system. This addressable selectivity should be adaptable to the selection and evolution of multi-component self-replicating materials in the nanoscopic-to-microscopic size range.

  10. Inventory control of raw material using silver meal heuristic method in PR. Trubus Alami Malang

    NASA Astrophysics Data System (ADS)

    Ikasari, D. M.; Lestari, E. R.; Prastya, E.

    2018-03-01

    The purpose of this study was to compare the total inventory cost calculated using the method applied by PR. Trubus Alami with that of the Silver Meal Heuristic (SMH) method. The study started by forecasting cigarette demand from July 2016 to June 2017 (48 weeks) using the additive decomposition forecasting method. Additive decomposition was used because it has the lowest Mean Absolute Deviation (MAD) and Mean Squared Deviation (MSD) values compared to other methods such as multiplicative decomposition, moving average, single exponential smoothing, and double exponential smoothing. The forecasting results were then converted into raw material requirements, and the inventory cost was calculated using the SMH method. As expected, the results show that the order frequency using the SMH method was lower than that of the method applied by PR. Trubus Alami, which affected the total inventory cost. The results suggest that using the SMH method gave a 29.41% lower inventory cost, a difference of IDR 21,290,622. The findings therefore indicate that PR. Trubus Alami should apply the SMH method if the company wants to reduce its total inventory cost.
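
    A sketch of the Silver-Meal heuristic used in the study: periods are added to the current order as long as the average cost per period (setup plus holding) keeps decreasing. The demand series and cost values below are placeholders, not the company's data:

        def silver_meal(demand, order_cost, hold_cost):
            orders, t = [], 0
            while t < len(demand):
                best_k, prev_avg = 1, None
                for k in range(1, len(demand) - t + 1):
                    # units for period t+i are held i periods before use
                    hold = sum(hold_cost * i * demand[t + i] for i in range(k))
                    avg = (order_cost + hold) / k
                    if prev_avg is not None and avg > prev_avg:
                        break                                  # average cost started rising
                    best_k, prev_avg = k, avg
                orders.append((t, sum(demand[t:t + best_k])))  # (order period, lot size)
                t += best_k
            return orders

        demand = [80, 100, 125, 100, 50, 50, 100, 125]         # weekly demand, illustrative
        print(silver_meal(demand, order_cost=100.0, hold_cost=1.0))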

  11. [Litter decomposition and nutrient release in Acacia mangium plantations established on degraded soils of Colombia].

    PubMed

    Castellanos-Barliza, Jeiner; León Peláez, Juan Diego

    2011-03-01

    Several factors control decomposition in terrestrial ecosystems, such as humidity, temperature, litter quality and microbial activity. We investigated the effects of rainfall and of soil plowing prior to the establishment of Acacia mangium plantations, using the litterbag technique over a six-month period, in forest plantations in the Bajo Cauca region, Colombia. The annual decomposition constants (k) of the simple exponential model ranged between 1.24 and 1.80, while the k1 and k2 decomposition constants of the double exponential model were 0.88-1.81 and 0.58-7.01, respectively. At the end of the study, the mean residual dry matter (RDM) was 47% of the initial value for the three sites. We found a slow N, Ca and Mg release pattern from the A. mangium leaf litter, whereas phosphorus (P) showed a dominant immobilization phase, suggesting its low availability in soils. Chemical leaf litter quality parameters (e.g., N and P concentrations, C/N and N/P ratios, and phenol content) showed an important influence on decomposition rates. The results of this study indicate that rainfall plays an important role in the decomposition process, but soil plowing does not.

  12. Scattering of acoustic evanescent waves by circular cylinders: Partial wave series solution

    NASA Astrophysics Data System (ADS)

    Marston, Philip L.

    2002-05-01

    Evanescent acoustical waves occur in a variety of situations, such as when sound is incident on a fluid interface beyond the critical angle and when flexural waves on a plate are subsonic with respect to the surrounding fluid. The scattering by circular cylinders at normal incidence was calculated to give insight into how the evanescence of the incident wave affects the scattering. To analyze the scattering, it is necessary to express the incident wave using a modified expansion involving cylindrical functions. For plane evanescent waves, the expansion becomes a double summation with products of modified and ordinary Bessel functions. The resulting modified series is found for the scattering by a fluid cylinder in an unbounded medium. The perfectly soft and rigid cases are also examined. Unlike the case of an ordinary incident wave, the counterpropagating partial waves of the same angular order have unequal magnitudes when the incident wave is evanescent. This is a consequence of the exponential dependence of the incident wave amplitude on the transverse coordinate. The associated exponential dependence of the scattering on the location of a scatterer was previously demonstrated [T. J. Matula and P. L. Marston, J. Acoust. Soc. Am. 93, 1192-1195 (1993)].

  13. Observations, Analysis, and Orbital Calculation of the Visual Double Star STTA 123 AB

    NASA Astrophysics Data System (ADS)

    Brashear, Nicholas; Camama, Angel; Drake, Miles; Smith, Miranda; Johnson, Jolyon; Arnold, Dave; Chamberlain, Rebecca

    2012-04-01

    As part of a research workshop at Pine Mountain Observatory, four students from Evergreen State College met with an instructor and an experienced double star observer to learn the methods used to measure double stars and to contribute observations to the Washington Double Star (WDS) Catalog. The students then observed and analyzed the visual double star STTA 123 AB with few past observations in the WDS Catalog to determine if it is optical or binary in nature. The separation of this double star was found to be 69.9" and its position angle to be 148.0°. Using the spectral types, stellar parallaxes, and proper motion vectors of these two stars, the students determined that this double star is likely physically bound by gravity in a binary system. Johnson calculated a preliminary circular orbit for the system using Newton's version of Kepler's third law. The masses of the two stars were estimated based on their spectral types (F0) to be 1.4 Msun. Their separation was estimated to be 316 AU based on their distance from Earth (about 216.5 light years) and their orbital period was estimated to be 3357 years. Arnold compared the observations made by the students to what would be predicted by the orbit calculation. A discrepancy of 14° was found in the position angle. The authors suggest that the orbit is both eccentric and inclined to our line of sight, making the observed position angle change less than predicted.
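
    The students' period estimate can be reproduced from Newton's form of Kepler's third law, P² = a³/M (P in years, a in AU, M in solar masses); a quick check under the stated assumptions:

        separation_au = 316.0          # separation estimated from distance and angular separation
        m_total = 1.4 + 1.4            # two F0 stars, solar masses

        period_yr = (separation_au**3 / m_total) ** 0.5   # Kepler III: P^2 = a^3 / M
        print(f"circular-orbit period = {period_yr:.0f} years")   # about 3357 yr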

  14. Costs of achieving live birth from assisted reproductive technology: a comparison of sequential single and double embryo transfer approaches.

    PubMed

    Crawford, Sara; Boulet, Sheree L; Mneimneh, Allison S; Perkins, Kiran M; Jamieson, Denise J; Zhang, Yujia; Kissin, Dmitry M

    2016-02-01

    To assess treatment and pregnancy/infant-associated medical costs and birth outcomes for assisted reproductive technology (ART) cycles in a subset of patients using elective double embryo transfer (ET) and to project the difference in costs and outcomes had the cycles instead been sequential single ETs (fresh followed by frozen if the fresh ET did not result in live birth). Retrospective cohort study using 2012 and 2013 data from the National ART Surveillance System. Infertility treatment centers. Fresh, autologous double ETs performed in 2012 among ART patients younger than 35 years of age with no prior ART use who cryopreserved at least one embryo. Sequential single and double ETs. Actual live birth rates and estimated ART treatment and pregnancy/infant-associated medical costs for double ET cycles started in 2012, and projected ART treatment and pregnancy/infant-associated medical costs if the double ET cycles had been performed as sequential single ETs. The estimated total ART treatment and pregnancy/infant-associated medical costs were $580.9 million for 10,001 double ETs started in 2012. If performed as sequential single ETs, estimated costs would have decreased by $195.0 million, to $386.0 million, and live birth rates would have increased from 57.7% to 68.0%. Sequential single ETs, when clinically appropriate, can reduce total ART treatment and pregnancy/infant-associated medical costs by reducing multiple births without lowering live birth rates. Published by Elsevier Inc.

  15. On the use of star-shaped genealogies in inference of coalescence times.

    PubMed Central

    Rosenberg, Noah A; Hirsh, Aaron E

    2003-01-01

    Genealogies from rapidly growing populations have approximate "star" shapes. We study the degree to which this approximation holds in the context of estimating the time to the most recent common ancestor (TMRCA) of a set of lineages. In an exponential growth scenario, we find that unless the product of population size (N) and growth rate (r) is at least approximately 10⁵, the "pairwise comparison estimator" of TMRCA that derives from the star genealogy assumption has a bias of 10-50%. Thus, the estimator is appropriate only for large populations that have grown very rapidly. The "tree-length estimator" of TMRCA is more biased than the pairwise comparison estimator, having low bias only for extremely large values of Nr. PMID:12930771

  16. Exercise modality effect on oxygen uptake off-transient kinetics at maximal oxygen uptake intensity.

    PubMed

    Sousa, Ana; Rodríguez, Ferran A; Machado, Leandro; Vilas-Boas, J Paulo; Fernandes, Ricardo J

    2015-06-01

    What is the central question of this study? Do the mechanical differences between swimming, rowing, running and cycling have a potential effect on the oxygen uptake (V̇O2) off-kinetics after an exercise sustained until exhaustion at 100% of maximal oxygen uptake (V̇O2max) intensity? What is the main finding and its importance? The mechanical differences between exercise modes had a potential effect and contributed to distinct amplitudes of the fast component (higher in running compared with cycling) and time constants (higher in swimming compared with rowing and cycling) in the V̇O2 off-kinetic patterns at 100% of V̇O2max intensity. This suggests that swimmers, unlike rowers and cyclists, would benefit more from a longer duration of training intervals after each set of exercise performed at V̇O2max intensity. The kinetics of oxygen uptake (V̇O2) during recovery (off-transient kinetics) for different exercise modes is largely unexplored, hampering the prescription of training and recovery to enhance performance. The purpose of this study was to compare the V̇O2 off-transient kinetics response between swimmers, rowers, runners and cyclists during their specific mode of exercise at 100% of maximal oxygen uptake (V̇O2max) intensity and to examine the on-off symmetry. Groups of swimmers, rowers, runners and cyclists (n = 8 per group) performed (i) an incremental exercise protocol to assess the velocity or power associated with V̇O2max (vV̇O2max or wV̇O2max, respectively) and (ii) a square-wave exercise transition from rest to vV̇O2max or wV̇O2max sustained until volitional exhaustion. Pulmonary gas exchange parameters were measured using a telemetric portable gas analyser (K4b²; Cosmed, Rome, Italy), and the on- and off-transient kinetics were analysed through a double-exponential approach. For all exercise modes, both transient periods were symmetrical in shape, as both were adequately fitted by a double-exponential model. However, differences were found in the off-kinetic parameters between exercise modes; the amplitude of the fast component of the V̇O2 off-response was higher in running compared with cycling (48 ± 5 and 36 ± 7 ml kg⁻¹ min⁻¹, respectively; P < 0.001), and the time constant of the same phase was higher in swimming compared with rowing and cycling (63 ± 5, 56 ± 5 and 55 ± 3 s, respectively; P < 0.001). Although both phases were well described by a double-exponential model, the differences between exercise modes had a potential effect and contributed to distinct V̇O2 off-transient kinetic patterns at 100% of V̇O2max intensity. © 2015 The Authors. Experimental Physiology © 2015 The Physiological Society.
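
    A sketch of a double-exponential off-transient model of the kind described above: fast and slow components, each with its own amplitude, time delay, and time constant, decaying from the end-exercise V̇O2 toward rest. Parameter values are illustrative, loosely echoing the magnitudes reported:

        import numpy as np

        def vo2_off(t, vo2_end, a1, td1, tau1, a2, td2, tau2):
            # each component starts decaying once its time delay has passed
            fast = a1 * (1.0 - np.exp(-(t - td1) / tau1)) * (t >= td1)
            slow = a2 * (1.0 - np.exp(-(t - td2) / tau2)) * (t >= td2)
            return vo2_end - fast - slow                    # settles at vo2_end - a1 - a2

        t = np.linspace(0.0, 600.0, 601)                    # seconds after exercise stops
        y = vo2_off(t, vo2_end=61.0, a1=48.0, td1=5.0, tau1=63.0,
                    a2=8.0, td2=90.0, tau2=300.0)           # ml/kg/min, illustrative
        print(np.round(y[::120], 1))                        # decay toward resting baseline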

  17. Use and interpretation of logistic regression in habitat-selection studies

    USGS Publications Warehouse

    Keating, Kim A.; Cherry, Steve

    2004-01-01

     Logistic regression is an important tool for wildlife habitat-selection studies, but the method frequently has been misapplied due to an inadequate understanding of the logistic model, its interpretation, and the influence of sampling design. To promote better use of this method, we review its application and interpretation under 3 sampling designs: random, case-control, and use-availability. Logistic regression is appropriate for habitat use-nonuse studies employing random sampling and can be used to directly model the conditional probability of use in such cases. Logistic regression also is appropriate for studies employing case-control sampling designs, but careful attention is required to interpret results correctly. Unless bias can be estimated or probability of use is small for all habitats, results of case-control studies should be interpreted as odds ratios, rather than probability of use or relative probability of use. When data are gathered under a use-availability design, logistic regression can be used to estimate approximate odds ratios if probability of use is small, at least on average. More generally, however, logistic regression is inappropriate for modeling habitat selection in use-availability studies. In particular, using logistic regression to fit the exponential model of Manly et al. (2002:100) does not guarantee maximum-likelihood estimates, valid probabilities, or valid likelihoods. We show that the resource selection function (RSF) commonly used for the exponential model is proportional to a logistic discriminant function. Thus, it may be used to rank habitats with respect to probability of use and to identify important habitat characteristics or their surrogates, but it is not guaranteed to be proportional to probability of use. Other problems associated with the exponential model also are discussed. We describe an alternative model based on Lancaster and Imbens (1996) that offers a method for estimating conditional probability of use in use-availability studies. Although promising, this model fails to converge to a unique solution in some important situations. Further work is needed to obtain a robust method that is broadly applicable to use-availability studies.

  18. Enhanced Tracking of Ballistic Targets Using Forward Looking Infrared Measurements with Active Target Illumination

    DTIC Science & Technology

    1989-12-01

    known a priori or could be estimated in real time. To overcome these disadvantages, Kalman filtering methodology has been incorporated into the...operator G(fx,fv) =F((xy) After centering, the data is incorporated into the template using the exponential smoothing technique of Equation (3-11). It

  19. Facilities Management in Higher Education: Doing More with Less.

    ERIC Educational Resources Information Center

    Casey, John M.

    This analysis looked at higher education facilities management that, despite exponential growth in responsibilities since the 1960s, has seen reduced resources for operations and maintenance. By extrapolating 1988 data from the National Center for Education Statistics, the review estimated that there are now 3.4 billion square feet of higher…

  20. Parameter estimation for the 4-parameter Asymmetric Exponential Power distribution by the method of L-moments using R

    USGS Publications Warehouse

    Asquith, William H.

    2014-01-01

    The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.
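
    An MLM fit begins from the sample L-moments of the data. Below is a self-contained sketch of the direct sample estimator via probability-weighted moments; the study itself works in R, so this standalone Python version is an assumption for illustration only.

```python
import numpy as np
from math import comb

def sample_lmoments(x):
    """Return (l1, l2, t3, t4): L-location, L-scale, L-skew, L-kurtosis."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # Unbiased probability-weighted moments b0..b3 (0-based order statistics).
    b = [sum(comb(j, r) * x[j] for j in range(r, n)) / (n * comb(n - 1, r))
         for r in range(4)]
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3 / l2, l4 / l2

rng = np.random.default_rng(1)
# For an exponential sample, expect t3 near 1/3 and t4 near 1/6.
print(sample_lmoments(rng.exponential(size=1000)))
```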

  1. Stochastic modelling of intermittent fluctuations in the scrape-off layer: Correlations, distributions, level crossings, and moment estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, O. E., E-mail: odd.erik.garcia@uit.no; Kube, R.; Theodorsen, A.

    A stochastic model is presented for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas. The fluctuations in the plasma density are modeled by a superposition of uncorrelated pulses with fixed shape and duration, describing radial motion of blob-like structures. In the case of an exponential pulse shape and exponentially distributed pulse amplitudes, predictions are given for the lowest order moments, probability density function, auto-correlation function, level crossings, and average times for periods spent above and below a given threshold level. Also, the mean squared errors on estimators of sample mean and variance for realizations of the process by finite time series are obtained. These results are discussed in the context of single-point measurements of fluctuations in the scrape-off layer, broad density profiles, and implications for plasma–wall interactions due to the transient transport events in fusion grade plasmas. The results may also have wide applications for modelling fluctuations in other magnetized plasmas such as basic laboratory experiments and ionospheric irregularities.
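
    The pulse superposition described here is a filtered Poisson (shot noise) process that is easy to simulate directly. The sketch below assumes unit-mean exponential amplitudes and a one-sided exponential pulse shape with arbitrary parameter values; for this process the intermittency parameter γ (pulse duration times pulse rate) sets both the mean and the relative fluctuation level.

```python
import numpy as np

rng = np.random.default_rng(7)
duration, dt = 1000.0, 0.01          # record length and sampling step
tau_d, nu = 1.0, 5.0                 # pulse duration and mean pulse rate

t = np.arange(0.0, duration, dt)
n_pulses = rng.poisson(nu * duration)
arrivals = rng.uniform(0.0, duration, n_pulses)
amps = rng.exponential(1.0, n_pulses)            # exponential amplitudes

signal = np.zeros_like(t)
for t0, a in zip(arrivals, amps):
    i = int(t0 / dt)
    signal[i:] += a * np.exp(-t[: t.size - i] / tau_d)   # one-sided pulse

s = signal[t > 10 * tau_d]                       # discard start-up transient
gamma = nu * tau_d                               # intermittency parameter
print(f"mean {s.mean():.2f} (theory {gamma:.2f}), "
      f"std/mean {s.std() / s.mean():.2f} (theory {gamma ** -0.5:.2f})")
```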

  2. Metallurgical Laboratory (MetLab) Treatability Study: An Analysis of Passive Soil Vapor Extraction Wells (PSVE) FY1999 Update

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riha, B.D.

    1999-10-20

    The results to date on the treatability study of the PSVE system at the MetLab of the Savannah River Site (SRS) indicate the technology is performing well. Well concentrations are decreasing, and contour maps of the vadose zone soil gas plume show a decrease in the extent of the plume. In the 18 months of operation, approximately 200 pounds of chlorinated organic contaminants have been removed by natural barometric pumping of wells fitted with BaroBall valves (low pressure check valves). The mass removal estimates are approximate since the flow rates are estimated, the concentration data are based on exponential fits of a limited data set, and the concentration data are normalized to the average CO2. The concentration values presented in this report should be taken as the general trend or order of magnitude of concentration until longer-term data are collected. These trends are of exponentially decreasing concentration, showing the same characteristics as the concentration trends at the SRS Miscellaneous Chemical Basin after three years of PSVE (Riha et al., 1999).

  3. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868
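
    For orientation, the classical continuous-time version of the check works as follows: sum the model's per-bin intensities between spikes, and the rescaled ISIs should be Exp(1) (equivalently, 1 − exp(−τ) should be uniform) if the model is correct. The sketch below simulates spikes from the model itself and runs the KS test; names and bin size are arbitrary assumptions, and it is exactly this naive recipe that the paper's two discrete-time corrections are designed to replace when bins are coarse.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
p = np.full(200_000, 0.02)            # model spike probability per 1 ms bin
spikes = rng.random(p.size) < p       # simulate spikes from the model itself

q = -np.log1p(-p)                     # per-bin integrated intensity
cum = np.cumsum(q)
spike_bins = np.flatnonzero(spikes)
tau = np.diff(cum[spike_bins])        # rescaled interspike intervals
u = 1.0 - np.exp(-tau)                # ~ Uniform(0,1) under a correct model

# With coarser bins (larger p) this naive test starts to reject even for
# the true model, which is the discreteness problem the paper addresses.
ks_stat, p_value = stats.kstest(u, "uniform")
print(f"KS statistic {ks_stat:.4f}, p = {p_value:.3f}")
```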

  4. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.

  5. An efficiency-decay model for Lumen maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobashev, Georgiy; Baldasaro, Nicholas G.; Mills, Karmann C.

    Proposed is a multicomponent model for the estimation of light-emitting diode (LED) lumen maintenance using test data that were acquired in accordance with the test standards of the Illumination Engineering Society of North America, i.e., LM-80-08. Lumen maintenance data acquired with this test do not always follow exponential decay, particularly data collected in the first 1000 h or under low-stress (e.g., low temperature) conditions. This deviation from true exponential behavior makes it difficult to use the full data set in models for the estimation of lumen maintenance decay coefficient. As a result, critical information that is relevant to the early life or low-stress operation of LED light sources may be missed. We present an efficiency-decay model approach, where all lumen maintenance data can be used to provide an alternative estimate of the decay rate constant. The approach considers a combined model wherein one part describes an initial “break-in” period and another part describes the decay in lumen maintenance. During the break-in period, several mechanisms within the LED can act to produce a small (typically < 10%) increase in luminous flux. The effect of the break-in period and its longevity is more likely to be present at low-ambient temperatures and currents, where the discrepancy between a standard TM-21 approach and our proposed model is the largest. For high temperatures and currents, the difference between the estimates becomes nonsubstantial. Finally, our approach makes use of all the collected data and avoids producing unrealistic estimates of the decay coefficient.
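
    One concrete way to realize a combined model of this kind is a multiplicative break-in term on top of an exponential decay. The sketch below assumes the form L(t) = (1 + a·(1 − exp(−t/t_b)))·exp(−k·t), where a is a small break-in amplitude; this functional form and every number in it are assumptions for illustration, not the authors' published model.

```python
import numpy as np
from scipy.optimize import curve_fit

def lumen(t_h, a, t_b, k):
    """Break-in rise (amplitude a, timescale t_b hours) times exponential
    lumen decay with rate constant k per hour."""
    return (1.0 + a * (1.0 - np.exp(-t_h / t_b))) * np.exp(-k * t_h)

t = np.arange(0.0, 10001.0, 500.0)              # LM-80-style test hours
rng = np.random.default_rng(8)
y = lumen(t, 0.05, 900.0, 2.0e-5) + rng.normal(0.0, 0.002, t.size)

popt, _ = curve_fit(lumen, t, y, p0=[0.05, 1000.0, 1e-5])
print(f"decay rate constant k = {popt[2]:.2e} per hour")
```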

  6. Comparison of several maneuvering target tracking models

    NASA Astrophysics Data System (ADS)

    McIntyre, Gregory A.; Hintz, Kenneth J.

    1998-07-01

    The tracking of maneuvering targets is complicated by the fact that acceleration is not directly observable or measurable. Additionally, acceleration can be induced by a variety of sources including human input, autonomous guidance, or atmospheric disturbances. The approaches to tracking maneuvering targets can be divided into two categories both of which assume that the maneuver input command is unknown. One approach is to model the maneuver as a random process. The other approach assumes that the maneuver is not random and that it is either detected or estimated in real time. The random process models generally assume one of two statistical properties, either white noise or an autocorrelated noise. The multiple-model approach is generally used with the white noise model while a zero-mean, exponentially correlated acceleration approach is used with the autocorrelated noise model. The nonrandom approach uses maneuver detection to correct the state estimate or a variable dimension filter to augment the state estimate with an extra state component during a detected maneuver. Another issue with the tracking of maneuvering targets is whether to perform the Kalman filter in Polar or Cartesian coordinates. This paper will examine and compare several exponentially correlated acceleration approaches in both Polar and Cartesian coordinates for accuracy and computational complexity. They include the Singer model in both Polar and Cartesian coordinates, the Singer model in Polar coordinates converted to Cartesian coordinates, Helferty's third order rational approximation of the Singer model and the Bar-Shalom and Fortmann model. This paper shows that these models all provide very accurate position estimates with only minor differences in velocity estimates and compares the computational complexity of the models.

  7. An efficiency-decay model for Lumen maintenance

    DOE PAGES

    Bobashev, Georgiy; Baldasaro, Nicholas G.; Mills, Karmann C.; ...

    2016-08-25

    Proposed is a multicomponent model for the estimation of light-emitting diode (LED) lumen maintenance using test data that were acquired in accordance with the test standards of the Illumination Engineering Society of North America, i.e., LM-80-08. Lumen maintenance data acquired with this test do not always follow exponential decay, particularly data collected in the first 1000 h or under low-stress (e.g., low temperature) conditions. This deviation from true exponential behavior makes it difficult to use the full data set in models for the estimation of lumen maintenance decay coefficient. As a result, critical information that is relevant to the early life or low-stress operation of LED light sources may be missed. We present an efficiency-decay model approach, where all lumen maintenance data can be used to provide an alternative estimate of the decay rate constant. The approach considers a combined model wherein one part describes an initial “break-in” period and another part describes the decay in lumen maintenance. During the break-in period, several mechanisms within the LED can act to produce a small (typically < 10%) increase in luminous flux. The effect of the break-in period and its longevity is more likely to be present at low-ambient temperatures and currents, where the discrepancy between a standard TM-21 approach and our proposed model is the largest. For high temperatures and currents, the difference between the estimates becomes nonsubstantial. Finally, our approach makes use of all the collected data and avoids producing unrealistic estimates of the decay coefficient.

  8. Estimating initial contaminant mass based on fitting mass-depletion functions to contaminant mass discharge data: Testing method efficacy with SVE operations data

    NASA Astrophysics Data System (ADS)

    Mainhagu, J.; Brusseau, M. L.

    2016-09-01

    The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
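
    A minimal sketch of the exponential variant of this idea: if cumulative mass discharge follows M(t) = M0·(1 − exp(−k·t)), the fitted asymptote M0 is the initial-mass estimate, and refitting on only the first third of the record mimics the early-time analysis described above. The exponential form is one of the two functions named in the abstract; the data below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def cum_mass(t, m0, k):
    """Cumulative mass removed under exponential depletion."""
    return m0 * (1.0 - np.exp(-k * t))

t = np.arange(0.0, 37.0)                       # months of SVE operation
rng = np.random.default_rng(3)
data = cum_mass(t, 1200.0, 0.12) * (1.0 + rng.normal(0.0, 0.03, t.size))

popt, _ = curve_fit(cum_mass, t, data, p0=[data[-1], 0.1])
print(f"full-record initial-mass estimate: {popt[0]:.0f} kg")

n = t.size // 3                                # early-time (first third) refit
popt_early, _ = curve_fit(cum_mass, t[:n], data[:n], p0=[2 * data[n - 1], 0.1])
print(f"early-time initial-mass estimate:  {popt_early[0]:.0f} kg")
```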

  9. Shallow seismic source parameter determination using intermediate-period surface wave amplitude spectra

    NASA Astrophysics Data System (ADS)

    Fox, Benjamin D.; Selby, Neil D.; Heyburn, Ross; Woodhouse, John H.

    2012-09-01

    Estimating reliable depths for shallow seismic sources is important in both seismo-tectonic studies and in seismic discrimination studies. Surface wave excitation is sensitive to source depth, especially at intermediate and short periods, owing to the approximate exponential decay of surface wave displacements with depth. A new method is presented here to retrieve earthquake source parameters from regional and teleseismic intermediate period (100-15 s) fundamental-mode surface wave recordings. This method makes use of advances in mapping global dispersion to allow higher frequency surface wave recordings at regional and teleseismic distances to be used with more confidence than in previous studies and hence improve the resolution of depth estimates. Synthetic amplitude spectra are generated using surface wave theory combined with a great circle path approximation, and a grid of double-couple sources is compared with the data. Source parameters producing the best-fitting amplitude spectra are identified by minimizing the least-squares misfit in logarithmic amplitude space. The F-test is used to search the solution space for statistically acceptable parameters, and the ranges of these variables are used to place constraints on the best-fitting source. Estimates of focal mechanism, depth and scalar seismic moment are determined for 20 small to moderate sized (4.3 ≤ Mw ≤ 6.4) earthquakes. These earthquakes are situated across a wide range of geographic and tectonic locations and describe a range of faulting styles over the depth range 4-29 km. For the larger earthquakes, comparisons with other studies are favourable; however, existing source determination procedures, such as the CMT technique, cannot be performed for the smaller events. By reducing the magnitude threshold at which robust source parameters can be determined, the accuracy, especially at shallow depths, of seismo-tectonic studies, seismic hazard assessments and seismic discrimination investigations can be improved by the application of this methodology.

  10. Adaptation in the auditory midbrain of the barn owl (Tyto alba) induced by tonal double stimulation.

    PubMed

    Singheiser, Martin; Ferger, Roland; von Campenhausen, Mark; Wagner, Hermann

    2012-02-01

    During hunting, the barn owl typically listens to several successive sounds as generated, for example, by rustling mice. As auditory cells exhibit adaptive coding, the earlier stimuli may influence the detection of the later stimuli. This situation was mimicked with two double-stimulus paradigms, and adaptation was investigated in neurons of the barn owl's central nucleus of the inferior colliculus. Each double-stimulus paradigm consisted of a first or reference stimulus and a second stimulus (probe). In one paradigm (second level tuning), the probe level was varied, whereas in the other paradigm (inter-stimulus interval tuning), the stimulus interval between the first and second stimulus was changed systematically. Neurons were stimulated with monaural pure tones at the best frequency, while the response was recorded extracellularly. The responses to the probe were significantly reduced when the reference stimulus and probe had the same level and the inter-stimulus interval was short. This indicated response adaptation, which could be compensated for by an increase of the probe level of 5-7 dB over the reference level, if the latter was in the lower half of the dynamic range of a neuron's rate-level function. Recovery from adaptation could be best fitted with a double exponential showing a fast (1.25 ms) and a slow (800 ms) component. These results suggest that neurons in the auditory system show dynamic coding properties to tonal double stimulation that might be relevant for faithful upstream signal propagation. Furthermore, the overall stimulus level of the masker also seems to affect the recovery capabilities of auditory neurons. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  11. Individuality and universality in the growth-division laws of single E. coli cells

    NASA Astrophysics Data System (ADS)

    Kennard, Andrew S.; Osella, Matteo; Javer, Avelino; Grilli, Jacopo; Nghe, Philippe; Tans, Sander J.; Cicuta, Pietro; Cosentino Lagomarsino, Marco

    2016-01-01

    The mean size of exponentially dividing Escherichia coli cells in different nutrient conditions is known to depend on the mean growth rate only. However, the joint fluctuations relating cell size, doubling time, and individual growth rate are only starting to be characterized. Recent studies in bacteria reported a universal trend where the spread in both size and doubling times is a linear function of the population means of these variables. Here we combine experiments and theory and use scaling concepts to elucidate the constraints posed by the second observation on the division control mechanism and on the joint fluctuations of sizes and doubling times. We found that scaling relations based on the means collapse both size and doubling-time distributions across different conditions and explain how the shape of their joint fluctuations deviates from the means. Our data on these joint fluctuations highlight the importance of cell individuality: Single cells do not follow the dependence observed for the means between size and either growth rate or inverse doubling time. Our calculations show that these results emerge from a broad class of division control mechanisms requiring a certain scaling form of the "division hazard rate function," which defines the probability rate of dividing as a function of measurable parameters. This "model free" approach gives a rationale for the universal body-size distributions observed in microbial ecosystems across many microbial species, presumably dividing with multiple mechanisms. Additionally, our experiments show a crossover between fast and slow growth in the relation between individual-cell growth rate and division time, which can be understood in terms of different regimes of genome replication control.

  12. H∞ state estimation for discrete-time memristive recurrent neural networks with stochastic time-delays

    NASA Astrophysics Data System (ADS)

    Liu, Hongjian; Wang, Zidong; Shen, Bo; Alsaadi, Fuad E.

    2016-07-01

    This paper deals with the robust H∞ state estimation problem for a class of memristive recurrent neural networks with stochastic time-delays. The stochastic time-delays under consideration are governed by a Bernoulli-distributed stochastic sequence. The purpose of the addressed problem is to design the robust state estimator such that the dynamics of the estimation error is exponentially stable in the mean square, and the prescribed H∞ performance constraint is met. By utilizing the difference inclusion theory and choosing a proper Lyapunov-Krasovskii functional, the existence condition of the desired estimator is derived. Based on it, the explicit expression of the estimator gain is given in terms of the solution to a linear matrix inequality. Finally, a numerical example is employed to demonstrate the effectiveness and applicability of the proposed estimation approach.

  13. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    PubMed

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. We propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality, and compare double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.

  14. Gas production in the Barnett Shale obeys a simple scaling theory

    PubMed Central

    Patzek, Tad W.; Male, Frank; Marder, Michael

    2013-01-01

    Natural gas from tight shale formations will provide the United States with a major source of energy over the next several decades. Estimates of gas production from these formations have mainly relied on formulas designed for wells with a different geometry. We consider the simplest model of gas production consistent with the basic physics and geometry of the extraction process. In principle, solutions of the model depend upon many parameters, but in practice and within a given gas field, all but two can be fixed at typical values, leading to a nonlinear diffusion problem we solve exactly with a scaling curve. The scaling curve production rate declines as 1 over the square root of time early on, and it later declines exponentially. This simple model provides a surprisingly accurate description of gas extraction from 8,294 wells in the United States’ oldest shale play, the Barnett Shale. There is good agreement with the scaling theory for 2,057 horizontal wells in which production started to decline exponentially in less than 10 y. The remaining 6,237 horizontal wells in our analysis are too young for us to predict when exponential decline will set in, but the model can nevertheless be used to establish lower and upper bounds on well lifetime. Finally, we obtain upper and lower bounds on the gas that will be produced by the wells in our sample, individually and in total. The estimated ultimate recovery from our sample of 8,294 wells is between 10 and 20 trillion standard cubic feet. PMID:24248376
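
    The shape of the scaling curve can be illustrated with a toy rate function: a t^(-1/2) decline up to an interference time t_i, then an exponential decline. The continuity matching and the post-t_i decay constant below are illustrative assumptions only; the paper derives the exact curve from a nonlinear diffusion problem.

```python
import numpy as np

def rate(t_years, q1, t_i):
    """Toy scaling curve: q1/sqrt(t) before the interference time t_i, then
    a continuous exponential tail (decay scale t_i is an assumption)."""
    early = q1 / np.sqrt(t_years)
    late = q1 / np.sqrt(t_i) * np.exp(-(t_years - t_i) / t_i)
    return np.where(t_years < t_i, early, late)

t = np.linspace(0.01, 30.0, 3000)
q = rate(t, 1.0, 8.0)                                  # arbitrary units
eur = np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(t))      # trapezoidal cumulative
print(f"toy estimated ultimate recovery: {eur:.2f} rate-years")
```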

  15. Effects of Economy Type and Nicotine on the Essential Value of Food in Rats

    PubMed Central

    Cassidy, Rachel N; Dallery, Jesse

    2012-01-01

    The exponential demand equation proposed by Hursh and Silberberg (2008) provides an estimate of the essential value of a good as a function of price. The model predicts that essential value should remain constant across changes in the magnitude of a reinforcer, but may change as a function of motivational operations. In Experiment 1, rats' demand for food across a sequence of fixed-ratio schedules was assessed during open and closed economy conditions and across one- and two-pellet per reinforcer delivery conditions. The exponential equation was fitted to the relation between fixed-ratio size and the logarithm of the absolute number of reinforcers. Estimates of the rate of change in elasticity of food, the proposed measure of essential value, were compared across conditions. Essential value was equivalent across magnitudes during the closed economy, but showed a slight decrease across magnitudes during the open economy. Experiment 2 explored the behavioral mechanisms of nicotine's effects on consumption with the results from Experiment 1 serving as a within-subject frame of reference. The same subjects were administered nicotine via subcutaneously implanted osmotic minipumps at a dose of 3 mg/kg/day and exposed to both the one- and two-pellet conditions under a closed economy. Although nicotine produced large decreases in demand, essential value was not significantly changed. The data from the present experiments provide further evidence for the adequacy of the exponential demand equation as a tool for quantifying the rate of change in elasticity of a good and for assessing behavioral mechanisms of drug action. PMID:22389525
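
    The Hursh and Silberberg (2008) equation has the closed form log10 Q = log10 Q0 + k·(exp(−α·Q0·C) − 1), with C the price (fixed-ratio requirement), Q0 demand at zero price, k a log-range constant, and α the rate of change in elasticity (the essential-value parameter: smaller α, more essential). Below is a fitting sketch on synthetic data; all values are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

K = 3.0                                           # log-range constant, fixed

def log_demand(c, log_q0, alpha):
    """Hursh & Silberberg (2008) exponential demand in log10 units."""
    q0 = 10.0 ** log_q0
    return log_q0 + K * (np.exp(-alpha * q0 * c) - 1.0)

fr = np.array([1, 3, 10, 30, 100, 300], dtype=float)  # fixed-ratio "prices"
rng = np.random.default_rng(4)
log_q = log_demand(fr, 2.0, 3e-4) + rng.normal(0.0, 0.05, fr.size)

popt, _ = curve_fit(log_demand, fr, log_q, p0=[2.0, 1e-3])
print(f"Q0 = {10 ** popt[0]:.0f} reinforcers, alpha = {popt[1]:.2e}")
```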

  16. Effects of Ethanol and Other Alkanols on Transport of Acetic Acid in Saccharomyces cerevisiae

    PubMed Central

    Casal, Margarida; Cardoso, Helena; Leão, Cecília

    1998-01-01

    In glucose-grown cells of Saccharomyces cerevisiae IGC 4072, acetic acid enters only by simple diffusion of the undissociated acid. In these cells, ethanol and other alkanols enhanced the passive influx of labelled acetic acid. The influx of the acid followed first-order kinetics with a rate constant that increased exponentially with the alcohol concentration, and an exponential enhancement constant for each alkanol was estimated. The intracellular concentration of labelled acetic acid was also enhanced by alkanols, and the effect increased exponentially with alcohol concentration. Acetic acid is transported across the plasma membrane of acetic acid-, lactic acid-, and ethanol-grown cells by acetate-proton symports. We found that in these cells ethanol and butanol inhibited the transport of labelled acetic acid in a noncompetitive way; the maximum transport velocity decreased with alcohol concentration, while the affinity of the system for acetate was not significantly affected by the alcohol. Semilog plots of Vmax versus alcohol concentration yielded straight lines with negative slopes from which estimates of the inhibition constant for each alkanol could be obtained. The intracellular concentration of labelled acid was significantly reduced in the presence of ethanol or butanol, and the effect increased with the alcohol concentration. We postulate that the absence of an operational carrier for acetate in glucose-grown cells of S. cerevisiae, combined with the relatively high permeability of the plasma membrane for the undissociated acid and the inability of the organism to metabolize acetic acid, could be one of the reasons why this species exhibits low tolerance to acidic environments containing ethanol. PMID:9464405

  17. Effects of economy type and nicotine on the essential value of food in rats.

    PubMed

    Cassidy, Rachel N; Dallery, Jesse

    2012-03-01

    The exponential demand equation proposed by Hursh and Silberberg (2008) provides an estimate of the essential value of a good as a function of price. The model predicts that essential value should remain constant across changes in the magnitude of a reinforcer, but may change as a function of motivational operations. In Experiment 1, rats' demand for food across a sequence of fixed-ratio schedules was assessed during open and closed economy conditions and across one- and two-pellet per reinforcer delivery conditions. The exponential equation was fitted to the relation between fixed-ratio size and the logarithm of the absolute number of reinforcers. Estimates of the rate of change in elasticity of food, the proposed measure of essential value, were compared across conditions. Essential value was equivalent across magnitudes during the closed economy, but showed a slight decrease across magnitudes during the open economy. Experiment 2 explored the behavioral mechanisms of nicotine's effects on consumption with the results from Experiment 1 serving as a within-subject frame of reference. The same subjects were administered nicotine via subcutaneously implanted osmotic minipumps at a dose of 3 mg/kg/day and exposed to both the one- and two-pellet conditions under a closed economy. Although nicotine produced large decreases in demand, essential value was not significantly changed. The data from the present experiments provide further evidence for the adequacy of the exponential demand equation as a tool for quantifying the rate of change in elasticity of a good and for assessing behavioral mechanisms of drug action.

  18. Gas production in the Barnett Shale obeys a simple scaling theory.

    PubMed

    Patzek, Tad W; Male, Frank; Marder, Michael

    2013-12-03

    Natural gas from tight shale formations will provide the United States with a major source of energy over the next several decades. Estimates of gas production from these formations have mainly relied on formulas designed for wells with a different geometry. We consider the simplest model of gas production consistent with the basic physics and geometry of the extraction process. In principle, solutions of the model depend upon many parameters, but in practice and within a given gas field, all but two can be fixed at typical values, leading to a nonlinear diffusion problem we solve exactly with a scaling curve. The scaling curve production rate declines as 1 over the square root of time early on, and it later declines exponentially. This simple model provides a surprisingly accurate description of gas extraction from 8,294 wells in the United States' oldest shale play, the Barnett Shale. There is good agreement with the scaling theory for 2,057 horizontal wells in which production started to decline exponentially in less than 10 y. The remaining 6,237 horizontal wells in our analysis are too young for us to predict when exponential decline will set in, but the model can nevertheless be used to establish lower and upper bounds on well lifetime. Finally, we obtain upper and lower bounds on the gas that will be produced by the wells in our sample, individually and in total. The estimated ultimate recovery from our sample of 8,294 wells is between 10 and 20 trillion standard cubic feet.

  19. Discharging dynamics in an electrolytic cell

    NASA Astrophysics Data System (ADS)

    Feicht, Sarah E.; Frankel, Alexandra E.; Khair, Aditya S.

    2016-07-01

    We analyze the dynamics of a discharging electrolytic cell comprised of a binary symmetric electrolyte between two planar, parallel blocking electrodes. When a voltage is initially applied, ions in the electrolyte migrate towards the electrodes, forming electrical double layers. After the system reaches steady state and the external current decays to zero, the applied voltage is switched off and the cell discharges, with the ions eventually returning to a uniform spatial concentration. At voltages on the order of the thermal voltage VT = kBT/q ≃ 25 mV, where kB is Boltzmann's constant, T is temperature, and q is the charge of a proton, experiments on surfactant-doped nonpolar fluids observe that the temporal evolution of the external current during charging and discharging is not symmetric [V. Novotny and M. A. Hopper, J. Electrochem. Soc. 126, 925 (1979), 10.1149/1.2129195; P. Kornilovitch and Y. Jeon, J. Appl. Phys. 109, 064509 (2011), 10.1063/1.3554445]. In fact, at sufficiently large voltages (several VT), the current during discharging is no longer monotonic: it displays a "reverse peak" before decaying in magnitude to zero. We analyze the dynamics of discharging by solving the Poisson-Nernst-Planck equations governing ion transport via asymptotic and numerical techniques in three regimes. First, in the "linear regime" when the applied voltage V is formally much less than VT, the charging and discharging currents are antisymmetric in time; however, the potential and charge density profiles during charging and discharging are asymmetric. The current evolution is on the RC timescale of the cell, λD L/D, where L is the width of the cell, D is the diffusivity of ions, and λD is the Debye length. Second, in the (experimentally relevant) thin-double-layer limit ε = λD/L ≪ 1, there is a "weakly nonlinear" regime defined by VT ≲ V ≲ VT ln(1/ε), where the bulk salt concentration is uniform; thus the RC timescale of the evolution of the current magnitude persists. However, the nonlinear, voltage-dependent capacitance of the double layer is responsible for a break in temporal antisymmetry of the charging and discharging currents. Third, the reverse peak in the discharging current develops in a "strongly nonlinear" regime V ≳ VT ln(1/ε), driven by neutral salt adsorption into the double layers and consequent bulk depletion during charging. The strongly nonlinear regime features current evolution over three timescales. The current decays in magnitude on the double layer relaxation timescale, λD²/D; it then grows exponentially in time towards the reverse peak on the diffusion timescale, L²/D, indicating that the reverse peak is the result of fast diffusion of ions from the double layers to the bulk. Following the reverse peak, the current decays exponentially to zero on the RC timescale. Notably, the current at the reverse peak and the time of the reverse peak saturate at large voltages V ≫ VT ln(1/ε). We provide semi-analytic expressions for the saturated reverse peak time and current, which can be used to infer charge carrier diffusivity and concentration from experiments.

  20. Is the shape of the decline in risk following quitting smoking similar for squamous cell carcinoma and adenocarcinoma of the lung? A quantitative review using the negative exponential model.

    PubMed

    Fry, John S; Lee, Peter N; Forey, Barbara A; Coombs, Katharine J

    2015-06-01

    One possible contributor to the reported rise in the ratio of adenocarcinoma to squamous cell carcinoma of the lung may be differences in the pattern of decline in risk following quitting for the two lung cancer types. Earlier, using data from 85 studies comparing overall lung cancer risks in current smokers, quitters (by time quit) and never smokers, we fitted the negative exponential model, deriving an estimate of 9.93 years for the half-life - the time when the excess risk for quitters compared to never smokers becomes half that for continuing smokers. Here we applied the same techniques to data from 16 studies providing RRs specific for lung cancer type. From the 13 studies where the half-life was estimable for each type, we derived estimates of 11.68 years (95% CI 10.22-13.34) for squamous cell carcinoma and 14.45 years (11.92-17.52) for adenocarcinoma. The ratio of the half-lives was estimated as 1.32 (95% CI 1.20-1.46, p<0.001). The slower decline in quitters for adenocarcinoma, evident in subgroups by sex, age and other factors, may be one of the factors contributing to the reported rise in the ratio of adenocarcinoma to squamous cell carcinoma. Others include changes in the diagnosis and classification of lung cancer. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
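
    In code, the model is essentially a one-liner: the excess relative risk remaining t years after quitting is exp(−ln 2 · t/H) for half-life H. The sketch below plugs in the two reported half-lives; note that the paper's ratio estimate of 1.32 is a paired within-study quantity, so it need not equal the simple ratio of the pooled values.

```python
import numpy as np

def excess_risk_fraction(t_years, half_life):
    """Fraction of a continuing smoker's excess risk remaining t years
    after quitting, under the negative exponential model."""
    return np.exp(-np.log(2.0) * t_years / half_life)

h_squam, h_adeno = 11.68, 14.45        # reported half-lives (years)
for t in (5, 10, 20):
    fs = excess_risk_fraction(t, h_squam)
    fa = excess_risk_fraction(t, h_adeno)
    print(f"{t:2d} y after quitting: squamous {fs:.2f}, adeno {fa:.2f}")

# The paper's 1.32 is a paired within-study estimate, not this simple ratio.
print(f"pooled half-life ratio: {h_adeno / h_squam:.2f}")
```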

  1. Indirect nitrous oxide emissions from streams within the US Corn Belt scale with stream order

    PubMed Central

    Turner, Peter A.; Griffis, Timothy J.; Lee, Xuhui; Baker, John M.; Venterea, Rodney T.; Wood, Jeffrey D.

    2015-01-01

    N2O is an important greenhouse gas and the primary stratospheric ozone depleting substance. Its deleterious effects on the environment have prompted appeals to regulate emissions from agriculture, which represents the primary anthropogenic source in the global N2O budget. Successful implementation of mitigation strategies requires robust bottom-up inventories that are based on emission factors (EFs), simulation models, or a combination of the two. Top-down emission estimates, based on tall-tower and aircraft observations, indicate that bottom-up inventories severely underestimate regional and continental scale N2O emissions, implying that EFs may be biased low. Here, we measured N2O emissions from streams within the US Corn Belt using a chamber-based approach and analyzed the data as a function of Strahler stream order (S). N2O fluxes from headwater streams often exceeded 29 nmol N2O-N m−2⋅s−1 and decreased exponentially as a function of S. This relation was used to scale up riverine emissions and to assess the differences between bottom-up and top-down emission inventories at the local to regional scale. We found that the Intergovernmental Panel on Climate Change (IPCC) indirect EF for rivers (EF5r) is underestimated up to ninefold in southern Minnesota, which translates to a total tier 1 agricultural underestimation of N2O emissions by 40%. We show that accounting for zero-order streams as potential N2O hotspots can more than double the agricultural budget. Applying the same analysis to the US Corn Belt demonstrates that the IPCC EF5r underestimation explains the large differences observed between top-down and bottom-up emission estimates. PMID:26216994
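
    Schematically, the scale-up amounts to fitting F(S) = F0·exp(−b·S) to flux versus Strahler order and then weighting by the stream surface area in each order class. All numbers in the sketch below are invented; only the exponential form and the use of stream order come from the study.

```python
import numpy as np

orders = np.arange(1, 7)                             # Strahler order S
flux = np.array([29.0, 11.0, 4.2, 1.6, 0.6, 0.25])   # nmol N2O-N m^-2 s^-1

slope, intercept = np.polyfit(orders, np.log(flux), 1)
f0, b = np.exp(intercept), -slope
print(f"decay per order b = {b:.2f}")

# Weight modeled fluxes by (invented) water surface area in each order class.
area_m2 = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 14.0]) * 1e6
total = np.sum(f0 * np.exp(-b * orders) * area_m2)   # nmol N2O-N s^-1
print(f"scaled riverine emission: {total:.3g} nmol N2O-N s^-1")
```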

  2. The Effect of Random Error on Diagnostic Accuracy Illustrated with the Anthropometric Diagnosis of Malnutrition

    PubMed Central

    2016-01-01

    Background: It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results: A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the populations and showed that there was an increase in the standard deviation with each of the errors that became exponentially greater with the magnitude of the error. The potential magnitudes of the resulting errors in reported prevalence of malnutrition were compared with published international data and found to be of sufficient magnitude to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions: The effect of random error in public health surveys and the data upon which diagnostic cut-off points are derived to define "health" has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments, measurer selection, training and supervision, routine estimation of the likely magnitude of errors using standardization tests, use of statistical likelihood of error to exclude data from analysis, and full reporting of these procedures in order to judge the reliability of survey reports. PMID:28030627
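
    The mechanism is easy to reproduce in a few lines: add zero-mean noise to true z-scores and watch the prevalence below a fixed cut-off inflate. The normal distributions and the −2 SD cut-off below are standard anthropometric conventions assumed for illustration, not the paper's exact simulation.

```python
import numpy as np

rng = np.random.default_rng(5)
true_z = rng.normal(0.0, 1.0, 1_000_000)   # true weight-for-height z-scores
cutoff = -2.0
print(f"true prevalence:  {np.mean(true_z < cutoff):.3%}")

for err_sd in (0.2, 0.4, 0.6):             # increasing measurement error
    observed = true_z + rng.normal(0.0, err_sd, true_z.size)
    print(f"error SD {err_sd}: observed prevalence "
          f"{np.mean(observed < cutoff):.3%}")
```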

  3. Analyzing Double Delays at Newark Liberty International Airport

    NASA Technical Reports Server (NTRS)

    Evans, Antony D.; Lee, Paul

    2016-01-01

    When weather or congestion impacts the National Airspace System, multiple different Traffic Management Initiatives can be implemented, sometimes with unintended consequences. One particular inefficiency that is commonly identified is in the interaction between Ground Delay Programs (GDPs) and time based metering of internal departures, or TMA scheduling. Internal departures under TMA scheduling can take large GDP delays, followed by large TMA scheduling delays, because they cannot be easily fitted into the overhead stream. In this paper we examine the causes of these double delays through an analysis of arrival operations at Newark Liberty International Airport (EWR) from June to August 2010. Depending on how the double delay is defined, between 0.3 percent and 0.8 percent of arrivals at EWR experienced double delays in this period. However, this represents between 21 percent and 62 percent of all internal departures in GDP and TMA scheduling. A deep dive into the data reveals two causes of high internal departure scheduling delays: upstream flights making up time between their estimated departure clearance times (EDCTs) and entry into time based metering, which undermines the sequencing and spacing underlying the flight EDCTs, and high demand on TMA, when TMA airborne metering delays are high. Data mining methods, currently including logistic regression, support vector machines and K-nearest neighbors, are used to predict the occurrence of double delays and high internal departure scheduling delays with accuracies up to 0.68. So far, the key indicators of double delay and high internal departure scheduling delay are TMA virtual runway queue size and the degree to which estimated runway demand based on TMA estimated times of arrival has changed relative to the estimated runway demand based on EDCTs. However, more analysis is needed to confirm this.

  4. SU-E-T-86: A Systematic Method for GammaKnife SRS Fetal Dose Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geneser, S; Paulsson, A; Sneed, P

    Purpose: Estimating fetal dose is critical to the decision-making process when radiation treatment is indicated during pregnancy. Fetal doses less than 5 cGy confer no measurable non-cancer developmental risks but can produce a threefold increase in developing childhood cancer. In this study, we estimate fetal dose for a patient receiving Gamma Knife stereotactic radiosurgery (GKSRS) treatment and develop a method to estimate dose directly from plan details. Methods: A patient underwent GKSRS on a Perfexion unit for eight brain metastases (two infratentorial and one brainstem). Dose measurements were performed using a CC13, head phantom, and solid water. Superficial doses to the thyroid, sternum, and pelvis were measured using MOSFETs during treatment. Because the fetal dose was too low to accurately measure, we obtained measurements proximally to the isocenter, fitted them to an exponential function, and extrapolated dose to the fundus of the uterus, uterine midpoint, and pubic symphysis for both the preliminary and delivered plans. Results: The R-squared fit for the delivered doses was 0.995. The estimated fetal doses for the 72 minute preliminary and 138 minute delivered plans range from 0.0014 to 0.028 cGy and 0.07 to 0.38 cGy, respectively. MOSFET readings during treatment were just above background for the thyroid and negligible for all inferior positions. The method for estimating fetal dose from plan shot information was within 0.2 cGy of the measured values at 14 cm cranial to the fetal location. Conclusion: Estimated fetal doses for both the preliminary and delivered plan were well below the 5 cGy recommended limit. Due to Perfexion shielding, internal dose is primarily governed by attenuation and drops off exponentially. This is the first work that reports fetal dose for a GK Perfexion unit. Although multiple lesions were treated and the duration of treatment was long, the estimated fetal dose remained very low.
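
    The extrapolation step is a two-parameter exponential fit, D(x) = A·exp(−b·x), evaluated at the fetal positions. The sketch below uses invented distances and readings purely to show the mechanics; nothing in it reproduces the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose(x_cm, a, b):
    """Exponential fall-off of dose with distance from isocenter."""
    return a * np.exp(-b * x_cm)

x = np.array([10, 15, 20, 25, 30, 35], dtype=float)  # cm from isocenter
d = np.array([3.1, 1.5, 0.74, 0.36, 0.18, 0.088])    # measured dose (cGy)

popt, _ = curve_fit(dose, x, d, p0=[10.0, 0.1])
for label, dist in [("fundus", 45.0), ("midpoint", 55.0), ("pubis", 65.0)]:
    print(f"{label:9s} {dose(dist, *popt):.4f} cGy")  # all well under 5 cGy
```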

  5. The time-course of protection of the RTS,S vaccine against malaria infections and clinical disease.

    PubMed

    Penny, Melissa A; Pemberton-Ross, Peter; Smith, Thomas A

    2015-11-04

    Recent publications have reported follow-up of the RTS,S/AS01 malaria vaccine candidate Phase III trials at 11 African sites for 32 months (or longer). This includes site- and time-specific estimates of incidence and efficacy against clinical disease with four different vaccination schedules. These data allow estimation of the time-course of protection against infection associated with two different ages of vaccination, both with and without a booster dose. Using an ensemble of individual-based stochastic models, each trial cohort in the Phase III trial was simulated assuming many different hypothetical profiles for the vaccine efficacy against infection in time, for both the primary course and boosting dose and including the potential for either exponential or non-exponential decay. The underlying profile of protection was determined by Bayesian fitting of these model predictions to the site- and time-specific incidence of clinical malaria over 32 months (or longer) of follow-up. Using the same stochastic models, projections of clinical efficacy in each of the sites were modelled and compared to available observed trial data. The initial protection of RTS,S immediately following three doses is estimated as providing an efficacy against infection of 65 % (when immunizing infants aged 6-12 weeks old) and 91 % (immunizing children aged 5-17 months old at first vaccination). This protection decays relatively rapidly, with an approximately exponential decay for the 6-12 weeks old cohort (with a half-life of 7.2 months); for the 5-17 months old cohort a biphasic decay with a similar half-life is predicted, with an initial rapid decay followed by a slower decay. The boosting dose was estimated to return protection to an efficacy against infection of 50-55 % for both cohorts. Estimates of clinical efficacy by trial site are consistent with those reported in the trial for all cohorts. The site- and time-specific clinical observations from the RTS,S/AS01 trial data allowed a reasonably precise estimation of the underlying vaccine protection against infection which is consistent with common underlying efficacy and decay rates across the trial sites. This calibration suggests that the decay in efficacy against clinical disease is more rapid than that against infection because of age-shifts in the incidence of disease. The dynamical models predict that clinical effectiveness will continue to decay and that likely effects beyond the time-scale of the trial will be small.
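
    The decay profiles described above translate directly into code: a single exponential with a 7.2-month half-life for the 6-12 weeks cohort and a biphasic (fast plus slow) decay for the 5-17 months cohort. The biphasic weighting and the slow-phase half-life below are assumptions; only the initial efficacies (65% and 91%) and the fast half-life come from the text.

```python
import numpy as np

def exp_decay(t_months, e0, half_life):
    """Single-exponential decay of efficacy against infection."""
    return e0 * np.exp(-np.log(2.0) * t_months / half_life)

def biphasic(t_months, e0, frac_fast, h_fast, h_slow):
    """Fast plus slow exponential components (weights sum to one)."""
    return e0 * (frac_fast * np.exp(-np.log(2.0) * t_months / h_fast)
                 + (1.0 - frac_fast) * np.exp(-np.log(2.0) * t_months / h_slow))

for t in (0, 6, 12, 24, 32):
    infants = exp_decay(t, 0.65, 7.2)
    children = biphasic(t, 0.91, 0.7, 7.2, 60.0)  # assumed split, slow phase
    print(f"month {t:2d}: 6-12 wk cohort {infants:.2f}, "
          f"5-17 mo cohort {children:.2f}")
```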

  6. Vector Observation-Aided/Attitude-Rate Estimation Using Global Positioning System Signals

    NASA Technical Reports Server (NTRS)

    Oshman, Yaakov; Markley, F. Landis

    1997-01-01

    A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.

  7. Piecewise exponential survival times and analysis of case-cohort data.

    PubMed

    Li, Yan; Gail, Mitchell H; Preston, Dale L; Graubard, Barry I; Lubin, Jay H

    2012-06-15

    Case-cohort designs select a random sample of a cohort to be used as control with cases arising from the follow-up of the cohort. Analyses of case-cohort studies with time-varying exposures that use Cox partial likelihood methods can be computer intensive. We propose a piecewise-exponential approach where Poisson regression model parameters are estimated from a pseudolikelihood and the corresponding variances are derived by applying Taylor linearization methods that are used in survey research. The proposed approach is evaluated using Monte Carlo simulations. An illustration is provided using data from the Alpha-Tocopherol, Beta-Carotene Cancer Prevention Study of male smokers in Finland, where a case-cohort study of serum glucose level and pancreatic cancer was analyzed. Copyright © 2012 John Wiley & Sons, Ltd.
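
    The core trick of the piecewise-exponential approach is to expand each subject's follow-up into interval-specific records and fit a Poisson regression with log person-time as an offset; interval dummies estimate the baseline hazard and the exposure coefficient estimates the log hazard ratio. The sketch below shows the standard full-cohort version on simulated data; the paper's case-cohort pseudolikelihood and survey-linearized variances add weighting on top of this and are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 2000
x = rng.binomial(1, 0.5, n)                          # binary exposure
t = rng.exponential(1.0 / (0.05 * np.exp(0.7 * x)))  # true log-HR = 0.7
cens = np.minimum(t, 10.0)                           # administrative censoring
event = (t <= 10.0).astype(int)

cuts = np.array([0.0, 2.0, 5.0, 10.0])               # constant-hazard intervals
rows = []
for i in range(n):
    for j in range(len(cuts) - 1):
        lo, hi = cuts[j], cuts[j + 1]
        if cens[i] <= lo:
            break
        at_risk = min(cens[i], hi) - lo              # person-time in interval
        died = int(event[i] and cens[i] <= hi)
        rows.append((died, at_risk, x[i], j))
d, pt, xx, per = map(np.array, zip(*rows))

X = np.column_stack([np.eye(3)[per], xx])            # interval dummies + exposure
fit = sm.GLM(d, X, family=sm.families.Poisson(), offset=np.log(pt)).fit()
print("estimated log hazard ratio:", round(fit.params[-1], 3))  # ~0.7
```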

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forbes, G.B.; Drenick, E.J.

    An analysis of the change in total body nitrogen during fasting shows that it declines exponentially, a small fraction being lost rapidly (t1/2 of a few days), and the remainder being lost slowly (t1/2 of many months). The obese faster loses N, and weight, at a slower relative rate than the nonobese; and the ratio of N loss to weight loss during an extended fast is inversely related to body fat content, being about 20 g/kg in the nonobese and about 10 g/kg in those with body fat burdens of 50 kg or more. The loss of body N on a low protein-calorie adequate diet can also be described in exponential terms, and this function allows an estimate to be made of the N requirement.

  9. Fission and quasifission of composite systems with Z =108 -120 : Transition from heavy-ion reactions involving S and Ca to Ti and Ni ions

    NASA Astrophysics Data System (ADS)

    Kozulin, E. M.; Knyazheva, G. N.; Novikov, K. V.; Itkis, I. M.; Itkis, M. G.; Dmitriev, S. N.; Oganessian, Yu. Ts.; Bogachev, A. A.; Kozulina, N. I.; Harca, I.; Trzaska, W. H.; Ghosh, T. K.

    2016-11-01

    Background: Suppression of compound nucleus formation in reactions with heavy ions by the quasifission process depends on the reaction entrance channel. Purpose: Investigation of fission and quasifission processes in the reactions 36S, 48Ca, 48Ti, and 64Ni + 238U at energies around the Coulomb barrier. Methods: Mass-energy distributions of fissionlike fragments formed in the reaction 48Ti + 238U at energies of 247, 258, and 271 MeV have been measured using the double-arm time-of-flight spectrometer CORSET at the U400 cyclotron of the Flerov Laboratory of Nuclear Reactions and compared with mass-energy distributions for the reactions 36S, 48Ca, 64Ni + 238U. Results: The most probable fragment masses as well as total kinetic energies and their dispersions in dependence on the interaction energies have been investigated for asymmetric and symmetric fragments for the studied reactions. The fusion probabilities have been deduced from the analysis of mass-energy distributions. Conclusion: The estimated fusion probability for the reactions of S, Ca, Ti, and Ni ions with actinide nuclei shows that it depends exponentially on the mean fissility parameter of the system. For reactions with actinide nuclei leading to the formation of superheavy elements, the fusion probabilities are several orders of magnitude higher than in the case of cold fusion reactions.

  10. Pollen-mediated gene flow from glyphosate-resistant common waterhemp (Amaranthus rudis Sauer): consequences for the dispersal of resistance genes

    PubMed Central

    Sarangi, Debalin; Tyre, Andrew J.; Patterson, Eric L.; Gaines, Todd A.; Irmak, Suat; Knezevic, Stevan Z.; Lindquist, John L.; Jhala, Amit J.

    2017-01-01

    Gene flow is an important component in evolutionary biology; however, the role of gene flow in dispersal of herbicide-resistant alleles among weed populations is poorly understood. Field experiments were conducted at the University of Nebraska-Lincoln to quantify pollen-mediated gene flow (PMGF) from glyphosate-resistant (GR) to -susceptible (GS) common waterhemp using a concentric donor-receptor design. More than 130,000 common waterhemp plants were screened and 26,199 plants were confirmed resistant to glyphosate. Frequency of gene flow from all distances, directions, and years was estimated with a double exponential decay model using Generalized Nonlinear Model (package gnm) in R. PMGF declined by 50% at <3 m distance from the pollen source, whereas 90% reduction was found at 88 m (maximum) depending on the direction of the pollen-receptor blocks. Amplification of the target site gene, 5-enolpyruvylshikimate-3-phosphate synthase (EPSPS), was identified as the mechanism of glyphosate resistance in parent biotype. The EPSPS gene amplification was heritable in common waterhemp and can be transferred via PMGF, and also correlated with glyphosate resistance in pseudo-F2 progeny. This is the first report of PMGF in GR common waterhemp and the results are critical in explaining the rapid dispersal of GR common waterhemp in Midwestern United States. PMID:28327669
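
    A sketch of fitting such a double exponential decay of gene-flow frequency with distance is shown below. The exact parameterization used with R's gnm package and all data values here are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def pmgf(d, f0, w, k1, k2):
    """Two-component exponential decline of gene-flow frequency with
    distance d (m); k1 governs the steep near-source component."""
    return f0 * (w * np.exp(-k1 * d) + (1.0 - w) * np.exp(-k2 * d))

dist = np.array([0.5, 1, 2, 4, 8, 16, 25, 35, 50], dtype=float)
freq = np.array([0.35, 0.27, 0.16, 0.075, 0.031, 0.012,
                 0.006, 0.003, 0.0015])              # invented frequencies

popt, _ = curve_fit(pmgf, dist, freq, p0=[0.4, 0.8, 0.5, 0.05],
                    bounds=([0, 0, 0, 0], [1, 1, 10, 1]))
grid = np.linspace(0.0, 90.0, 9001)
d50 = grid[np.argmax(pmgf(grid, *popt) <= 0.5 * popt[0])]
d90 = grid[np.argmax(pmgf(grid, *popt) <= 0.1 * popt[0])]
print(f"50% decline at ~{d50:.1f} m, 90% decline at ~{d90:.1f} m")
```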

  11. Illuminating the Intricate Details of Tremor and Slow Slip Using an Array of Arrays

    NASA Astrophysics Data System (ADS)

    Creager, K. C.; Vidale, J. E.; Sweet, J. R.; Chestler, S.; Ghosh, A.

    2014-12-01

    Our Array of Arrays experiment consisted of eight 1-km aperture arrays, each containing 10-20 three-component continuously recording stations. One of these arrays ran continuously for five years and the others for more than one year. We applied frequency-domain beamforming to each array, and a multi-beam back-projection method to detect and locate tremor on the Cascadia subduction plate interface every minute. We have also used the arrays to detect and locate over 10,000 tiny repeating Low-Frequency Earthquakes (LFEs) in dozens of distinct families. Repeating events are detected by autocorrelating every 6-s window with every other one during many 1-hour periods and stacking them across several stations. Clean templates are built for each family by iteratively scanning for new repeats and stacking them into the previous template. LFE catalogs are obtained by scanning templates through years of continuous data. Waveform similarities across LFEs and across stations within arrays are used to estimate seismic moments, double-difference event locations, and source spectra. These methods have revealed fascinating space-time patterns in both tremor and LFEs that shed light on the propagation modes of slow slip earthquakes on the subduction plate interface, including tremor streaks that propagate at 100 km/hour parallel to relative plate motion, Rapid Tremor Reversals that propagate at 10 km/hour, and variations of up to a factor of 4 in the 0.4 km/hour along-strike propagation speed of the main rupture front, indicating sticky spots on the plate interface. Rather than following a standard Gutenberg-Richter power-law relation, the distributions of seismic moment of LFEs within each family follow an exponential law, allowing estimates of a characteristic size. LFEs for a given family cluster in time. Going up dip, the time between LFE bursts varies systematically from about a week to a year, durations from an hour to several days, and characteristic moment magnitudes from 1.25 to 1.85. The characteristic moment for up-dip LFEs is thus 8 times bigger than that of their down-dip counterparts. Double-difference locations indicate that many of the families occur on patches that are elongated in the direction of relative plate motion, perhaps related to structural features on the plate interface.
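
    The contrast drawn above between a Gutenberg-Richter power law and an exponential moment distribution has a simple computational payoff: an exponential law has a characteristic scale, and its maximum-likelihood estimate is just the sample mean. A hedged sketch with synthetic moments (the 10**11.4 N*m scale is chosen only to give magnitudes near the quoted 1.25-1.85 range):

```python
# Sketch: if LFE seismic moments are exponentially distributed, the sample
# mean is the MLE of the characteristic moment; convert it to a moment
# magnitude with Mw = (2/3)(log10(M0) - 9.1), M0 in N*m. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
moments = rng.exponential(10**11.4, size=5000)   # assumed characteristic scale

M0_hat = moments.mean()
Mw_char = (2.0 / 3.0) * (np.log10(M0_hat) - 9.1)
print(f"characteristic moment ~ {M0_hat:.2e} N*m  ->  Mw ~ {Mw_char:.2f}")

# Exponential signature: the log CCDF is linear in M0 with slope -1/mean
# (a power law would instead be linear in log M0).
srt = np.sort(moments)
ccdf = 1.0 - np.arange(1, srt.size + 1) / srt.size
slope = np.polyfit(srt[:-1], np.log(ccdf[:-1]), 1)[0]
print(f"log-CCDF slope ~ {slope:.2e}; -1/mean = {-1.0 / M0_hat:.2e}")
```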

  12. Methicillin-resistant Staphylococcus aureus (MRSA) is increasing in Norway: a time series analysis of reported MRSA and methicillin-sensitive S. aureus cases, 1997-2010.

    PubMed

    Moxnes, John F; de Blasio, Birgitte Freiesleben; Leegaard, Truls Michael; Moen, Aina E Fossum

    2013-01-01

    Accurate estimates of the incidence and prevalence of methicillin-resistant Staphylococcus aureus (MRSA) infections are needed to inform public health policies. In Norway, where both MRSA infection and carriage are notifiable conditions, the reported incidence of MRSA is slowly increasing. However, the proportion of MRSA in relation to all S. aureus isolates is unknown, making it difficult to determine if the rising incidence is real or an artifact of an increasing number of tests performed. This study aimed to characterize recent trends in MRSA infections and obtain a more complete understanding of the MRSA level in Norway. All reported cases of MRSA and methicillin-sensitive S. aureus (MSSA) from Oslo County (1997-2010) and Health Region East (2002-2008), representing approximately 11% and 36% of the Norwegian population, respectively, were analyzed using a stochastic time series analysis to characterize trends. In Oslo County, the proportion of methicillin-resistant cases increased from 0.73% to 3.78% during the study period and was well modeled by exponential growth with a doubling constant of 5.7 years (95% CI 4.5-7.4 years). In Health Region East, the proportion of MRSA cases increased from 0.4% to 2.1% from 2002 to 2008, with a best-fitting linear increase of 0.26% (95% CI 0.21-0.30%) per year. In both cases, the choice of a linear or an exponential model for the time trend produced only marginally different model fits. We found no significant changes due to the revised national MRSA guidelines published in June 2009. Significant variations in the increasing time trend were observed among the five hospitals within the region. The yearly reported incidence of MSSA was relatively stable in both study areas, although we found seasonal patterns with peaks in August. The level of MRSA is increasing in Norway, and the proportion of methicillin resistance among all S. aureus isolates is higher than the reported proportion of MRSA in invasive infections.
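
    The reported doubling constant can be checked against the endpoint proportions directly: with p(t) = p0 * 2**(t/Td), a 0.73% proportion in 1997 and Td = 5.7 years should land near the reported 3.78% by 2010. A one-liner makes the arithmetic explicit:

```python
# Consistency check of the exponential-growth model quoted above.
p0, Td, years = 0.73, 5.7, 2010 - 1997              # %, years, elapsed time
p_2010 = p0 * 2 ** (years / Td)
print(f"projected 2010 proportion: {p_2010:.2f}%")  # ~3.55%, close to 3.78%
```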

  13. Methicillin-Resistant Staphylococcus aureus (MRSA) Is Increasing in Norway: A Time Series Analysis of Reported MRSA and Methicillin-Sensitive S. aureus Cases, 1997–2010

    PubMed Central

    Moxnes, John F.; de Blasio, Birgitte Freiesleben; Leegaard, Truls Michael; Moen, Aina E. Fossum

    2013-01-01

    Background Accurate estimates of the incidence and prevalence of methicillin-resistant Staphylococcus aureus (MRSA) infections are needed to inform public health policies. In Norway, where both MRSA infection and carriage are notifiable conditions, the reported incidence of MRSA is slowly increasing. However, the proportion of MRSA in relation to all S. aureus isolates is unknown, making it difficult to determine if the rising incidence is real or an artifact of an increasing number of tests performed. Aim To characterize recent trends in MRSA infections and obtain a more complete understanding of the MRSA level in Norway. Methods All reported cases of MRSA and methicillin-sensitive S. aureus (MSSA) from Oslo County (1997–2010) and Health Region East (2002–2008), representing approximately 11% and 36% of the Norwegian population, respectively, were analyzed using a stochastic time series analysis to characterize trends. Results In Oslo County, the proportion of methicillin-resistant cases increased from 0.73% to 3.78% during the study period and was well modeled by exponential growth with a doubling constant of 5.7 years (95% CI 4.5–7.4 years). In Health Region East, the proportion of MRSA cases increased from 0.4% to 2.1% from 2002 to 2008, with a best-fitting linear increase of 0.26% (95% CI 0.21–0.30%) per year. In both cases, the choice of a linear or an exponential model for the time trend produced only marginally different model fits. We found no significant changes due to the revised national MRSA guidelines published in June 2009. Significant variations in the increasing time trend were observed among the five hospitals within the region. The yearly reported incidence of MSSA was relatively stable in both study areas, although we found seasonal patterns with peaks in August. Conclusion The level of MRSA is increasing in Norway, and the proportion of methicillin resistance among all S. aureus isolates is higher than the reported proportion of MRSA in invasive infections. PMID:23936442

  14. Decomposition rates for hand-piled fuels

    Treesearch

    Clinton S. Wright; Alexander M. Evans; Joseph C. Restaino

    2017-01-01

    Hand-constructed piles in eastern Washington and north-central New Mexico were weighed periodically between October 2011 and June 2015 to develop decay-rate constants that are useful for estimating the rate of piled biomass loss over time. Decay-rate constants (k) were determined by fitting negative exponential curves to time series of pile weight for each site. Piles...
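
    Since the abstract is truncated, here is the standard way such decay-rate constants are obtained, sketched with invented numbers: assume W(t) = W0 * exp(-k t) and estimate k as the negative slope of a log-linear regression of pile weight on time.

```python
# Minimal sketch of estimating a decay-rate constant k from pile weights.
import numpy as np

t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 3.7])    # years since piling (invented)
w = np.array([100., 93., 87., 75., 66., 60.])   # pile weight, kg (invented)

k = -np.polyfit(t, np.log(w), 1)[0]             # slope of log-weight vs. time
print(f"k ~ {k:.3f} per year; half-life ~ {np.log(2) / k:.1f} years")
```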

  15. Forecasting Performance of Grey Prediction for Education Expenditure and School Enrollment

    ERIC Educational Resources Information Center

    Tang, Hui-Wen Vivian; Yin, Mu-Shang

    2012-01-01

    GM(1,1) and GM(1,1) rolling models derived from grey system theory were estimated using time-series data from projection studies by the National Center for Education Statistics (NCES). An out-of-sample forecasting competition between the two grey prediction models and the exponential smoothing method used by NCES was conducted for education expenditure and…
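
    For readers unfamiliar with grey prediction, the GM(1,1) mechanics are compact enough to sketch. The series below is invented, not NCES data, and this follows the standard textbook formulation rather than the exact rolling variant used in the study.

```python
# Standard GM(1,1) sketch: accumulate the series, fit the grey parameters
# (a, b) by least squares, then forecast and de-accumulate.
import numpy as np

def gm11_forecast(x0, horizon):
    x1 = np.cumsum(x0)                                # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # grey parameters
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a # accumulated forecast
    x0_hat = np.diff(x1_hat, prepend=0.0)             # back to original scale
    return x0_hat[len(x0):]

expenditure = np.array([520., 545., 574., 601., 633.])  # invented series
print(gm11_forecast(expenditure, horizon=3))
```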

  16. Decay of Correlations, Quantitative Recurrence and Logarithm Law for Contracting Lorenz Attractors

    NASA Astrophysics Data System (ADS)

    Galatolo, Stefano; Nisoli, Isaia; Pacifico, Maria Jose

    2018-03-01

    In this paper we prove that a class of skew product maps with non-uniformly hyperbolic base has exponential decay of correlations. We apply this to obtain a logarithm law for the hitting time associated to a contracting Lorenz attractor at all points having a well-defined local dimension, and a quantitative recurrence estimate.

  17. A joint sparse representation-based method for double-trial evoked potentials estimation.

    PubMed

    Yu, Nannan; Liu, Haikuan; Wang, Xiaoyan; Lu, Hanbing

    2013-12-01

    In this paper, we present a novel approach to the problem of estimating evoked potentials. Generally, the evoked potentials in two consecutive trials obtained by repeated identical stimulation of the nerves are extremely similar. In order to trace evoked potentials, we propose a joint sparse representation-based double-trial evoked potential estimation method that takes full advantage of this similarity. The estimation process is performed in three stages: first, according to the similarity of the evoked potentials and the randomness of the spontaneous electroencephalogram, the two consecutive observations of evoked potentials are considered superpositions of a common component and unique components; second, making use of their characteristics, two sparse dictionaries are constructed; and finally, we apply the joint sparse representation method to extract the common component of the double-trial observations, instead of the evoked potential in each trial. A series of experiments carried out on simulated and human test responses confirmed the superior performance of our method. © 2013 Elsevier Ltd. All rights reserved.

  18. Three-dimensional characterization of bacterial microcolonies on solid agar-based culture media.

    PubMed

    Drazek, Laurent; Tournoud, Maud; Derepas, Frédéric; Guicherd, Maryse; Mahé, Pierre; Pinston, Frédéric; Veyrieras, Jean-Baptiste; Chatellier, Sonia

    2015-02-01

    For the last century, the in vitro diagnostic process in microbiology has mainly relied on the growth of bacteria on the surface of a solid agar medium. Nevertheless, few studies have focused on the dynamics of microcolony growth on agar surfaces during the first 8 to 10 h of incubation. In this article, chromatic confocal microscopy is applied to characterize the early development of a bacterial colony. This technology relies on the wavelength-dependent focusing depth of white light. It allows one to measure the full three-dimensional shape of microcolonies more quickly than classical confocal microscopy but with the same spatial resolution. With the device placed in an incubator, the method was able to individually track colonies growing on an agar plate and to follow the evolution of their surface or volume. Using an appropriate statistical modeling framework, the doubling time was estimated for each individual colony of a given microorganism, as well as its variability between colonies, both within and between agar plates. A proof of concept conducted on four bacterial strains of four distinct species demonstrated the feasibility and the interest of the approach. It showed in particular that doubling times derived from early three-dimensional measurements on microcolonies differed from classical measurements in micro-dilutions based on optical diffusion. Such a precise characterization of the three-dimensional shape of microcolonies in their late-lag to early-exponential phase could be beneficial for in vitro diagnostics. Indeed, real-time monitoring of the biomass available in a colony would make it possible to run well-established microbial identification workflows, for instance MALDI-TOF mass spectrometry, as soon as a sufficient quantity of material is available, thereby reducing the time needed to provide a diagnosis. Moreover, as is done for pre-identification of macro-colonies, morphological indicators such as three-dimensional growth profiles derived from microcolonies could be used to perform a first pre-identification step, but in a shorter time. Copyright © 2014 Elsevier B.V. All rights reserved.
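
    The doubling-time estimation step can be pictured with a hedged sketch (synthetic volumes, not the study's statistical model): in the early-exponential phase V(t) = V0 * 2**(t/Td), so a log2-linear fit per colony yields Td, and the spread of Td across colonies gives the between-colony variability.

```python
# Per-colony doubling times from log2-linear fits of synthetic volume curves.
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(2, 8, 13)                        # hours of incubation
doubling = []
for td_true in rng.normal(0.9, 0.08, size=12):   # 12 colonies, Td ~ 0.9 h
    v = 50 * 2 ** (t / td_true) * rng.lognormal(0, 0.05, t.size)  # um^3
    slope = np.polyfit(t, np.log2(v), 1)[0]      # slope = 1 / Td
    doubling.append(1.0 / slope)
print(f"Td = {np.mean(doubling):.2f} +/- {np.std(doubling):.2f} h across colonies")
```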

  19. Influence of Internal Electric Field on the Recombination Dynamics of Localized Excitons in an InGaN Double-Quantum-Well Laser Diode Wafer Operated at 450 nm

    NASA Astrophysics Data System (ADS)

    Onuma, Takeyoshi; Chichibu, Shigefusa F.; Aoyama, Toyomi; Nakajima, Kiyomi; Ahmet, Parhat; Azuhata, Takashi; Chikyow, Toyohiro; Sota, Takayuki; Nagahama, Shin-ichi; Mukai, Takashi

    2003-12-01

    Optical and structural properties of an InGaN double-quantum-well (DQW) laser diode (LD) wafer that lased at 450 nm were investigated to assess the enormous impact of the polarization-induced electric field on the recombination dynamics in InGaN quantum structures. The quantum-well (QW) structure was shown to have a well thickness as small as approximately 1 nm and an InN molar fraction x of approximately 14%. The gross effective electric field in the QW (FQW) was estimated to be 490 kV/cm from the Franz-Keldysh oscillation (FKO) period in the electroreflectance (ER) spectrum, implying that an internal piezoelectric field (Fpiz) of approximately 1.4 MV/cm was cancelled by the pn-junction built-in field (Fbi) and by Coulomb screening due to carriers in the DQW. The magnitude of FQW can be further weakened by applying a reverse bias (VR) to the junction; the decrease in the photoluminescence (PL) lifetime at low temperature measured under VR was explained by a recovery of the electron-hole wavefunction overlap for small VR (|VR|<4 V), and mainly by the tunneling escape of carriers through the barriers for larger VR. By applying an appropriate VR smaller than 4 V, the electron-hole wavefunction overlap, which had been separated vertically along the c-axis by the quantum-confined Stark effect, could be partially recovered, and the time-resolved PL signals then exhibited a less-pronounced stretched exponential decay, giving a scaling parameter (β) of 0.85 and an effective in-plane localization depth (E0) of 40-50 meV for the spontaneous emission. These values are closer to those of more homogeneous QWs than those reported previously for InGaN QWs having similar InN molar fractions. The use of very thin QWs is considered to facilitate Coulomb screening of FQW and population inversion under high excitation conditions.

  20. A single-phase white light emitting Pr3+ doped Ba2CaWO6 phosphor: synthesis, photoluminescence and optical properties

    NASA Astrophysics Data System (ADS)

    Sreeja, E.; Vidyadharan, Viji; Jose, Saritha K.; George, Anns; Joseph, Cyriac; Unnikrishnan, N. V.; Biju, P. R.

    2018-04-01

    Pr3+ doped Ba2CaWO6 phosphors were prepared by the traditional high-temperature solid-state reaction technique. The structure evolution was systematically investigated by X-ray diffraction (XRD), energy-dispersive X-ray spectroscopy (EDS), Fourier-transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM) and transmission electron microscopy (TEM) analysis. The X-ray powder diffraction patterns indicate that the prepared phosphors crystallized in the cubic double-perovskite structure. The functional groups were identified using the FTIR spectra, and the elements present in the composition were confirmed by the EDS profile. The morphology of the phosphor was characterized using SEM and TEM analysis. The PL spectra showed that these phosphors can be efficiently excited through the charge transfer band of the host, and the maximum luminescence intensity was observed at 0.06 wt% of Pr3+ ions. Upon charge-transfer-band excitation, the emission spectra showed peaks at 489, 532, 647, 685 and 737 nm corresponding to the 3P0→3H4, 3P1→3H5, 3P0→3F2, 3P0→3F3 and 3P0→3F4 transitions, respectively. The concentration quenching of the Ba2CaWO6:Pr3+ phosphor can be mainly attributed to dipole-dipole interaction. The CIE coordinates were estimated to be close to the white region. The decay curves are well fitted by a double exponential decay model. The standard and modified Judd-Ofelt (JO) theories were used to determine the Judd-Ofelt intensity parameters, radiative transition probabilities and branching ratios. The optical properties indicate that Ba2CaWO6:Pr3+ phosphors can produce white light emission from a single-phase host and have potential applications in solid-state lighting and display devices.
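
    A double exponential decay fit of the kind mentioned above is routine to reproduce; the sketch below uses invented lifetimes, not the measured Ba2CaWO6:Pr3+ values, and also computes the common intensity-weighted average lifetime.

```python
# Bi-exponential fit of a luminescence decay curve plus average lifetime.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

rng = np.random.default_rng(3)
t = np.linspace(0, 60, 200)                             # microseconds
y = biexp(t, 0.7, 4.0, 0.3, 18.0) + rng.normal(0, 0.004, t.size)

(a1, tau1, a2, tau2), _ = curve_fit(biexp, t, y, p0=(0.5, 3.0, 0.5, 15.0))
# Intensity-weighted average lifetime <tau> = (a1*tau1^2 + a2*tau2^2)/(a1*tau1 + a2*tau2)
tau_avg = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)
print(f"tau1 ~ {tau1:.1f} us, tau2 ~ {tau2:.1f} us, <tau> ~ {tau_avg:.1f} us")
```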

  1. Robust and efficient estimation with weighted composite quantile regression

    NASA Astrophysics Data System (ADS)

    Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng

    2016-09-01

    In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.

  2. Ferromagnetic resonance in coupled permalloy double films separated by a Cu interlayer

    NASA Astrophysics Data System (ADS)

    Maksymowicz, A. Z.; Whiting, J. S. S.; Watson, M. L.; Chambers, A.

    1991-03-01

    Ferromagnetic resonance (FMR) at 16 GHz was used to study the magnetic coupling between two layers of permalloy separated by a nonmagnetic Cu layer. Samples with the same thickness (600 Å) of both permalloy layers were deposited from e-gun sources onto glass substrates in UHV. The thickness d of the Cu interlayer was varied from 5 to 37 Å. The exchange coupling energy model (E = -K M1·M2) was used to describe the interaction between the two magnetic layers. It was found from the ferromagnetic resonance data in the perpendicular configuration that K(d) follows an exponential law, K = K0 exp(-d/q), where q = 9.3 Å.

  3. Instantaneous and dynamical decoherence

    NASA Astrophysics Data System (ADS)

    Polonyi, Janos

    2018-04-01

    Two manifestations of decoherence, called instantaneous and dynamical, are investigated. The former reflects the suppression of the interference between the components of the current state, while the latter reflects that within the initial state. These types of decoherence are computed for the Brownian motion and for the harmonic and anharmonic oscillators within the semiclassical approximation. A remarkable phenomenon, namely the opposite orientation of the time arrow of the dynamical variables compared to that of the quantum fluctuations, generates a double exponential time dependence of the dynamical decoherence in the presence of a harmonic force. For the weakly anharmonic oscillator, the dynamical decoherence is found to depend in a singular way on the amount of anharmonicity.

  4. Universal renormalization-group dynamics at the onset of chaos in logistic maps and nonextensive statistical mechanics

    NASA Astrophysics Data System (ADS)

    Baldovin, F.; Robledo, A.

    2002-10-01

    We uncover the dynamics at the chaos threshold μ∞ of the logistic map and find that it consists of trajectories made of intertwined power laws that reproduce the entire period-doubling cascade that occurs for μ<μ∞. We corroborate this structure analytically via the Feigenbaum renormalization-group (RG) transformation and find that the sensitivity to initial conditions has precisely the form of a q-exponential, of which we determine the q-index and the q-generalized Lyapunov coefficient λq. Our results are an unequivocal validation of the applicability of the nonextensive generalization of Boltzmann-Gibbs statistical mechanics to critical points of nonlinear maps.

  5. An Analysis of Wave Interactions in Swept-Wing Flows

    NASA Technical Reports Server (NTRS)

    Reed, H. L.

    1984-01-01

    Crossflow instabilities dominate disturbance growth in the leading-edge region of swept wings. Streamwise vortices in a boundary layer strongly influence the behavior of other disturbances. Amplification of crossflow vortices near the leading edge produces a residual spanwise nonuniformity in the mid-chord regions where Tollmien-Schlichting (T-S) waves are strongly amplified. Should the T-S wave undergo double-exponential growth because of this effect, the usual transition prediction methods would fail. The crossflow/Tollmien-Schlichting wave interaction was modeled as a secondary instability. The effects of suction are included, and different stability criteria are examined. The results are applied to laminar flow control wings characteristic of energy-efficient aircraft designs.

  6. Plasmids as stochastic model systems

    NASA Astrophysics Data System (ADS)

    Paulsson, Johan

    2003-05-01

    Plasmids are self-replicating gene clusters present in on average 2-100 copies per bacterial cell. To reduce random fluctuations and thereby avoid extinction, they ubiquitously autoregulate their own synthesis using negative feedback loops. Here I use van Kampen's Ω-expansion for a two-dimensional model of negative feedback including plasmids and their replication inhibitors. This analytically summarizes the standard perspective on replication control, including the effects of sensitivity amplification, exponential time-delays and noisy signaling. I further review the two most common molecular sensitivity mechanisms: multistep control and cooperativity. Finally, I discuss more controversial sensitivity schemes, such as noise-enhanced sensitivity, the exploitation of small-number combinatorics and double-layered feedback loops to suppress noise in disordered environments.

  7. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

    NASA Astrophysics Data System (ADS)

    Mishkov, Rumen; Darmonski, Stanislav

    2018-01-01

    The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown-parameter estimation without persistent excitation and the capability to directly control the transient response time of the estimates. The proposed method modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concept. The data accumulation principle is the main tool for achieving asymptotic unknown-parameter estimation. It relies on the parametric identifiability property introduced for the system. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied to nonlinear adaptive speed-tracking vector control of a three-phase induction motor.

  8. Choice of time-scale in Cox's model analysis of epidemiologic cohort data: a simulation study.

    PubMed

    Thiébaut, Anne C M; Bénichou, Jacques

    2004-12-30

    Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as in clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent of age. It could be severe in the case of a substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data. Copyright © 2004 John Wiley & Sons, Ltd.
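
    The time-scale effect is easy to reproduce in simulation. The sketch below assumes the Python lifelines package and invented parameter values: age to onset is Weibull (so the baseline hazard is not exponential in age), the binary covariate is made age-dependent, and the same data are fit with time-on-study versus age as the time-scale (the latter with entry age as left truncation).

```python
# Hedged simulation of the time-scale choice in Cox regression (lifelines).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n, k, lam, beta = 5000, 4.0, 90.0, np.log(1.5)        # true HR = 1.5
entry = rng.uniform(40, 60, n)                        # age at cohort entry
x = (rng.uniform(size=n) < (entry - 30) / 50).astype(float)  # age-linked covariate
u = rng.uniform(size=n)
# Weibull proportional-hazards onset age, conditional on being event-free at entry:
age = lam * ((entry / lam) ** k - np.log(u) / np.exp(beta * x)) ** (1 / k)

df = pd.DataFrame({"x": x, "entry": entry, "age": age,
                   "tos": age - entry, "event": 1})

cph_age = CoxPHFitter().fit(df[["x", "age", "event", "entry"]],
                            duration_col="age", event_col="event",
                            entry_col="entry")        # age scale, left truncation
cph_tos = CoxPHFitter().fit(df[["x", "tos", "event"]],
                            duration_col="tos", event_col="event")
print("HR, age time-scale:     ", np.exp(cph_age.params_["x"]))  # ~1.5
print("HR, time-on-study scale:", np.exp(cph_tos.params_["x"]))  # shifted away
```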

  9. The Western Africa ebola virus disease epidemic exhibits both global exponential and local polynomial growth rates.

    PubMed

    Chowell, Gerardo; Viboud, Cécile; Hyman, James M; Simonsen, Lone

    2015-01-21

    While many infectious disease epidemics are initially characterized by an exponential growth in time, we show that district-level Ebola virus disease (EVD) outbreaks in West Africa follow slower polynomial-based growth kinetics over several generations of the disease. We analyzed epidemic growth patterns at three different spatial scales (regional, national, and subnational) of the Ebola virus disease epidemic in Guinea, Sierra Leone and Liberia by compiling publicly available weekly time series of reported EVD case numbers from the patient database available from the World Health Organization website for the period 05-Jan to 17-Dec 2014. We found significant differences in the growth patterns of EVD cases at the scale of the country, district, and other subnational administrative divisions. The national cumulative curves of EVD cases in Guinea, Sierra Leone, and Liberia show periods of approximate exponential growth. In contrast, local epidemics are asynchronous and exhibit slow growth patterns during 3 or more EVD generations, which can be better approximated by a polynomial than an exponential function. The slower than expected growth pattern of local EVD outbreaks could result from a variety of factors, including behavior changes, success of control interventions, or intrinsic features of the disease such as a high level of clustering. Quantifying the contribution of each of these factors could help refine estimates of final epidemic size and the relative impact of different mitigation efforts in current and future EVD outbreaks.
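
    The exponential-versus-polynomial distinction can be made operational with a small model comparison. The sketch below uses a synthetic "district" curve that grows quadratically and compares the two models on log counts; analyzing real EVD curves would of course require the WHO time series cited above.

```python
# Toy comparison: exponential (log C linear in t) vs. polynomial
# (log C linear in log t) growth of cumulative case counts.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(1, 16)                                         # weeks
cases = np.round(4 * t**2 * rng.lognormal(0, 0.05, t.size))  # polynomial truth

def r2(y, yhat):
    return 1 - np.sum((y - yhat)**2) / np.sum((y - y.mean())**2)

logC = np.log(cases)
exp_fit = np.polyfit(t, logC, 1)
pol_fit = np.polyfit(np.log(t), logC, 1)
print("R2, exponential model:", round(r2(logC, np.polyval(exp_fit, t)), 3))
print("R2, polynomial model :", round(r2(logC, np.polyval(pol_fit, np.log(t))), 3))
print("fitted polynomial degree ~", round(pol_fit[0], 2))    # ~2
```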

  10. The Western Africa Ebola Virus Disease Epidemic Exhibits Both Global Exponential and Local Polynomial Growth Rates

    PubMed Central

    Chowell, Gerardo; Viboud, Cécile; Hyman, James M; Simonsen, Lone

    2015-01-01

    Background: While many infectious disease epidemics are initially characterized by an exponential growth in time, we show that district-level Ebola virus disease (EVD) outbreaks in West Africa follow slower polynomial-based growth kinetics over several generations of the disease. Methods: We analyzed epidemic growth patterns at three different spatial scales (regional, national, and subnational) of the Ebola virus disease epidemic in Guinea, Sierra Leone and Liberia by compiling publicly available weekly time series of reported EVD case numbers from the patient database available from the World Health Organization website for the period 05-Jan to 17-Dec 2014. Results: We found significant differences in the growth patterns of EVD cases at the scale of the country, district, and other subnational administrative divisions. The national cumulative curves of EVD cases in Guinea, Sierra Leone, and Liberia show periods of approximate exponential growth. In contrast, local epidemics are asynchronous and exhibit slow growth patterns during 3 or more EVD generations, which can be better approximated by a polynomial than an exponential function. Conclusions: The slower than expected growth pattern of local EVD outbreaks could result from a variety of factors, including behavior changes, success of control interventions, or intrinsic features of the disease such as a high level of clustering. Quantifying the contribution of each of these factors could help refine estimates of final epidemic size and the relative impact of different mitigation efforts in current and future EVD outbreaks. PMID:25685633

  11. Avalanche Analysis from Multielectrode Ensemble Recordings in Cat, Monkey, and Human Cerebral Cortex during Wakefulness and Sleep

    PubMed Central

    Dehghani, Nima; Hatsopoulos, Nicholas G.; Haga, Zach D.; Parker, Rebecca A.; Greger, Bradley; Halgren, Eric; Cash, Sydney S.; Destexhe, Alain

    2012-01-01

    Self-organized critical states are found in many natural systems, from earthquakes to forest fires; they have also been observed in neural systems, particularly in neuronal cultures. However, the presence of critical states in the awake brain remains controversial. Here, we compared avalanche analyses performed on different in vivo preparations during wakefulness, slow-wave sleep, and REM sleep, using high-density electrode arrays in cat motor cortex (96 electrodes), monkey motor and premotor cortices, and human temporal cortex (96 electrodes) in epileptic patients. For neuronal avalanches defined from units (up to 160 single units), the size of avalanches never clearly scaled as a power law, but rather scaled exponentially or displayed intermediate scaling. We also analyzed the dynamics of local field potentials (LFPs), and in particular LFP negative peaks (nLFPs), among the different electrodes (up to 96 sites in temporal cortex or up to 128 sites in adjacent motor and premotor cortices). In this case, the avalanches defined from nLFPs displayed power-law scaling in double-logarithmic representations, as reported previously in monkey. However, avalanches defined from positive LFP (pLFP) peaks, which are less directly related to neuronal firing, also displayed apparent power-law scaling. Closer examination of this scaling using the more reliable cumulative distribution function (CDF) and other rigorous statistical measures did not confirm power-law scaling. The same pattern was seen for cat, monkey, and human, as well as for the different brain states of wakefulness and sleep. We also tested alternative distributions. Multiple-exponential fitting yielded optimal fits of the avalanche dynamics with bi-exponential distributions. Collectively, these results show no clear evidence for power-law scaling or self-organized critical states in the awake and sleeping brain of mammals, from cat to man. PMID:22934053

  12. Estimating the number of double-strand breaks formed during meiosis from partial observation.

    PubMed

    Toyoizumi, Hiroshi; Tsubouchi, Hideo

    2012-12-01

    Analyzing the basic mechanism of DNA double-strand break (DSB) formation during meiosis is important for understanding sexual reproduction and genetic diversity. The location and number of meiotic DSBs can be examined using a common molecular biological technique called Southern blotting, but only a subset of the total DSBs can be observed: only DSB fragments still carrying the region recognized by a Southern blot probe are detected. Under the assumption that DSB formation follows a nonhomogeneous Poisson process, we propose two estimators of the total number of DSBs on a chromosome: (1) an estimator based on the Nelson-Aalen estimator, and (2) an estimator based on a record value process. Further, we compare their asymptotic accuracy.
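
    The Nelson-Aalen estimator on which the first proposed estimator builds is worth spelling out: it accumulates d_i/n_i over event times, with d_i events among n_i subjects at risk. A minimal self-contained sketch with invented, partly censored data:

```python
# Minimal Nelson-Aalen cumulative-hazard estimator.
import numpy as np

def nelson_aalen(times, observed):
    """Return event times and cumulative hazard H(t) = sum_{t_i<=t} d_i/n_i."""
    times = np.asarray(times, float)
    observed = np.asarray(observed, bool)
    uniq = np.unique(times[observed])
    H, h = [], 0.0
    for tv in uniq:
        d = np.sum((times == tv) & observed)   # events at tv
        n = np.sum(times >= tv)                # at risk just before tv
        h += d / n
        H.append(h)
    return uniq, np.array(H)

t_evt, H = nelson_aalen([2, 3, 3, 5, 8, 8, 9], [1, 1, 0, 1, 1, 1, 0])
print(dict(zip(t_evt, np.round(H, 3))))        # {2: 0.143, 3: 0.31, 5: 0.56, 8: 1.226}
```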

  13. A double-observer approach for estimating detection probability and abundance from point counts

    USGS Publications Warehouse

    Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Fallon, F.W.; Fallon, J.E.; Heglund, P.J.

    2000-01-01

    Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated 'primary' observer indicates to another ('secondary') observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.
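
    The detection-probability logic is easiest to see in the independent-observer special case, which reduces to Lincoln-Petersen estimation; the dependent primary/secondary design above was actually fit with program SURVIV, so the sketch below (with invented counts) is a simplification, not the paper's model.

```python
# Independent double-observer sketch: detection probabilities and abundance.
n1, n2, both = 62, 57, 48        # observer A total, observer B total, seen by both

p1 = both / n2                   # P(A detects), estimated from B's detections
p2 = both / n1                   # P(B detects), estimated from A's detections
p_any = 1 - (1 - p1) * (1 - p2)  # P(detected by at least one observer)
N_hat = n1 * n2 / both           # Lincoln-Petersen abundance estimate
print(f"p1={p1:.2f}  p2={p2:.2f}  p(either)={p_any:.2f}  N_hat={N_hat:.1f}")
```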

  14. Density estimation in wildlife surveys

    USGS Publications Warehouse

    Bart, Jonathan; Droege, Sam; Geissler, Paul E.; Peterjohn, Bruce G.; Ralph, C. John

    2004-01-01

    Several authors have recently discussed the problems with using index methods to estimate trends in population size. Some have expressed the view that index methods should virtually never be used. Others have responded by defending index methods and questioning whether better alternatives exist. We suggest that index methods are often a cost-effective component of valid wildlife monitoring but that double-sampling or another procedure that corrects for bias or establishes bounds on bias is essential. The common assertion that index methods require constant detection rates for trend estimation is mathematically incorrect; the requirement is no long-term trend in detection "ratios" (index result/parameter of interest), a requirement that is probably approximately met by many well-designed index surveys. We urge that more attention be given to defining bird density rigorously and in ways useful to managers. Once this is done, 4 sources of bias in density estimates may be distinguished: coverage, closure, surplus birds, and detection rates. Distance, double-observer, and removal methods do not reduce bias due to coverage, closure, or surplus birds. These methods may yield unbiased estimates of the number of birds present at the time of the survey, but only if their required assumptions are met, which we doubt occurs very often in practice. Double-sampling, in contrast, produces unbiased density estimates if the plots are randomly selected and estimates on the intensive surveys are unbiased. More work is needed, however, to determine the feasibility of double-sampling in different populations and habitats. We believe the tension that has developed over appropriate survey methods can best be resolved through increased appreciation of the mathematical aspects of indices, especially the effects of bias, and through studies in which candidate methods are evaluated against known numbers determined through intensive surveys.

  15. On High-Order Radiation Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1995-01-01

    In this paper we develop the theory of high-order radiation boundary conditions for wave propagation problems. In particular, we study the convergence of sequences of time-local approximate conditions to the exact boundary condition, and subsequently estimate the error in the solutions obtained using these approximations. We show that for finite times the Padé approximants proposed by Engquist and Majda lead to exponential convergence if the solution is smooth, but that good long-time error estimates cannot hold for spatially local conditions. Applications in fluid dynamics are also discussed.

  16. Generalized weighted likelihood density estimators with application to finite mixture of exponential family distributions

    PubMed Central

    Zhan, Tingting; Chevoneva, Inna; Iglewicz, Boris

    2010-01-01

    The family of weighted likelihood estimators largely overlaps with minimum divergence estimators. They are robust to data contamination compared to the MLE. We define the class of generalized weighted likelihood estimators (GWLE), provide its influence function and discuss the efficiency requirements. We introduce a new truncated cubic-inverse weight, which is both first- and second-order efficient and more robust than previously reported weights. We also discuss new ways of selecting the smoothing bandwidth and weighted starting values for the iterative algorithm. The advantage of the truncated cubic-inverse weight is illustrated in a simulation study of a three-component normal mixture model with large overlap and heavy contamination. A real data example is also provided. PMID:20835375

  17. Double incontinence in a cohort of nulliparous pregnant women.

    PubMed

    Espuña-Pons, Montserrat; Solans-Domènech, Maite; Sánchez, Emília

    2012-11-01

    To estimate the frequency of double incontinence, and to identify associated risk factors during pregnancy and postpartum in previously continent nulliparous women. A cohort study of healthy, nulliparous, continent pregnant women attending the public healthcare system of Catalonia (Spain) was designed. The field work was conducted during the control visits of pregnancy, at the time of delivery, and in the postpartum period. Double incontinence was defined as a situation in which participants reported both urinary incontinence (UI) and anal incontinence on a self-administered questionnaire. Prevalence rates and their corresponding confidence intervals (95% CI) were calculated, as well as the impact on daily life. Multivariable logistic regressions for pregnancy and postpartum were estimated to assess the association of potential risk factors. The prevalence rate of double incontinence during pregnancy was 8.6% (95% CI: 7.0-10.3). Age over 35 years and a family history of UI were associated with a higher risk of double incontinence. After delivery, the prevalence rate decreased to 3.5% (95% CI: 2.4-4.6); only 6.7% of women with double incontinence in pregnancy had persistent symptoms in the early postpartum period. Instrumental vaginal deliveries carried 2.2 times more risk of double incontinence than spontaneous ones, and episiotomy also implied a higher risk of double incontinence. Symptoms of double incontinence are prevalent during first pregnancy; age and other intrinsic factors may favor the occurrence of double incontinence throughout gestation, while instrumental delivery and episiotomy increase the risk of double incontinence in the postpartum period. Copyright © 2012 Wiley Periodicals, Inc.

  18. Time vs. Money: A Quantitative Evaluation of Monitoring Frequency vs. Monitoring Duration.

    PubMed

    McHugh, Thomas E; Kulkarni, Poonam R; Newell, Charles J

    2016-09-01

    The National Research Council has estimated that over 126,000 contaminated groundwater sites are unlikely to achieve low µg/L clean-up goals in the foreseeable future. At these sites, cost-effective, long-term monitoring schemes are needed in order to understand the long-term changes in contaminant concentrations. Current monitoring optimization schemes rely on site-specific evaluations to optimize groundwater monitoring frequency. However, when using linear regression to estimate the long-term zero-order or first-order contaminant attenuation rate, the effect of monitoring frequency and monitoring duration on the accuracy and confidence of the estimated attenuation rate is not site-specific. For a fixed number of monitoring events, doubling the time between monitoring events (e.g., changing from quarterly monitoring to semi-annual monitoring) will double the accuracy of the estimated attenuation rate. For a fixed monitoring frequency (e.g., semi-annual monitoring), increasing the number of monitoring events by 60% will double the accuracy of the estimated attenuation rate. Combining these two factors, doubling the time between monitoring events (e.g., quarterly monitoring to semi-annual monitoring) while decreasing the total number of monitoring events by 38% will result in no change in the accuracy of the estimated attenuation rate. However, the time required to collect this dataset will increase by 25%. Understanding that the trade-off between monitoring frequency and monitoring duration is not site-specific should simplify the process of optimizing groundwater monitoring frequency at contaminated groundwater sites. © 2016 The Authors. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.
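
    The quoted percentages follow from the standard error of a regression slope: for n samples spaced dt apart, SE(slope) = sigma / sqrt(dt**2 * n * (n**2 - 1) / 12), i.e., approximately proportional to 1/(dt * n**1.5). A short numerical check, using a quarterly 40-event schedule as an invented baseline:

```python
# Verify the frequency/duration trade-off numbers from slope-SE scaling.
import numpy as np

def se_slope(n, dt, sigma=1.0):
    t = dt * np.arange(n)
    return sigma / np.sqrt(np.sum((t - t.mean()) ** 2))

base = se_slope(40, 0.25)            # quarterly sampling, 40 events
print(se_slope(40, 0.50) / base)     # ~0.50: doubling dt doubles accuracy
print(se_slope(64, 0.25) / base)     # ~0.50: 60% more events doubles accuracy
print(se_slope(25, 0.50) / base)     # ~1.00: dt doubled, events cut by ~38%
print((25 * 0.50) / (40 * 0.25))     # 1.25: monitoring duration grows by 25%
```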

  19. Application of a hybrid model to reduce bias and improve precision in population estimates for elk (Cervus elaphus) inhabiting a cold desert ecosystem

    USGS Publications Warehouse

    Schoenecker, Kathryn A.; Lubow, Bruce C.

    2016-01-01

    Accurately estimating the size of wildlife populations is critical to wildlife management and the conservation of species. Raw counts or "minimum counts" are still used as a basis for wildlife management decisions. Uncorrected raw counts are not only negatively biased, due to failure to account for undetected animals, but also provide no estimate of precision on which to judge the utility of counts. We applied a hybrid population estimation technique that combined sightability modeling, radio-collar-based mark-resight, and simultaneous double-count (double-observer) modeling to estimate the population size of elk in a high-elevation desert ecosystem. Combining several models maximizes the strengths of each individual model while minimizing their singular weaknesses. We collected data with aerial helicopter surveys of the elk population in the San Luis Valley and adjacent mountains in Colorado, USA in 2005 and 2007. We present estimates from 7 alternative analyses: 3 based on different methods for obtaining a raw count and 4 based on different statistical models to correct for sighting probability bias. The most reliable of these approaches is a hybrid double-observer sightability model (model MH), which uses the detection patterns of 2 independent observers in a helicopter plus telemetry-based detections of radio-collared elk groups. Data were fit to customized mark-resight models with individual sighting covariates. Error estimates were obtained by a bootstrapping procedure. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to double-observer modeling. The resulting population estimate corrected for multiple sources of undercount bias that, if left uncorrected, would have underestimated the true population size by as much as 22.9%. Our comparison of these alternative methods demonstrates how the various components of our method contribute to improving the final estimate and why each is necessary.

  20. Saturation capability of short phase modulated pulses facilitates the measurement of longitudinal relaxation times of quadrupolar nuclei.

    PubMed

    Makrinich, Maria; Gupta, Rupal; Polenova, Tatyana; Goldbourt, Amir

    The ability of various pulse types, which are commonly applied for distance measurements, to saturate or invert quadrupolar spin polarization has been compared by observing their effect on magnetization recovery curves under magic-angle spinning. A selective central-transition inversion pulse yields a bi-exponential recovery for a diamagnetic sample with a spin-3/2 nucleus, consistent with the existence of two processes: fluctuations of the electric field gradients with identical single-quantum (W1) and double-quantum (W2) quadrupolar-driven relaxation rates, and spin exchange between the central transition of one spin and the satellite transitions of a dipolar-coupled similar spin. Using a phase modulated pulse, developed for distance measurements in quadrupolar spins (Nimerovsky et al., JMR 244, 2014, 107-113) and suggested for achieving the complete saturation of all quadrupolar spin energy levels, a mono-exponential relaxation model fits the data, compatible with elimination of the spin exchange processes. Other pulses, such as an adiabatic pulse lasting one-third of a rotor period and a two-rotor-period long continuous-wave pulse, both used for distance measurements under special experimental conditions, yield good fits to bi-exponential functions with varying coefficients and time constants due to variations in initial conditions. Those values are a measure of the extent of saturation obtained with these pulses. An empirical fit of the recovery curves to a stretched exponential function can provide general recovery times. A stretching parameter very close to unity, as obtained for the phase modulated pulse but not for the other cases, suggests that in this case recovery times and longitudinal relaxation times are similar. The results are experimentally demonstrated for compounds containing 11B (spin-3/2) and 51V (spin-7/2). We propose that accurate spin-lattice relaxation rates can be measured by a short phase modulated pulse (<1-2 ms), similarly to the "true T1" measured by saturation with an asynchronous pulse train (Yesinowski, JMR 252, 2015, 135-144). Copyright © 2017 Elsevier Inc. All rights reserved.
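
    The stretched-exponential analysis mentioned above is straightforward to sketch: fit M(t) = M0 * (1 - exp(-(t/T1)**beta)) to a saturation-recovery curve and inspect beta. The data below are synthetic, not the 11B or 51V measurements; a beta near 1 is the signature that the fitted recovery time approximates the true T1.

```python
# Stretched-exponential fit of a synthetic saturation-recovery curve.
import numpy as np
from scipy.optimize import curve_fit

def stretched(t, m0, t1, beta):
    return m0 * (1.0 - np.exp(-(t / t1) ** beta))

rng = np.random.default_rng(2)
t = np.logspace(-2, 1.5, 25)                     # recovery delays (s)
m = stretched(t, 1.0, 2.0, 0.97) + rng.normal(0, 0.01, t.size)

(m0, t1, beta), _ = curve_fit(stretched, t, m, p0=(1.0, 1.0, 1.0))
print(f"T1 ~ {t1:.2f} s, beta ~ {beta:.2f}")     # beta ~ 1 -> near-true T1
```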

  1. Using the negative exponential distribution to quantitatively review the evidence on how rapidly the excess risk of ischaemic heart disease declines following quitting smoking.

    PubMed

    Lee, Peter N; Fry, John S; Hamling, Jan S

    2012-10-01

    No previous review has formally modelled the decline in IHD risk following quitting smoking. From PubMed searches and other sources we identified 15 prospective and eight case-control studies that compared IHD risk in current smokers, never smokers, and quitters by time since quitting, with some studies providing separate blocks of results by sex, age or amount smoked. For each of 41 independent blocks, we estimated, using the negative exponential model, the time H at which the excess risk had fallen to half that caused by smoking. Goodness-of-fit to the model was adequate for 35 blocks; the others showed a non-monotonic pattern of decline following quitting, with a variable pattern of misfit. After omitting one block with a current-smoker RR of 1.0, the combined H estimate was 4.40 (95% CI 3.26-5.95) years. There was considerable heterogeneity, H being <2 years for 10 blocks and >10 years for 12. H increased (p<0.001) with mean age at study start, but not clearly with other factors. Sensitivity analyses allowing for reverse causation, or varying the assumed midpoint times for the final open-ended quitting period, had little effect on the goodness-of-fit or on the combined estimate. The US Surgeon-General's view that the excess risk approximately halves after a year's abstinence seems over-optimistic. Copyright © 2012 Elsevier Inc. All rights reserved.
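
    In the negative exponential model the excess relative risk decays as exp(-lambda*t), so the half-life is H = ln(2)/lambda. With the combined H of 4.40 years and an assumed, purely illustrative current-smoker RR of 2.0, the sketch below shows why a one-year halving is over-optimistic: after one year only about 15% of the excess risk has gone.

```python
# Excess-risk decay under the negative exponential model, H = ln(2)/lambda.
import numpy as np

rr_smoker, H = 2.0, 4.40              # assumed RR; combined half-life (years)
lam = np.log(2) / H
for t in (1, 4.4, 10, 20):            # years since quitting
    rr_t = 1 + (rr_smoker - 1) * np.exp(-lam * t)
    print(f"t = {t:>4} y: RR ~ {rr_t:.2f}")
```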

  2. A Fourier method for the analysis of exponential decay curves.

    PubMed

    Provencher, S W

    1976-01-01

    A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "baseline," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high-accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involve only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily at equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.

  3. Spectral Gap Estimates in Mean Field Spin Glasses

    NASA Astrophysics Data System (ADS)

    Ben Arous, Gérard; Jagannath, Aukosh

    2018-05-01

    We show that mixing for local, reversible dynamics of mean field spin glasses is exponentially slow in the low temperature regime. We introduce a notion of free energy barriers for the overlap, and prove that their existence implies that the spectral gap is exponentially small, and thus that mixing is exponentially slow. We then exhibit sufficient conditions on the equilibrium Gibbs measure which guarantee the existence of these barriers, using the notion of the replicon eigenvalue and 2D Guerra-Talagrand bounds. We show how these sufficient conditions cover large classes of Ising spin models for reversible nearest-neighbor dynamics and spherical models for Langevin dynamics. Finally, in the case of Ising spins, Panchenko's recent rigorous calculation (Panchenko in Ann Probab 46(2):865-896, 2018) of the free energy for a system of "two real replicas" enables us to prove a quenched LDP for the overlap distribution, which gives us a wider criterion for slow mixing directly related to the Franz-Parisi-Virasoro approach (Franz et al. in J Phys I 2(10):1869-1880, 1992; Kurchan et al. in J Phys I 3(8):1819-1838, 1993). This condition holds over a wider range of temperatures.

  4. Redshift data and statistical inference

    NASA Technical Reports Server (NTRS)

    Newman, William I.; Haynes, Martha P.; Terzian, Yervant

    1994-01-01

    Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals or bins than is advisable by statistical theory, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimate of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.

  5. Estimation of the incubation period of influenza A (H1N1-2009) among imported cases: addressing censoring using outbreak data at the origin of importation.

    PubMed

    Nishiura, Hiroshi; Inaba, Hisashi

    2011-03-07

    Empirical estimates of the incubation period of influenza A (H1N1-2009) have been limited. We estimated the incubation period among confirmed imported cases who traveled to Japan from Hawaii during the early phase of the 2009 pandemic (n=72). We addressed censoring and employed an infection-age structured argument to explicitly model the daily frequency of illness onset after departure. We assumed uniform and exponential distributions for the frequency of exposure in Hawaii, and the hazard rate of infection for the latter assumption was retrieved from local outbreak data in Hawaii. The maximum likelihood estimates of the median incubation period range from 1.43 to 1.64 days under the different modeling assumptions, consistent with a published estimate based on a New York school outbreak. The likelihood values of the different modeling assumptions do not differ greatly from each other, although models with the exponential assumption yield slightly shorter incubation periods than those with the uniform exposure assumption. Differences between our proposed approach and a published method for doubly interval-censored analysis highlight the importance of accounting for the dependence of the frequency of exposure on the survival function of incubating individuals among imported cases. A truncation of the density function of the incubation period due to an absence of illness onset during the exposure period also needs to be considered. When the data-generating process is similar to that among imported cases, and when the incubation period is close to or shorter than the length of exposure, accounting for these aspects is critical for long exposure times. Copyright © 2010 Elsevier Ltd. All rights reserved.

  6. DOUBLE code simulations of emissivities of fast neutrals for different plasma observation view-lines of neutral particle analyzers on the COMPASS tokamak

    NASA Astrophysics Data System (ADS)

    Mitosinkova, K.; Tomes, M.; Stockel, J.; Varju, J.; Stano, M.

    2018-03-01

    Neutral particle analyzers (NPA) measure line-integrated energy spectra of fast neutral atoms escaping the tokamak plasma, which are a product of charge-exchange (CX) collisions of plasma ions with background neutrals. They can observe variations in the ion temperature Ti of non-thermal fast ions created by additional plasma heating. However, the plasma column which a fast atom has to pass through must be sufficiently short in comparison with the fast atom's mean free path. The COMPASS tokamak is currently equipped with one NPA installed at a tangential mid-plane port. This orientation is optimal for observing non-thermal fast ions. However, in this configuration the signal at energies useful for deriving Ti is lost in noise because the fast atoms' trajectories through the plasma are too long. Thus, a second NPA is planned to be connected for the purpose of measuring Ti. We analyzed different possible view-lines (perpendicular mid-plane, tangential mid-plane, and top view) for the second NPA using the DOUBLE Monte-Carlo code and compared the results with the performance of the present NPA with tangential orientation. The DOUBLE code provides the fast-atom emissivity function along the NPA view-line. The position of the median of these emissivity functions is related to the location from which the measured signal originates. Further, we compared the difference between the real central Ti used as a DOUBLE code input and the TiCX derived from the exponential decay of the simulated energy spectra. The advantages and disadvantages of each NPA location are discussed.

  7. Double sampling for stratification: a forest inventory application in the Interior West

    Treesearch

    David C. Chojnacky

    1998-01-01

    This paper documents the use of double sampling for Forest Inventory and Analysis (Forest Service, U.S. Department of Agriculture) inventories in the Interior West. Results show that 18 equations describe the entire inventory summarization process for estimating population totals and means, and their respective variances. Most equations are for the standard use of double sampling,...
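
    Since the abstract is truncated, the core double-sampling-for-stratification estimator is worth sketching with invented numbers: a large phase-1 sample (e.g., photo plots) estimates the stratum weights, and a small phase-2 field sample supplies the stratum means.

```python
# Textbook double sampling for stratification: ybar = sum_h of w_h * ybar_h.
import numpy as np

phase1_counts = {"forest": 620, "nonforest": 380}               # photo-plot tallies
phase2_values = {"forest": np.array([210., 180., 250., 195.]),  # field plots
                 "nonforest": np.array([15., 0., 30.])}

n1 = sum(phase1_counts.values())
y_bar = sum((phase1_counts[h] / n1) * phase2_values[h].mean()   # w_h * ybar_h
            for h in phase1_counts)
print(f"estimated population mean: {y_bar:.1f}")
```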

  8. Monitoring of Batch Industrial Crystallization with Growth, Nucleation, and Agglomeration. Part 2: Structure Design for State Estimation with Secondary Measurements

    PubMed Central

Porru, Marcella; Özkan, Leyla

    2017-01-01

    This work investigates the design of alternative monitoring tools based on state estimators for industrial crystallization systems with nucleation, growth, and agglomeration kinetics. The estimation problem is regarded as a structure design problem where the estimation model and the set of innovated states have to be chosen; the estimator is driven by the available measurements of secondary variables. On the basis of Robust Exponential estimability arguments, it is found that the concentration is distinguishable with temperature and solid fraction measurements while the crystal size distribution (CSD) is not. Accordingly, a state estimator structure is selected such that (i) the concentration (and other distinguishable states) are innovated by means of the secondary measurements processed with the geometric estimator (GE), and (ii) the CSD is estimated by means of a rigorous model in open loop mode. The proposed estimator has been tested through simulations showing good performance in the case of mismatch in the initial conditions, parametric plant-model mismatch, and noisy measurements. PMID:28890604

  10. SU-E-T-366: Estimation of Whole Body Dose From Cranial Irradiation From C and Perfexion Series Gamma Knife Units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srivastava, S; Indiana University School of Medicine, Indianapolis, IN, University Hospitals Case Medical Center, Cleveland, OH; Andersen, A

    2015-06-15

    Purpose: The Leksell Gamma Knife (GK) B & C series contains 201 Cobalt-60 sources with a helmet. The new model, Perfexion, uses 192 Cobalt-60 sources without a helmet, using the IRIS system for collimation and stereotactic guidance to deliver SRS to brain tumors. In this study, the relative dose to extracranial organs at risk (OARs) is measured in phantom for the Perfexion and C-series GK. Materials & Methods: Measurements were performed in a Rando anthropomorphic phantom on both systems using a large ion chamber (Keithley-175) for each collimator. The Keithley-175 cc ion chamber was sandwiched between phantom slices at various locations in the phantom to correspond to different extracranial OARs (thyroid, heart, kidney, ovary and testis, etc.). The dose measurement was repeated with OSL detectors for each position and collimator. Results: A large variation is observed in the normalized dose between these two systems. The dose beyond the housing falls off exponentially for Perfexion. Dose beyond the C-series GK housing falls off exponentially from 0-20 cm, remains relatively constant from 20-40 cm, and then falls off again with distance but less rapidly. The variation of extracranial dose with distance for each collimator is found to be parallel for both systems. Conclusion: Whole body dose is found to vary significantly between these systems. It is important to measure the extracranial dose, especially for young patients. It is estimated that dose falls off exponentially from the GK housing and is about 1% for large collimators at 75 cm. The dose is two orders of magnitude smaller for the 4 mm collimator. However, even this small dose may be radiologically significant for the patient.

  11. Anaerobic Threshold and Salivary α-amylase during Incremental Exercise.

    PubMed

    Akizuki, Kazunori; Yazaki, Syouichirou; Echizenya, Yuki; Ohashi, Yukari

    2014-07-01

    [Purpose] The purpose of this study was to clarify the validity of salivary α-amylase as a method of quickly estimating anaerobic threshold and to establish the relationship between salivary α-amylase and double-product breakpoint in order to create a way to adjust exercise intensity to a safe and effective range. [Subjects and Methods] Eleven healthy young adults performed an incremental exercise test using a cycle ergometer. During the incremental exercise test, oxygen consumption, carbon dioxide production, and ventilatory equivalent were measured using a breath-by-breath gas analyzer. Systolic blood pressure and heart rate were measured to calculate the double product, from which double-product breakpoint was determined. Salivary α-amylase was measured to calculate the salivary threshold. [Results] One-way ANOVA revealed no significant differences among workloads at the anaerobic threshold, double-product breakpoint, and salivary threshold. Significant correlations were found between anaerobic threshold and salivary threshold and between anaerobic threshold and double-product breakpoint. [Conclusion] As a method for estimating anaerobic threshold, salivary threshold was as good as or better than determination of double-product breakpoint because the correlation between anaerobic threshold and salivary threshold was higher than the correlation between anaerobic threshold and double-product breakpoint. Therefore, salivary threshold is a useful index of anaerobic threshold during an incremental workload.

  12. 15-digit accuracy calculations of Ambartsumian-Chandrasekhar's H-functions for four-term phase functions with the double-exponential formula

    NASA Astrophysics Data System (ADS)

    Kawabata, Kiyoshi

    2018-01-01

    We have established an iterative scheme to calculate with 15-digit accuracy the numerical values of Ambartsumian-Chandrasekhar's H-functions for anisotropic scattering characterized by the four-term phase function: the method incorporates some advantageous features of the iterative procedure of Kawabata (Astrophys. Space Sci. 358:32, 2015) and the double-exponential integration formula (DE-formula) of Takahashi and Mori (Publ. Res. Inst. Math. Sci. Kyoto Univ. 9:721, 1974), which proved highly effective in Kawabata (Astrophys. Space Sci. 361:373, 2016). Actual calculations of the H-functions have been carried out employing 27 selected cases of the phase function, 56 values of the single scattering albedo π0, and 36 values of an angular variable μ (= cos θ), with θ being the zenith angle specifying the direction of incidence and/or emergence of radiation. Partial results obtained for conservative isotropic scattering, Rayleigh scattering, and anisotropic scattering due to a full four-term phase function are presented. They indicate that it is important to simultaneously verify the accuracy of the numerical values of the H-functions for μ<0.05, a domain often neglected in tabulation. As a sample application of the isotropic scattering H-function, an attempt is made in the Appendix to reproduce, by iteratively solving the Ambartsumian equation, the values of the plane and spherical albedos of a semi-infinite, homogeneous atmosphere calculated by Rogovtsov and Borovik (J. Quant. Spectrosc. Radiat. Transf. 183:128, 2016), who employed their analytical representations for these quantities and the single-term and two-term Henyey-Greenstein phase functions of appreciably high degrees of anisotropy. While our results are in satisfactory agreement with theirs, our procedure is in need of a faster algorithm to routinely deal with problems involving highly anisotropic phase functions giving rise to near-conservative scattering.
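
    The double-exponential (tanh-sinh) rule referenced above maps the integration interval so that each node's contribution decays double-exponentially toward the endpoints, which is why endpoint singularities are handled gracefully. A minimal numpy sketch of the rule on (-1, 1) follows; the step size and truncation range are illustrative choices, not the values used in the paper.

        import numpy as np

        def tanh_sinh(f, n=30, h=0.1):
            """Tanh-sinh (double-exponential) quadrature over (-1, 1).

            Nodes x_k = tanh((pi/2)*sinh(k*h)) cluster double-exponentially
            near +-1; n*h must stay small enough that 1 - x_k**2 does not
            underflow to zero when f is singular at the endpoints.
            """
            t = np.arange(-n, n + 1) * h
            u = 0.5 * np.pi * np.sinh(t)
            x = np.tanh(u)                                  # abscissae
            w = 0.5 * np.pi * np.cosh(t) / np.cosh(u) ** 2  # weights
            return h * np.sum(w * f(x))

        # Example with endpoint singularities: integral of 1/sqrt(1-x^2) = pi.
        print(tanh_sinh(lambda x: 1.0 / np.sqrt(1.0 - x * x)), np.pi)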

  13. Carbon dioxide emissions from semi-arid soils amended with biochar alone or combined with mineral and organic fertilizers.

    PubMed

    Fernández, José M; Nieto, M Aurora; López-de-Sá, Esther G; Gascó, Gabriel; Méndez, Ana; Plaza, César

    2014-06-01

    Semi-arid soils cover a significant area of Earth's land surface and typically contain large amounts of inorganic C. Determining the effects of biochar additions on CO2 emissions from semi-arid soils is therefore essential for evaluating the potential of biochar as a climate change mitigation strategy. Here, we measured the CO2 that evolved from semi-arid calcareous soils amended with biochar at rates of 0 and 20 t ha⁻¹ in a full factorial combination with three different fertilizers (mineral fertilizer, municipal solid waste compost, and sewage sludge) applied at four rates (equivalent to 0, 75, 150, and 225 kg potentially available N ha⁻¹) during 182 days of aerobic incubation. A double exponential model, which describes cumulative CO2 emissions from two active soil C compartments with different turnover rates (one relatively stable and the other more labile), was found to fit all the experimental datasets very well. In general, the organic fertilizers increased the size and decomposition rate of the stable and labile soil C pools. In contrast, biochar addition had no effects on any of the double exponential model parameters and did not interact with the effects ascribed to the type and rate of fertilizer. After 182 days of incubation, soil organic and microbial biomass C contents tended to increase with increasing application rates of organic fertilizer, especially of compost, whereas increasing the rate of mineral fertilizer tended to suppress microbial biomass. Biochar was found to increase both organic and inorganic C contents in soil and not to interact with the effects of type and rate of fertilizer on C fractions. As a whole, our results suggest that the use of biochar as an enhancer of semi-arid soils, either alone or combined with mineral and organic fertilizers, is unlikely to increase abiotic and biotic soil CO2 emissions.
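
    A two-pool double exponential emission model of the kind described here is straightforward to fit by nonlinear least squares. The sketch below assumes the cumulative form C(t) = C1(1 - e^(-k1 t)) + C2(1 - e^(-k2 t)); the pool sizes and rate constants are entirely hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_pool(t, c_labile, k_labile, c_stable, k_stable):
            """Cumulative CO2 evolved from a labile and a stable C pool."""
            return (c_labile * (1.0 - np.exp(-k_labile * t))
                    + c_stable * (1.0 - np.exp(-k_stable * t)))

        t = np.arange(0, 183, 7.0)                     # days of incubation
        obs = two_pool(t, 420.0, 0.15, 900.0, 0.004)   # hypothetical data
        obs += np.random.default_rng(1).normal(0.0, 5.0, t.size)

        p0 = (300.0, 0.1, 500.0, 0.01)                 # rough starting guesses
        popt, pcov = curve_fit(two_pool, t, obs, p0=p0, maxfev=10000)
        print(popt)                                    # recovered pool sizes and rates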

  14. dPotFit: A computer program to fit diatomic molecule spectral data to potential energy functions

    NASA Astrophysics Data System (ADS)

    Le Roy, Robert J.

    2017-01-01

    This paper describes program dPotFit, which performs least-squares fits of diatomic molecule spectroscopic data consisting of any combination of microwave, infrared or electronic vibrational bands, fluorescence series, and tunneling predissociation level widths, involving one or more electronic states and one or more isotopologs, and, for appropriate systems, second virial coefficient data, to determine analytic potential energy functions defining the observed levels and other properties of each state. Four families of analytical potential functions are available for fitting in the current version of dPotFit: the Expanded Morse Oscillator (EMO) function, the Morse/Long-Range (MLR) function, the Double-Exponential/Long-Range (DELR) function, and the 'Generalized Potential Energy Function' (GPEF) of Šurkus, which incorporates a variety of polynomial functional forms. In addition, dPotFit allows sets of experimental data to be tested against predictions generated from three other families of analytic functions, namely the 'Hannover Polynomial' (or 'X-expansion') function and the 'Tang-Toennies' and Scoles-Aziz 'HFD' exponential-plus-van der Waals functions, and from interpolation-smoothed pointwise potential energies, such as those obtained from ab initio or RKR calculations. dPotFit also allows the fits to determine atomic-mass-dependent Born-Oppenheimer breakdown functions, and singlet-state Λ-doubling or ²Σ splitting radial strength functions, for one or more electronic states. dPotFit always reports both the 95% confidence limit uncertainty and the 'sensitivity' of each fitted parameter; the latter indicates the number of significant digits that must be retained when rounding fitted parameters, in order to ensure that predictions remain in full agreement with experiment. It will also, if requested, apply a 'sequential rounding and refitting' procedure to yield a final parameter set defined by a minimum number of significant digits, while ensuring no significant loss of accuracy in the predictions yielded by those parameters.

  15. Processing of DNA double strand breaks by alternative non-homologous end-joining in hyperacetylated chromatin.

    PubMed

    Manova, Vasilissa; Singh, Satyendra K; Iliakis, George

    2012-08-22

    Mammalian cells employ at least two subpathways of non-homologous end-joining for the repair of ionizing radiation induced DNA double strand breaks: the canonical DNA-PK-dependent form of non-homologous end-joining (D-NHEJ) and an alternative, slowly operating, error-prone backup pathway (B-NHEJ). In contrast to D-NHEJ, which operates with similar efficiency throughout the cell cycle, B-NHEJ operates more efficiently in G2-phase. Notably, B-NHEJ also shows a strong and as yet unexplained dependence on growth activity and is markedly compromised in serum-deprived cells, or in cells that enter the plateau-phase of growth. The molecular mechanisms underpinning this response remain unknown. Since chromatin structure or changes in chromatin structure are prime candidate B-NHEJ modulators, we study here the effect of chromatin hyperacetylation, induced either by HDAC2 knockdown or by treatment with the HDAC inhibitor TSA, on the repair of IR-induced DSBs by B-NHEJ. siRNA-mediated knockdown of HDAC2 fails to provoke histone hyperacetylation in Lig4-/- MEFs and has no detectable effect on B-NHEJ function. Treatment with TSA, which inhibits multiple HDACs, causes efficient, reversible chromatin hyperacetylation in Lig4-/- MEFs, as well as in human HCT116 Lig4-/- cells and the human glioma cell line M059K. The IR yield of DSBs in TSA-treated cells remains similar to that of untreated cells despite the expected chromatin relaxation. In addition, chromatin hyperacetylation leaves unchanged the repair of DSBs by B-NHEJ in irradiated exponentially growing, or plateau-phase, cells. Notably, under the experimental conditions employed here, chromatin hyperacetylation fails to detectably modulate B-NHEJ in M059K cells as well. In summary, the results show that chromatin acetylation or deacetylation does not affect the kinetics of alternative NHEJ in all types of cells examined, both in exponentially growing and serum-deprived cultures. We conclude that parameters beyond chromatin acetylation determine B-NHEJ efficiency in the plateau-phase of growth.

  16. Uncertainty Quantification using Exponential Epi-Splines

    DTIC Science & Technology

    2013-06-01

    Leibler divergence. The choice of κ in applications can be informed by the fact that the Kullback-Leibler divergence between two normal densities, ϕ1... of random output quantities of interest. The framework systematically incorporates hard information derived from physics-based sensors, field test ... information, and determines the 'best' estimate within that family. Bayesian estimation makes use of prior soft information

  17. Effects of Economy Type and Nicotine on the Essential Value of Food in Rats

    ERIC Educational Resources Information Center

    Cassidy, Rachel N.; Dallery, Jesse

    2012-01-01

    The exponential demand equation proposed by Hursh and Silberberg (2008) provides an estimate of the essential value of a good as a function of price. The model predicts that essential value should remain constant across changes in the magnitude of a reinforcer, but may change as a function of motivational operations. In Experiment 1, rats' demand…
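
    For reference, the Hursh and Silberberg (2008) exponential demand equation takes the form log10 Q = log10 Q0 + k(e^(-alpha*Q0*C) - 1), where alpha, the rate of decline in log consumption with unit price C, indexes essential value (smaller alpha means more essential). A minimal sketch with hypothetical parameter values:

        import numpy as np

        def log_demand(price, q0, alpha, k=2.0):
            """Hursh & Silberberg (2008): log10 consumption at a unit price.

            q0    : consumption at zero price
            alpha : rate of decline in consumption (essential value index)
            k     : range of the data in log10 units (often fixed across fits)
            """
            return np.log10(q0) + k * (np.exp(-alpha * q0 * price) - 1.0)

        prices = np.array([1, 2, 5, 10, 20, 50], dtype=float)  # responses per unit
        print(10 ** log_demand(prices, q0=100.0, alpha=0.0002))  # demand curve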

  18. The Financial Consequences of Silence: The Lack of Dialogue on Community College Fundraising Websites

    ERIC Educational Resources Information Center

    Bucci, David A.; Waters, Richard D.

    2014-01-01

    Donations made to the American nonprofit sector have grown exponentially in the past 10 years, to an estimated $298.4 billion in 2011 (Giving USA Foundation, 2012). Although consultants offer many different solutions as to how organizations can best capitalize on the e-philanthropy trend, communication theory may offer some of the most…

  19. TREFEX: Trend Estimation and Change Detection in the Response of MOX Gas Sensors

    PubMed Central

    Pashami, Sepideh; Lilienthal, Achim J.; Schaffernicht, Erik; Trincavelli, Marco

    2013-01-01

    Many applications of metal oxide gas sensors can benefit from reliable algorithms to detect significant changes in the sensor response. Significant changes indicate a change in the emission modality of a distant gas source and occur due to a sudden change of concentration or exposure to a different compound. As a consequence of turbulent gas transport and the relatively slow response and recovery times of metal oxide sensors, their response in open sampling configuration exhibits strong fluctuations that interfere with the changes of interest. In this paper we introduce TREFEX, a novel change point detection algorithm, especially designed for metal oxide gas sensors in an open sampling system. TREFEX models the response of MOX sensors as a piecewise exponential signal and considers the junctions between consecutive exponentials as change points. We formulate non-linear trend filtering and change point detection as a parameter-free convex optimization problem for single sensors and sensor arrays. We evaluate the performance of the TREFEX algorithm experimentally for different metal oxide sensors and several gas emission profiles. A comparison with the previously proposed GLR method shows a clearly superior performance of the TREFEX algorithm both in detection performance and in estimating the change time. PMID:23736853
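
    A piecewise exponential is piecewise linear in the log domain, so one way to mimic the paper's convex formulation is l1 trend filtering of the log response: an l1 penalty on second differences produces a piecewise-linear fit whose kinks are candidate change points. A sketch of that idea, assuming cvxpy is available; the weight lam and the detection threshold are tuning choices, not values from the paper.

        import numpy as np
        import cvxpy as cp

        rng = np.random.default_rng(0)
        t = np.arange(300.0)
        # Log of a piecewise exponential: decay, then a slower recovery.
        true_log = np.where(t < 150, -t / 80.0, -150.0 / 80.0 + (t - 150.0) / 200.0)
        y = true_log + 0.05 * rng.normal(size=t.size)

        x = cp.Variable(t.size)
        lam = 50.0  # smoothing weight (tuning parameter)
        # l1 penalty on second differences -> piecewise-linear fit in log domain
        cp.Problem(cp.Minimize(cp.sum_squares(y - x)
                               + lam * cp.norm1(cp.diff(x, 2)))).solve()

        kinks = np.where(np.abs(np.diff(x.value, 2)) > 1e-4)[0]  # change points
        print(kinks)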

  20. Photoacoustic signal attenuation analysis for the assessment of thin layers thickness in paintings

    NASA Astrophysics Data System (ADS)

    Tserevelakis, George J.; Dal Fovo, Alice; Melessanaki, Krystalia; Fontana, Raffaella; Zacharakis, Giannis

    2018-03-01

    This study introduces a novel method for the thickness estimation of thin paint layers in works of art, based on photoacoustic signal attenuation analysis (PAcSAA). Ad hoc designed samples with acrylic paint layers (Primary Red Magenta, Cadmium Yellow, Ultramarine Blue) of various thicknesses on glass substrates were realized for the specific application. After characterization by Optical Coherence Tomography imaging, samples were irradiated at the back side using low energy nanosecond laser pulses of 532 nm wavelength. Photoacoustic waves undergo a frequency-dependent exponential attenuation through the paint layer, before being detected by a broadband ultrasonic transducer. Frequency analysis of the recorded time-domain signals allows for the estimation of the average transmitted frequency function, which shows an exponential decay with the layer thickness. Ultrasonic attenuation models were obtained for each pigment and used to fit the data acquired on an inhomogeneous painted mock-up simulating a real canvas painting. Thickness evaluation through PAcSAA resulted in excellent agreement with cross-section analysis with a conventional brightfield microscope. The results of the current study demonstrate the potential of the proposed PAcSAA method for the non-destructive stratigraphic analysis of painted artworks.

  1. Charge relaxation and dynamics in organic semiconductors

    NASA Astrophysics Data System (ADS)

    Kwok, H. L.

    2006-08-01

    Charge relaxation in dispersive materials is often described in terms of the stretched exponential function (Kohlrausch law). The process can be explained using a "hopping" model which, in principle, also applies to charge transport such as current conduction. This work analyzed reported transient photoconductivity data on functionalized pentacene single crystals using a geometric hopping model developed by B. Sturman et al. and extracted values (or ranges of values) of the materials parameters relevant to charge relaxation as well as charge transport. Using the correlated disorder model (CDM), we estimated values of the carrier mobility for the pentacene samples. From these results, we observed the following: i) the transport site density appeared to be of the same order of magnitude as the carrier density; ii) it was possible to extract lower bound values on the materials parameters linked to the transport process; and iii) by matching the simulated charge decay to the transient photoconductivity data, we were able to refine estimates of the materials parameters. The data also allowed us to simulate the stretched exponential decay. Our observations suggested that the stretching index and the carrier mobility were related. Physically, such interdependence would allow one to demarcate between localized molecular interactions and distant Coulomb interactions.
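
    Extracting a stretching index like the one discussed here is usually done by fitting the Kohlrausch-Williams-Watts form I(t) = I0 exp(-(t/tau)^beta) to the measured decay. A minimal sketch on synthetic data; all parameter values are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        def kww(t, i0, tau, beta):
            """Kohlrausch (stretched exponential) decay: I0*exp(-(t/tau)**beta)."""
            return i0 * np.exp(-(t / tau) ** beta)

        t = np.linspace(0.01, 10.0, 200)   # time, arbitrary units
        data = kww(t, 1.0, 2.0, 0.6)
        data += 0.01 * np.random.default_rng(2).normal(size=t.size)

        popt, _ = curve_fit(kww, t, data, p0=(1.0, 1.0, 0.8))
        print(popt)                        # recovered I0, tau, stretching index beta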

  2. DICOM structured report to track patient's radiation dose to organs from abdominal CT exam

    NASA Astrophysics Data System (ADS)

    Morioka, Craig; Turner, Adam; McNitt-Gray, Michael; Zankl, Maria; Meng, Frank; El-Saden, Suzie

    2011-03-01

    The dramatic increase of diagnostic imaging capabilities over the past decade has contributed to increased radiation exposure to patient populations. Several factors have contributed to the increase in imaging procedures: wider availability of imaging modalities, increase in technical capabilities, rise in demand by patients and clinicians, favorable reimbursement, and lack of guidelines to control utilization. The primary focus of this research is to provide in-depth information about the radiation doses that patients receive as a result of CT exams, with the initial investigation involving abdominal CT exams. Current dose measurement methods (i.e., CTDIvol, the Computed Tomography Dose Index) do not provide direct information about a patient's organ dose. We have developed a method to determine CTDIvol-normalized organ doses using a set of organ-specific exponential regression equations. These exponential equations, along with the measured CTDIvol, are used to calculate organ dose estimates from abdominal CT scans for eight different patient models. For each patient, organ dose and CTDIvol were estimated for an abdominal CT scan. We then modified the DICOM Radiation Dose Structured Report (RDSR) to store the pertinent patient information on radiation dose to their abdominal organs.
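
    The abstract does not give the regression coefficients, but the scheme described amounts to scaling the scan CTDIvol by an organ-specific exponential function of patient size. A sketch of that idea; the organs, the A/B coefficients, and the size metric below are all hypothetical placeholders, not the paper's fitted values.

        import numpy as np

        # Hypothetical organ-specific coefficients (A, B): the CTDIvol-normalized
        # organ dose is modeled as A * exp(-B * patient_size).
        coeffs = {"liver": (3.20, 0.037), "kidneys": (3.10, 0.035), "ovaries": (2.90, 0.034)}

        def organ_dose(ctdi_vol_mgy, effective_diameter_cm, organ):
            """Estimate organ dose (mGy) from scan CTDIvol and patient size."""
            a, b = coeffs[organ]
            return ctdi_vol_mgy * a * np.exp(-b * effective_diameter_cm)

        print(organ_dose(12.0, 28.0, "liver"))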

  3. Snowmelt runoff modeling in simulation and forecasting modes with the Martinec-Rango model

    NASA Technical Reports Server (NTRS)

    Shafer, B.; Jones, E. B.; Frick, D. M. (Principal Investigator)

    1982-01-01

    The Martinec-Rango snowmelt runoff model was applied to two watersheds in the Rio Grande basin, Colorado: the South Fork Rio Grande, a drainage encompassing 216 sq mi without reservoirs or diversions, and the Rio Grande above Del Norte, a drainage encompassing 1,320 sq mi without major reservoirs. The model was successfully applied to both watersheds when run in a simulation mode for the period 1973-79. This period included both high and low runoff seasons. Central to the adaptation of the model to run in a forecast mode was the need to develop a technique to forecast the shape of the snow cover depletion curves between satellite data points. Four separate approaches were investigated: simple linear estimation, multiple regression, parabolic exponential, and type curve. Only the parabolic exponential and type curve methods were run on the South Fork and Rio Grande watersheds for the 1980 runoff season using satellite snow cover updates when available. Although reasonable forecasts were obtained in certain situations, neither method seemed ready for truly operational forecasts, possibly due to a large amount of estimated climatic data for one or two primary base stations during the 1980 season.

  4. Fresh and Dry Mass Estimates of Hermetia illucens (Linnaeus, 1758) (Diptera: Stratiomyidae) Larvae Associated with Swine Decomposition in Urban Area of Central Amazonia.

    PubMed

    Barros, L M; Martins, R T; Ferreira-Keppler, R L; Gutjahr, A L N

    2017-08-04

    Information on biomass is essential for calculating growth rates and may be employed in assessing the medicolegal and economic importance of Hermetia illucens (Linnaeus, 1758). Although biomass is essential to understanding many ecological processes, it is not easily measured. Biomass may be determined by directly weighing or indirectly through regression models of fresh/dry mass versus body dimensions. In this study, we evaluated the association between morphometry and fresh/dry mass of immature H. illucens using linear, exponential, and power regression models. We measured the width and length of the cephalic capsule, overall body length, and width of the largest abdominal segment of 280 larvae. Overall body length and width of the largest abdominal segment were the best predictors of biomass. Exponential models best fitted body dimensions and biomass (both fresh and dry), followed by power and linear models. In all models, fresh and dry biomass were strongly correlated (>75%). Values estimated by the models did not differ from observed ones, and prediction power varied from 27 to 79%. Accordingly, the correspondence between biomass and body dimensions should facilitate and motivate the development of applied studies involving H. illucens in the Amazon region.
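
    Power and exponential mass-length models of the kind compared here become ordinary linear fits after a log transform, which is a common way to estimate them. A sketch on synthetic data; the allometric coefficients below are hypothetical, not the paper's.

        import numpy as np

        rng = np.random.default_rng(3)
        length = rng.uniform(5.0, 20.0, 80)                         # body length (mm)
        mass = 0.002 * length ** 2.8 * rng.lognormal(0.0, 0.1, 80)  # hypothetical dry mass (g)

        # Power model m = a * L^b: linear in log-log space.
        b_pow, log_a_pow = np.polyfit(np.log(length), np.log(mass), 1)
        # Exponential model m = a * exp(b * L): linear in semi-log space.
        b_exp, log_a_exp = np.polyfit(length, np.log(mass), 1)
        # Linear model m = a + b * L, for comparison.
        b_lin, a_lin = np.polyfit(length, mass, 1)

        print(np.exp(log_a_pow), b_pow)   # should recover roughly 0.002 and 2.8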

  5. Life of LED-Based White Light Sources

    NASA Astrophysics Data System (ADS)

    Narendran, Nadarajah; Gu, Yimin

    2005-09-01

    Even though light-emitting diodes (LEDs) may have a very long life, poorly designed LED lighting systems can experience a short life. Because heat at the p-n junction is one of the main factors affecting the life of the LED, knowing the relationship between life and heat allows LED system manufacturers to design and build long-lasting systems. In this study, several white LEDs from the same manufacturer were subjected to life tests at different ambient temperatures. The exponential decay of light output as a function of time provided a convenient method to rapidly estimate life by data extrapolation. The life of these LEDs decreases in an exponential manner with increasing temperature. In a second experiment, several high-power white LEDs from different manufacturers were life-tested under similar conditions. Results show that the different products have significantly different life values.

  6. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.

    PubMed

    Brette, Romain; Gerstner, Wulfram

    2005-11-01

    We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model predicts correctly the timing of 96% of the spikes (+/-2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
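
    The two-dimensional model combines an exponential spike-initiation current with an adaptation variable w. A minimal forward-Euler sketch follows; the parameter values are illustrative textbook-style numbers (in pF, nS, mV, ms, pA), not the ones fitted in the paper.

        import numpy as np

        def adex(i_inj, dt=0.1, t_max=500.0):
            """Forward-Euler AdEx neuron; units are pF, nS, mV, ms, pA."""
            C, gL, EL, VT, DT = 281.0, 30.0, -70.6, -50.4, 2.0   # membrane
            a, tau_w, b, Vr = 4.0, 144.0, 80.5, -70.6            # adaptation/reset
            V, w, spikes = EL, 0.0, []
            for step in range(int(t_max / dt)):
                dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT)
                      - w + i_inj) / C
                dw = (a * (V - EL) - w) / tau_w
                V, w = V + dt * dV, w + dt * dw
                if V > 0.0:                # spike detected: reset and adapt
                    V, w = Vr, w + b
                    spikes.append(step * dt)
            return spikes

        print(adex(800.0))                 # spike times (ms) for an 800 pA step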

  7. Numerical investigation of MHD flow with Soret and Dufour effect

    NASA Astrophysics Data System (ADS)

    Hayat, Tasawar; Nasir, Tehreem; Khan, Muhammad Ijaz; Alsaedi, Ahmed

    2018-03-01

    This paper describes the flow due to an exponentially curved stretchable sheet subject to Soret and Dufour effects; nonlinear velocity is considered, and the flow is induced by the stretching of the sheet. The fluid is electrically conducting under a constant applied magnetic field. The governing flow expressions are reduced to ordinary differential equations and then tackled by a numerical technique (built-in shooting). The impacts of various flow variables on the dimensionless velocity, concentration and temperature fields are graphically presented and discussed in detail. The skin friction coefficient and the Sherwood and Nusselt numbers are studied through graphs. Furthermore, it is observed that the Soret and Dufour variables regulate the heat and mass transfer rates. It is also noteworthy that the velocity decays for a higher magnetic variable, and the magnitude of the skin friction decays via the curvature and magnetic variables. The mass transfer rate also increases for higher estimates of the curvature parameter and Schmidt number.

  8. Exponential instability in the fractional Calderón problem

    NASA Astrophysics Data System (ADS)

    Rüland, Angkana; Salo, Mikko

    2018-04-01

    In this paper we prove the exponential instability of the fractional Calderón problem and thus prove the optimality of the logarithmic stability estimate from Rüland and Salo (2017 arXiv:1708.06294). In order to infer this result, we follow the strategy introduced by Mandache in (2001 Inverse Problems 17 1435) for the standard Calderón problem. Here we exploit a close relation between the fractional Calderón problem and the classical Poisson operator. Moreover, using the construction of a suitable orthonormal basis, we also prove (almost) optimality of the Runge approximation result for the fractional Laplacian, which was derived in Rüland and Salo (2017 arXiv:1708.06294). Finally, in one dimension, we show a close relation between the fractional Calderón problem and the truncated Hilbert transform.

  9. Effects of Nitroglycerin on Regional Myocardial Blood Flow in Coronary Artery Disease

    PubMed Central

    Horwitz, Lawrence D.; Gorlin, Richard; Taylor, Warren J.; Kemp, Harvey G.

    1971-01-01

    Regional myocardial blood flow before and after sublingual nitroglycerin was measured in 10 patients with coronary artery disease. During thoracotomy, 133Xe was injected directly into the subepicardium in diseased regions of the anterior left ventricular wall, and washout rates were recorded with a scintillation counter. All disappearance curves were closely approximated by two exponential decays analyzed as two parallel flow systems by the compartmental method. The appearance of a double exponential decay pattern in diseased regions suggests that the slow phase was associated with collateral blood flow, although nonhomogeneous myocardium-to-blood partition coefficients for xenon cannot be excluded. Nitroglycerin increased the rapid phase flow in 9 of 10 patients and the slow flow in 7 of 10 patients. Average flow increased in 9 of the 10 patients (P < 0.01). Mean rapid phase flow in the control state was 110 ml/100 g per min and after nitroglycerin increased to 132 ml/100 g per min (P < 0.01); slow phase flow increased from 12 ml/100 g per min to 15 ml/100 g per min (P < 0.05). It is concluded that, under these conditions, nitroglycerin improves perfusion in regions of diseased myocardium in patients with coronary artery disease. PMID:4999635

  10. Stochastic sampled-data control for synchronization of complex dynamical networks with control packet loss and additive time-varying delays.

    PubMed

    Rakkiyappan, R; Sakthivel, N; Cao, Jinde

    2015-06-01

    This study examines the exponential synchronization of complex dynamical networks with control packet loss and additive time-varying delays. Additionally, a sampled-data controller with a time-varying sampling period is considered, which is assumed to switch between m different values in a random way with given probabilities. Then, a novel Lyapunov-Krasovskii functional (LKF) with triple integral terms is constructed, and by using Jensen's inequality and the reciprocally convex approach, sufficient conditions under which the dynamical network is exponentially mean-square stable are derived. When applying Jensen's inequality to partition double integral terms in the derivation of the linear matrix inequality (LMI) conditions, a new kind of linear combination of positive functions weighted by the inverses of squared convex parameters appears. In order to handle such a combination, an effective method is introduced by extending the lower bound lemma. To design the sampled-data controller, the synchronization error system is represented as a switched system. Based on the derived LMI conditions and the average dwell-time method, sufficient conditions for the synchronization of the switched error system are derived in terms of LMIs. Finally, a numerical example is employed to show the effectiveness of the proposed methods.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stagi, Luigi, E-mail: luigi.stagi@dsf.unica.it; Chiriu, Daniele; Carbonaro, Carlo M.

    The phenomenon of luminescence enhancement was studied in melamine-Y2O3:Tb hybrids. Terbium-doped Y2O3 mesoporous nanowires were synthesized by a hydrothermal method. X-ray diffraction patterns and Raman scattering spectra testified to the realization of a cubic crystal phase. The organic-inorganic melamine-Y2O3:Tb3+ hybrid system was successfully obtained by a vapour deposition method. Vibrational Raman-active modes of the organic counterpart were investigated in order to verify the achievement of the hybrid system. Photoluminescence excitation and photoluminescence spectra, performed in the region between 250 and 350 nm, suggest a strong interaction between melamine and terbium ions. In particular, a remarkable improvement of the ⁵D₄→⁷F_J rare-earth emission (at about 542 nm) of about 10²-fold was observed and attributed to an efficient organic-to-Tb energy transfer. The energy transfer mechanism was studied by the use of time-resolved photoluminescence measurements. The melamine lifetime undergoes a significant decrease when adsorbed on oxide surfaces, which was connected to a sensitization mechanism. The detailed analysis of the time-decay profile of the terbium radiative recombination shows a change from a double-exponential law toward a single-exponential one. Its correlation with surface defects and non-radiative recombination is thus discussed.

  12. Profiler - A Fast and Versatile New Program for Decomposing Galaxy Light Profiles

    NASA Astrophysics Data System (ADS)

    Ciambur, Bogdan C.

    2016-12-01

    I introduce Profiler, a user-friendly program designed to analyse the radial surface brightness profiles of galaxies. With an intuitive graphical user interface, Profiler can accurately model galaxies of a broad range of morphological types, with various parametric functions routinely employed in the field (Sérsic, core-Sérsic, exponential, Gaussian, Moffat, and Ferrers). In addition to these, Profiler can employ the broken exponential model for disc truncations or anti-truncations, and two special cases of the edge-on disc model: along the disc's major or minor axis. The convolution of (circular or elliptical) models with the point spread function is performed in 2D, and offers a choice between Gaussian, Moffat or a user-provided profile for the point spread function. Profiler is optimised to work with galaxy light profiles obtained from isophotal measurements, which allow for radial gradients in the geometric parameters of the isophotes, and are thus often better at capturing the total light than 2D image-fitting programs. Additionally, the 1D approach is generally less computationally expensive and more stable. I demonstrate Profiler's features by decomposing three case-study galaxies: the cored elliptical galaxy NGC 3348, the nucleated dwarf Seyfert I galaxy Pox 52, and NGC 2549, a double-barred galaxy with an edge-on, truncated disc.
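
    Several of the parametric components named here have simple closed forms. For instance, a Sérsic bulge plus an exponential disc can be sketched in a few lines; the b_n approximation and the example parameters below are standard textbook choices, not Profiler's internals.

        import numpy as np

        def sersic(r, i_e, r_e, n):
            """Sersic profile I(R) = Ie * exp(-b_n * ((R/Re)**(1/n) - 1)).

            b_n ~ 1.9992*n - 0.3271 is a common approximation for 0.5 < n < 10.
            """
            b_n = 1.9992 * n - 0.3271
            return i_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

        def exp_disc(r, i_0, h):
            """Exponential disc I(R) = I0 * exp(-R/h)."""
            return i_0 * np.exp(-r / h)

        r = np.linspace(0.1, 50.0, 500)     # radius, arbitrary units
        profile = sersic(r, 100.0, 2.0, 4.0) + exp_disc(r, 20.0, 10.0)  # bulge + disc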

  13. Protochlorophyll complexes with similar steady-state fluorescence characteristics can differ in fluorescence lifetimes. A model study in Triton X-100.

    PubMed

    Myśliwa-Kurdziel, Beata; Solymosi, Katalin; Kruk, Jerzy; Böddi, Béla; Strzałka, Kazimierz

    2007-03-01

    The steady-state and time-resolved fluorescence characteristics of protochlorophyll (Pchl) dissolved in neat Triton X-100 and in Triton X-100 micelles were investigated, and the fluorescence lifetimes of different Pchl spectral forms were studied. By varying the concentration of Pchl or diluting the micellar solutions either with a buffer or with a micellar solution, Pchl complexes emitting at 631-634, 645-655, and 680-692 nm and above 700 nm were prepared, in ratios that varied from one another. The fluorescence decay of the 631-634 nm emitting (monomeric) form had a mono-exponential character with a 5.4-ns fluorescence lifetime. The long-wavelength Pchl complexes (aggregates) had two fluorescence lifetime values, within ranges of 1.4-3.9 ns and 0.15-0.84 ns, which showed high variability in different environments. Depending on the conditions, either mono- or double-exponential fluorescence decay was found for the fluorescence band at 680-685 nm. These data show that despite their very similar steady-state fluorescence properties, Pchl complexes can differ in fluorescence lifetimes, which may reflect different molecular structures, intrinsic geometries or different molecular interactions. This underlines the importance of complex spectroscopic analysis for a precise description of native and artificial chlorophyllous pigment forms.

  14. Understanding quantum tunneling using diffusion Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.

    2018-03-01

    In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1 /Δ2 , where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1 /Δ , i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.

  15. Scattering of plane evanescent waves by buried cylinders: Modeling the coupling to guided waves and resonances

    NASA Astrophysics Data System (ADS)

    Marston, Philip L.

    2003-04-01

    The coupling of sound to buried targets can be associated with acoustic evanescent waves when the sea bottom is smooth. To understand the excitation of guided waves on buried fluid cylinders and shells by acoustic evanescent waves and the associated target resonances, the two-dimensional partial wave series for the scattering is found for normal incidence in an unbounded medium. The shell formulation uses the simplifications of thin-shell dynamics. The expansion of the incident wave becomes a double summation with products of modified and ordinary Bessel functions [P. L. Marston, J. Acoust. Soc. Am. 111, 2378 (2002)]. Unlike the case of an ordinary incident wave, the counterpropagating partial waves of the same angular order have unequal magnitudes when the incident wave is evanescent. This is a consequence of the exponential dependence of the incident wave amplitude on depth. Some consequences of this imbalance of partial-wave amplitudes are given by modifying previous ray theory for the scattering [P. L. Marston and N. H. Sun, J. Acoust. Soc. Am. 97, 777-783 (1995)]. The exponential dependence of the scattering on the location of a scatterer was previously demonstrated in air [T. J. Matula and P. L. Marston, J. Acoust. Soc. Am. 93, 1192-1195 (1993)].

  16. Attitude/attitude-rate estimation from GPS differential phase measurements using integrated-rate parameters

    NASA Technical Reports Server (NTRS)

    Oshman, Yaakov; Markley, Landis

    1998-01-01

    A sequential filtering algorithm is presented for attitude and attitude-rate estimation from Global Positioning System (GPS) differential carrier phase measurements. A third-order, minimal-parameter method for solving the attitude matrix kinematic equation is used to parameterize the filter's state, which renders the resulting estimator computationally efficient. Borrowing from tracking theory concepts, the angular acceleration is modeled as an exponentially autocorrelated stochastic process, thus avoiding the use of the uncertain spacecraft dynamic model. The new formulation facilitates the use of aiding vector observations in a unified filtering algorithm, which can enhance the method's robustness and accuracy. Numerical examples are used to demonstrate the performance of the method.
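
    An exponentially autocorrelated (first-order Gauss-Markov) process such as the angular-acceleration model mentioned here has an exact discrete-time form, which is what a sequential filter would propagate. A sketch with hypothetical correlation time and strength:

        import numpy as np

        def simulate_gauss_markov(tau, sigma, dt, n, rng=np.random.default_rng(0)):
            """First-order Gauss-Markov process with autocorrelation
            sigma**2 * exp(-|dt|/tau).

            Exact discretization: x[k+1] = phi * x[k] + w[k],
            with phi = exp(-dt/tau) and Var(w) = sigma**2 * (1 - phi**2).
            """
            phi = np.exp(-dt / tau)
            q = sigma ** 2 * (1.0 - phi ** 2)
            x = np.zeros(n)
            for k in range(n - 1):
                x[k + 1] = phi * x[k] + rng.normal(0.0, np.sqrt(q))
            return x

        accel = simulate_gauss_markov(tau=30.0, sigma=1e-4, dt=1.0, n=600)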

  17. A recursive solution for a fading memory filter derived from Kalman filter theory

    NASA Technical Reports Server (NTRS)

    Statman, J. I.

    1986-01-01

    A simple recursive solution for a class of fading memory tracking filters is presented. A fading memory filter provides estimates of filter states based on past measurements, similar to a traditional Kalman filter. Unlike a Kalman filter, an exponentially decaying weight is applied to older measurements, discounting their effect on present state estimates. It is shown that Kalman filters and fading memory filters are closely related solutions to a general least squares estimator problem. Closed form filter transfer functions are derived for a time invariant, steady state, fading memory filter. These can be applied in loop filter implementation of the Deep Space Network (DSN) Advanced Receiver carrier phase locked loop (PLL).
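
    The exponentially decaying weighting described here is the defining feature of fading-memory trackers. As an illustration (not the DSN loop filter itself), the standard critically damped fading-memory g-h design sets g = 1 - beta^2 and h = (1 - beta)^2 for a discount factor beta:

        def fading_memory_gh(zs, dt, beta=0.8):
            """Fading-memory g-h tracker: older residuals are discounted by beta^age.

            g = 1 - beta**2 and h = (1 - beta)**2 give the critically damped design.
            """
            g, h = 1.0 - beta ** 2, (1.0 - beta) ** 2
            x, v = zs[0], 0.0              # state: position and rate
            out = []
            for z in zs[1:]:
                x_pred = x + dt * v        # predict
                r = z - x_pred             # measurement residual
                x = x_pred + g * r         # correct position
                v = v + (h / dt) * r       # correct rate
                out.append((x, v))
            return out

        print(fading_memory_gh([0.0, 1.1, 1.9, 3.2, 3.9, 5.1], dt=1.0)[-1])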

  18. Asynchronous State Estimation for Discrete-Time Switched Complex Networks With Communication Constraints.

    PubMed

    Zhang, Dan; Wang, Qing-Guo; Srinivasan, Dipti; Li, Hongyi; Yu, Li

    2018-05-01

    This paper is concerned with the asynchronous state estimation for a class of discrete-time switched complex networks with communication constraints. An asynchronous estimator is designed to overcome the difficulty that each node cannot access the topology/coupling information. Also, the event-based communication, signal quantization, and random packet dropout problems are studied due to the limited communication resources. With the help of switched system theory and by resorting to a stochastic system analysis method, a sufficient condition is proposed to guarantee the exponential stability of the estimation error system in the mean-square sense, and a prescribed performance level is also ensured. The characterization of the desired estimator gains is derived in terms of the solution to a convex optimization problem. Finally, the effectiveness of the proposed design approach is demonstrated by a simulation example.

  19. Parameter Estimation in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark; Colarco, Peter

    2004-01-01

    In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation, from image sequences, of the physical parameters that govern the underlying processes. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario, this technique is applied to modeled dust data. In this case vertically integrated dust concentrations were used to derive wind information. Those results can be compared to the wind vector fields which served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
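
    In its simplest motion-estimation form, the structure tensor aggregates image-gradient outer products and solves the brightness-constancy normal equations. A minimal numpy sketch for a single global displacement between two frames; a real application would window the tensor locally rather than summing over the whole image.

        import numpy as np

        def structure_tensor_flow(f0, f1):
            """Estimate one global displacement (vx, vy) between two frames.

            Builds the 2x2 structure tensor J = sum(grad f grad f^T) and solves
            J v = -sum(grad f * f_t), the least-squares brightness-constancy fit.
            """
            fy, fx = np.gradient(f0)      # spatial derivatives (rows=y, cols=x)
            ft = f1 - f0                  # temporal derivative
            J = np.array([[np.sum(fx * fx), np.sum(fx * fy)],
                          [np.sum(fx * fy), np.sum(fy * fy)]])
            b = -np.array([np.sum(fx * ft), np.sum(fy * ft)])
            return np.linalg.solve(J, b)

        # Synthetic test: shift a smooth blob by one pixel in x.
        y, x = np.mgrid[0:64, 0:64]
        frame0 = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 50.0)
        frame1 = np.exp(-((x - 33.0) ** 2 + (y - 32.0) ** 2) / 50.0)
        print(structure_tensor_flow(frame0, frame1))   # approx (1, 0)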

  20. Sliding mode control based on Kalman filter dynamic estimation of battery SOC

    NASA Astrophysics Data System (ADS)

    He, Dongmeia; Hou, Enguang; Qiao, Xin; Liu, Guangmin

    2018-06-01

    Accurate and rapid estimation of the state of charge (SOC) of lithium-ion batteries is a key technology in battery management systems. In this paper, an exponential-reaching-law sliding-mode variable structure control algorithm based on a Kalman filter is proposed to estimate the state of charge of a Li-ion battery in a dynamic nonlinear system. An RC equivalent circuit model is established, and the model equations with a specific structure are given. The proposed Kalman filter sliding-mode structure is used to estimate the state of charge of the battery in the battery model; chattering can be avoided and the estimation performance improved. The simulation results show that the proposed Kalman filter sliding-mode control estimates the battery state of charge with good accuracy compared with the ordinary Kalman filter, with errors within 3%.
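
    The exponential reaching law named here has the form ds/dt = -k*s - eps*sign(s): the linear term gives fast exponential convergence far from the sliding surface, while the switching term keeps the trajectory on it. A toy sketch that drives an SOC estimate with a terminal-voltage-error sliding surface; the OCV curve, gains, and time step are all hypothetical.

        import numpy as np

        def smc_soc_step(soc_est, v_meas, v_model, dt, k=2.0, eps=0.05):
            """One sliding-mode correction step with the exponential reaching law.

            s = terminal-voltage error; ds/dt = -k*s - eps*sign(s) drives s to
            zero, and the same correction is injected into the SOC estimate
            (the injection gain is a design choice).
            """
            s = v_meas - v_model(soc_est)
            return soc_est + dt * (k * s + eps * np.sign(s))

        v_ocv = lambda soc: 3.2 + 0.9 * soc          # hypothetical OCV curve (V)
        soc = 0.50
        for _ in range(200):                          # true SOC is 0.70 here
            soc = smc_soc_step(soc, v_meas=v_ocv(0.70), v_model=v_ocv, dt=0.01)
        print(soc)                                    # converges near 0.70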

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uresk, D.W.; Gilbert, R.O.; Rickard, W.H.

    Big sagebrush (Artemisia tridentata) was subjected to a double sampling procedure to obtain reliable phytomass estimates for leaves, flowering stalks, live wood, dead wood, various combinations of the preceding, and total phytomass. Coefficients of determination (R²) between the independent variable and the various phytomass categories ranged from 0.45 to 0.93. Total phytomass was approximately 69 ± 16 (± S.E.) g/m². Reductions in the variance of the phytomass estimates ranged from 33 percent to 80 percent using double sampling assuming optimum allocation.

  2. Disentangling inhibition-based and retrieval-based aftereffects of distractors: Cognitive versus motor processes.

    PubMed

    Singh, Tarini; Laub, Ruth; Burgard, Jan Pablo; Frings, Christian

    2018-05-01

    Selective attention refers to the ability to selectively act upon relevant information at the expense of irrelevant information. Yet, in many experimental tasks, what happens to the representation of the irrelevant information is still debated. Typically, two approaches to distractor processing have been suggested, namely distractor inhibition and distractor-based retrieval. However, it is also typical that both processes are hard to disentangle. For instance, in the negative priming literature (for a review, see Frings, Schneider, & Fox, 2015) this has been a continuous debate since the early 1980s. In the present study, we attempted to prove that both processes exist, but that they reflect distractor processing at different levels of representation: distractor inhibition impacts stimulus representation, whereas distractor-based retrieval impacts mainly motor processes. We investigated both processes in a distractor-priming task, which enables an independent measurement of both processes. For our argument that both processes impact different levels of distractor representation, we estimated the exponential parameter (τ) and Gaussian components (μ, σ) of the exponential-Gaussian reaction-time (RT) distribution, which have previously been used to independently test the effects of cognitive and motor processes (e.g., Moutsopoulou & Waszak, 2012). The distractor-based retrieval effect was evident for the Gaussian component, which is typically discussed as reflecting motor processes, but not for the exponential parameter, whereas the inhibition component was evident for the exponential parameter, which is typically discussed as reflecting cognitive processes, but not for the Gaussian parameter.
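
    The exponential-Gaussian (ex-Gaussian) RT distribution is the convolution of a normal component (μ, σ) with an exponential tail (τ); scipy exposes it as exponnorm with shape K = τ/σ, so the three components can be recovered by maximum likelihood. A sketch on synthetic RTs with hypothetical values:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        # Hypothetical RTs (ms): Gaussian (mu, sigma) plus an exponential tail (tau).
        mu, sigma, tau = 450.0, 40.0, 120.0
        rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

        # scipy's exponnorm is the ex-Gaussian with shape K = tau / sigma.
        K, loc, scale = stats.exponnorm.fit(rts)
        print(loc, scale, K * scale)       # estimates of mu, sigma, tau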

  3. Physiological responses at five estimates of critical velocity.

    PubMed

    Bull, Anthony J; Housh, Terry J; Johnson, Glen O; Rana, Sharon R

    2008-04-01

    The purpose of this study was to compare critical velocity (CV) estimates from five mathematical models, and to examine the oxygen uptake (VO(2)) and heart rate (HR) responses during treadmill runs at the five estimates of CV. Ten subjects (six males and four females) performed one incremental test to determine maximal oxygen consumption (VO(2max)) and four or five randomly ordered constant-velocity trials on a treadmill for the estimation of CV. Five mathematical models were used to estimate CV for each subject, including two linear, two nonlinear, and an exponential model. Up to five randomly ordered runs to exhaustion were performed by each subject at treadmill velocities that corresponded to the five CV estimates, and VO(2) and HR responses were monitored throughout each trial. The 3-parameter, nonlinear (Non-3) model produced CV estimates that were significantly (P < 0.05) less than the other four models. During runs at CV estimates, five subjects did not complete 60 min at their estimate from the Non-3 model, nine did not complete 60 min at their estimate from the Non-2 model, and no subjects completed 60 min at any estimate from the other three models. The mean HR value (179 +/- 18 beats min(-1), HR(peak)) at the end of runs at CV using the Non-3 model was significantly less than the maximal HR (195 +/- 7 beats min(-1), HR(max)) achieved during the incremental trial to exhaustion. However, mean HR(peak) values from runs at all other CV estimates were not significantly different from HR(max). Furthermore, the data indicated that mean VO(2) values increased during runs at CV estimates from the third minute to the end of exercise for all models, and that these increases in VO(2) (range = 367-458 ml min(-1)) were significantly greater than that typically associated with O(2) drift (approximately 200 ml min(-1)) for all but the exponential model, indicating a VO(2) slow component associated with CV estimates from four of the five models. However, the mean VO(2) values at the end of exercise during the runs at CV estimates for all five mathematical models were significantly less than the mean VO(2max) value. These results suggest that, in most cases, CV estimated from the five models does not represent a fatigueless task. In addition, the mean CV estimates from the five models varied by 18%, and four of the five mean CV estimates were within the heavy exercise domain. Therefore, CV would not represent the demarcation point between the heavy and severe exercise domains.
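
    The two linear models in this literature are usually the total-distance and velocity-inverse-time forms, D = CV*t + D' and v = CV + D'*(1/t), both of which reduce to a straight-line fit. A sketch with hypothetical exhaustion-trial data:

        import numpy as np

        # Hypothetical exhaustion trials: treadmill velocity (m/min), time (min).
        v = np.array([260.0, 275.0, 290.0, 305.0])
        t = np.array([14.5, 8.9, 6.0, 4.2])

        # Linear model 1: total distance = CV * time + D' (anaerobic capacity).
        cv1, d_prime1 = np.polyfit(t, v * t, 1)
        # Linear model 2: velocity = CV + D' * (1 / time).
        d_prime2, cv2 = np.polyfit(1.0 / t, v, 1)

        print(cv1, cv2)   # the two CV estimates (m/min)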

  4. Estimation of heart rate and heart rate variability from pulse oximeter recordings using localized model fitting.

    PubMed

    Wadehn, Federico; Carnal, David; Loeliger, Hans-Andrea

    2015-08-01

    Heart rate variability is one of the key parameters for assessing the health status of a subject's cardiovascular system. This paper presents a local model fitting algorithm used for finding single heart beats in photoplethysmogram recordings. The local fit of exponentially decaying cosines of frequencies within the physiological range is used to detect the presence of a heart beat. Using 42 subjects from the CapnoBase database, the average heart rate error was 0.16 BPM and the standard deviation of the absolute estimation error was 0.24 BPM.

  5. Parameter Transient Behavior Analysis on Fault Tolerant Control System

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine (Technical Monitor); Shin, Jong-Yeob

    2003-01-01

    In a fault tolerant control (FTC) system, a parameter-varying FTC law is reconfigured based on fault parameters estimated by fault detection and isolation (FDI) modules. FDI modules require some time to detect fault occurrences in aero-vehicle dynamics. This paper illustrates the analysis of an FTC system based on estimated fault parameter transient behavior, which may include false fault detections during a short time interval. Using Lyapunov function analysis, the upper bound of an induced-L2 norm of the FTC system performance is calculated as a function of the fault detection time and the exponential decay rate of the Lyapunov function.

  6. Analytical model for release calculations in solid thin-foils ISOL targets

    NASA Astrophysics Data System (ADS)

    Egoriti, L.; Boeckx, S.; Ghys, L.; Houngbo, D.; Popescu, L.

    2016-10-01

    A detailed analytical model has been developed to simulate isotope-release curves from thin-foil ISOL targets. It involves the separate modeling of diffusion and effusion inside the target. The former has been modeled using both Fick's first and second laws. The latter, effusion from the surface of the target material to the end of the ionizer, was simulated with the Monte Carlo code MolFlow+. The calculated delay-time distribution for this process was then fitted using a double-exponential function. The release curve obtained from the convolution of diffusion and effusion shows good agreement with experimental data from two different target geometries used at ISOLDE. Moreover, the experimental yields are well reproduced when combining the release fraction with the calculated in-target production.
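
    The convolution step described here is easy to prototype numerically: a diffusion release density (reduced below to its leading exponential mode, an assumption made for brevity) is convolved with a double-exponential effusion delay distribution, and the cumulative sum gives the release curve. All rate constants below are hypothetical.

        import numpy as np

        t = np.arange(0.0, 10.0, 0.01)        # time (s)
        dt = t[1] - t[0]

        # Diffusion release density (leading mode, rate lam_d) ...
        lam_d = 1.5
        p_diff = lam_d * np.exp(-lam_d * t)
        # ... convolved with a double-exponential effusion delay distribution.
        a, lam1, lam2 = 0.7, 8.0, 0.9
        p_eff = a * lam1 * np.exp(-lam1 * t) + (1 - a) * lam2 * np.exp(-lam2 * t)

        release_density = np.convolve(p_diff, p_eff)[: t.size] * dt
        release_fraction = np.cumsum(release_density) * dt   # release curve
        print(release_fraction[-1])                          # approaches 1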

  7. Multiple relaxations of the cluster surface diffusion in a homoepitaxial SrTiO3 layer

    NASA Astrophysics Data System (ADS)

    Woo, Chang-Su; Chu, Kanghyun; Song, Jong-Hyun; Yang, Chan-Ho

    2018-03-01

    We examine the surface diffusion process of adatomic clusters on a (001)-oriented SrTiO3 single crystal using reflection high-energy electron diffraction (RHEED). We find that the recovery curve of the RHEED intensity acquired after a homoepitaxial half-layer growth can be accurately fitted with a double exponential function, indicating the existence of two dominant relaxation mechanisms. The characteristic relaxation times at selected growth temperatures are investigated to determine diffusion activation barriers of 0.67 eV and 0.91 eV, respectively. A Monte Carlo simulation of the cluster hopping model suggests that the decrease in the number of dimeric and trimeric clusters during surface diffusion is the origin of the observed relaxation phenomena.
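
    Extracting an activation barrier from temperature-dependent relaxation times of this kind is a standard Arrhenius analysis: τ = τ0·exp(Ea/(kB·T)), so ln(τ) is linear in 1/(kB·T). A sketch with hypothetical relaxation times (not the paper's data):

        import numpy as np

        k_B = 8.617e-5                                   # Boltzmann constant (eV/K)

        # Hypothetical relaxation times (s) from double-exponential fits
        T = np.array([900.0, 950.0, 1000.0, 1050.0])     # growth temperature (K)
        tau = np.array([42.0, 17.0, 7.8, 3.9])

        # Arrhenius: ln(tau) is linear in 1/(k_B*T); the slope is Ea.
        Ea, ln_tau0 = np.polyfit(1.0 / (k_B * T), np.log(tau), 1)
        print(Ea)                                        # activation barrier (eV)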

  8. Infrared photorefractive effect in doped KNbO3 crystals

    NASA Astrophysics Data System (ADS)

    Medrano, C.; Zgonik, M.; Liakatas, I.; Günter, P.

    1996-11-01

    The photorefractive sensitivity of potassium niobate crystals doped with Ce, Co, Cu, Fe, Mn, Ni, and Rh and double-doped with Mn and Rh is investigated over an extended spectral range. We present experimental evidence on extrinsic properties important for the photorefractive effect, such as absorption and effective trap density. Photorefractive gratings are investigated with two-wave mixing experiments. Results on exponential gain, response time, and photorefractive sensitivity at near-infrared wavelengths are reported. The best photorefractive sensitivities at 860 and 1064 nm were obtained in crystals doped with Rh, Fe, Mn, and Mn-Rh. This makes them suitable for applications at laser-diode wavelengths; at 1064 nm, however, Rh:KNbO3 shows a better photorefractive sensitivity than the others.

  9. Quantum group structure and local fields in the algebraic approach to 2D gravity

    NASA Astrophysics Data System (ADS)

    Schnittger, J.

    1995-07-01

    This review contains a summary of the work by J.-L. Gervais and the author on the operator approach to 2d gravity. Special emphasis is placed on the construction of local observables — the Liouville exponentials and the Liouville field itself — and the underlying algebra of chiral vertex operators. The double quantum group structure arising from the presence of two screening charges is discussed and the generalized algebra and field operators are derived. In the last part, we show that our construction gives rise to a natural definition of a quantum tau function, which is a noncommutative version of the classical group-theoretic representation of the Liouville fields by Leznov and Saveliev.

  10. CleAir Monitoring System for Particulate Matter: A Case in the Napoleonic Museum in Rome

    PubMed Central

    Bonacquisti, Valerio; Di Michele, Marta; Frasca, Francesca; Chianese, Angelo; Siani, Anna Maria

    2017-01-01

    Monitoring the air particulate concentration both outdoors and indoors has become an increasingly relevant issue over the past few decades. An innovative, fully automatic monitoring system called CleAir is presented. The system goes beyond the traditional technique (gravimetric analysis) by allowing a double monitoring approach: traditional gravimetric analysis as well as optical spectroscopic analysis of the scattering on the same filters in steady-state conditions. The experimental data are interpreted in terms of light percolation through highly scattering matter by means of a stretched-exponential evolution. CleAir has been applied to investigate the daily distribution of particulate matter within the Napoleonic Museum in Rome as a test case. PMID:28892016

  11. Early stages of Ostwald ripening

    NASA Astrophysics Data System (ADS)

    Shneidman, Vitaly A.

    2013-07-01

    The Becker-Döring (BD) nucleation equation is known to predict a narrow double-exponential front (DEF) in the distribution of growing particles over sizes, which is due to early transient effects. When mass conservation is included, nucleation is eventually exhausted while independent growth is replaced by ripening. Despite the enormous difference in the associated time scales, and the resulting demand on numerics, within the generalized BD model the early DEF is shown to be crucial for the selection of the unique self-similar Lifshitz-Slyozov-Wagner asymptotic regime. Being preserved till the latest stages of growth, the DEF provides a universal part of the initial conditions for the ripening problem, regardless of the mass exchange mechanism between the nucleus and the matrix.

  12. Temperature dependence of negative bias under illumination stress and recovery in amorphous indium gallium zinc oxide thin film transistors

    NASA Astrophysics Data System (ADS)

    Hossain Chowdhury, Md Delwar; Migliorato, Piero; Jang, Jin

    2013-04-01

    We have investigated the temperature dependence of negative bias under illumination stress and its recovery. The transfer characteristics exhibit a non-rigid shift towards negative gate voltages. For both stress and recovery, the voltage shift in deep depletion is twice that in accumulation. The results support the mechanism we previously proposed, namely the creation and annealing of a double donor, likely to be an oxygen vacancy. The time dependence of stress and recovery can be fitted to stretched exponentials. Both processes are thermally activated, with activation energies of 1.06 eV and 1.25 eV for stress and recovery, respectively. A potential energy diagram is proposed to explain the results.
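
    A small worked example of the thermal-activation step: if a characteristic relaxation time obeys tau = tau0 * exp(Ea / kT), two measurements at different temperatures determine the activation energy. The numbers below are hypothetical, not the paper's data:

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

# Hypothetical relaxation times extracted from stretched-exponential fits
# at two temperatures; a thermally activated process obeys tau = tau0*exp(Ea/kT).
T1, tau1 = 300.0, 1.0e4   # K, s
T2, tau2 = 350.0, 9.0e2   # K, s

# Taking the log of the ratio eliminates tau0 and isolates Ea.
Ea = k_B * np.log(tau1 / tau2) / (1.0 / T1 - 1.0 / T2)
print(f"activation energy ~ {Ea:.2f} eV")
```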

  13. Urbanisation and infectious diseases in a globalised world.

    PubMed

    Alirol, Emilie; Getaz, Laurent; Stoll, Beat; Chappuis, François; Loutan, Louis

    2011-02-01

    The world is becoming urban. The UN predicts that the world's urban population will almost double from 3·3 billion in 2007 to 6·3 billion in 2050. Most of this increase will be in developing countries. Exponential urban growth is having a profound effect on global health. Because of international travel and migration, cities are becoming important hubs for the transmission of infectious diseases, as shown by recent pandemics. Physicians in urban environments in developing and developed countries need to be aware of the changes in infectious diseases associated with urbanisation. Furthermore, health should be a major consideration in town planning to ensure urbanisation works to reduce the burden of infectious diseases in the future. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Limits on silicon nanoelectronics for terascale integration.

    PubMed

    Meindl, J D; Chen, Q; Davis, J A

    2001-09-14

    Throughout the past four decades, silicon semiconductor technology has advanced at exponential rates in both performance and productivity. Concerns have been raised, however, that the limits of silicon technology may soon be reached. Analysis of fundamental, material, device, circuit, and system limits reveals that silicon technology has an enormous remaining potential to achieve terascale integration (TSI) of more than 1 trillion transistors per chip. Such massive-scale integration is feasible assuming the development and economical mass production of double-gate metal-oxide-semiconductor field effect transistors with gate oxide thickness of about 1 nanometer, silicon channel thickness of about 3 nanometers, and channel length of about 10 nanometers. The development of interconnecting wires for these transistors presents a major challenge to the achievement of nanoelectronics for TSI.

  15. Estimation of total discharged mass from the phreatic eruption of Ontake Volcano, central Japan, on September 27, 2014

    NASA Astrophysics Data System (ADS)

    Takarada, Shinji; Oikawa, Teruki; Furukawa, Ryuta; Hoshizumi, Hideo; Itoh, Jun'ichi; Geshi, Nobuo; Miyagi, Isoji

    2016-08-01

    The total mass discharged by the phreatic eruption of Ontake Volcano, central Japan, on September 27, 2014, was estimated using several methods. The estimated discharged mass was 1.2 × 10⁶ t (segment integration method), 8.9 × 10⁵ t (Pyle's exponential method), and varied from 8.6 × 10³ to 2.5 × 10⁶ t (Hayakawa's single isopach method). The segment integration and Pyle's exponential methods gave similar values. The single isopach method, however, gave a wide range of results depending on which contour was used. Therefore, the total discharged mass of the 2014 eruption is estimated at between 8.9 × 10⁵ and 1.2 × 10⁶ t. More than 90% of the total mass accumulated within the proximal area, which shows how important it is to include a proximal-area field survey in the total mass estimation of phreatic eruptions. A detailed isopleth mass distribution map was prepared covering as far as 85 km from the source. The main ash-fall dispersal was ENE in the proximal and medial areas and E in the distal area. The secondary distribution lobes also extended to the S and NW proximally, reflecting the effects of elutriated ash and surge deposits from pyroclastic density currents during the phreatic eruption. The total discharged mass of the 1979 phreatic eruption was also calculated for comparison. The resulting mass of 1.9 × 10⁶ t (using the segment integration method) indicates that it was about 1.6-2.1 times larger than the 2014 eruption. The estimated average discharged mass flux rate was 1.7 × 10⁸ kg/h for the 2014 eruption and 1.0 × 10⁸ kg/h for the 1979 eruption. One possible reason for the higher flux rate of the 2014 eruption is the occurrence of pyroclastic density currents at the summit area.

  16. Inference of Epidemiological Dynamics Based on Simulated Phylogenies Using Birth-Death and Coalescent Models

    PubMed Central

    Boskova, Veronika; Bonhoeffer, Sebastian; Stadler, Tanja

    2014-01-01

    Quantifying epidemiological dynamics is crucial for understanding and forecasting the spread of an epidemic. The coalescent and the birth-death model are used interchangeably to infer epidemiological parameters from the genealogical relationships of the pathogen population under study, which in turn are inferred from the pathogen genetic sequencing data. To compare the performance of these widely applied models, we performed a simulation study. We simulated phylogenetic trees under the constant rate birth-death model and the coalescent model with a deterministic exponentially growing infected population. For each tree, we re-estimated the epidemiological parameters using both a birth-death and a coalescent based method, implemented as an MCMC procedure in BEAST v2.0. In our analyses that estimate the growth rate of an epidemic based on simulated birth-death trees, the point estimates such as the maximum a posteriori/maximum likelihood estimates are not very different. However, the estimates of uncertainty are very different. The birth-death model had a higher coverage than the coalescent model, i.e. contained the true value in the highest posterior density (HPD) interval more often (2–13% vs. 31–75% error). The coverage of the coalescent decreases with decreasing basic reproductive ratio and increasing sampling probability of infecteds. We hypothesize that the biases in the coalescent are due to the assumption of deterministic rather than stochastic population size changes. Both methods performed reasonably well when analyzing trees simulated under the coalescent. The methods can also identify other key epidemiological parameters as long as one of the parameters is fixed to its true value. In summary, when using genetic data to estimate epidemic dynamics, our results suggest that the birth-death method will be less sensitive to population fluctuations of early outbreaks than the coalescent method that assumes a deterministic exponentially growing infected population. PMID:25375100

  17. SPECTRAL data-based estimation of soil heat flux

    USGS Publications Warehouse

    Singh, Ramesh K.; Irmak, A.; Walter-Shea, Elizabeth; Verma, S.B.; Suyker, A.E.

    2011-01-01

    Numerous existing spectral-based soil heat flux (G) models have shown wide variation in performance for maize and soybean cropping systems in Nebraska, indicating the need for localized calibration and model development. The objectives of this article are to develop a semi-empirical model to estimate G from a normalized difference vegetation index (NDVI) and net radiation (Rn) for maize (Zea mays L.) and soybean (Glycine max L.) fields in the Great Plains, and present the suitability of the developed model to estimate G under similar and different soil and management conditions. Soil heat fluxes measured in both irrigated and rainfed fields in eastern and south-central Nebraska were used for model development and validation. An exponential model that uses NDVI and Rn was found to be the best to estimate G based on r2 values. The effect of geographic location, crop, and water management practices were used to develop semi-empirical models under four case studies. Each case study has the same exponential model structure but a different set of coefficients and exponents to represent the crop, soil, and management practices. Results showed that the semi-empirical models can be used effectively for G estimation for nearby fields with similar soil properties for independent years, regardless of differences in crop type, crop rotation, and irrigation practices, provided that the crop residue from the previous year is more than 4000 kg ha-1. The coefficients calibrated from particular fields can be used at nearby fields in order to capture temporal variation in G. However, there is a need for further investigation of the models to account for the interaction effects of crop rotation and irrigation. Validation at an independent site having different soil and crop management practices showed the limitation of the semi-empirical model in estimating G under different soil and environment conditions.

  18. Estimation of Moisture Content of Forest Canopy and Floor from SAR Data Part II: Trunk-Ground Double-Bounce Case

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.; Saatchi, S.

    1996-01-01

    Several scattering mechanisms contribute to the total radar backscatter cross section measured by the synthetic aperture radar. These are volume scattering, trunk-ground double-bounce scattering, branch-ground double-bounce scattering, and surface scattering. All of these mechanisms are directly related to the dielectric constant of the forest components responsible for each mechanism, and hence to their moisture content.

  19. DS — Software for analyzing data collected using double sampling

    USGS Publications Warehouse

    Bart, Jonathan; Hartley, Dana

    2011-01-01

    DS analyzes count data to estimate density or relative density and population size when appropriate. The software is available at http://iwcbm.dev4.fsr.com/IWCBM/default.asp?PageID=126. The software was designed to analyze data collected using double sampling, but it also can be used to analyze index data. DS is not currently configured to apply distance methods or methods based on capture-recapture theory. Double sampling for the purpose of this report means surveying a sample of locations with a rapid method of unknown accuracy and surveying a subset of these locations using a more intensive method assumed to yield unbiased estimates. "Detection ratios" are calculated as the ratio of results from rapid surveys on intensive plots to the number actually present as determined from the intensive surveys. The detection ratios are used to adjust results from the rapid surveys. The formula for density is (results from rapid survey)/(estimated detection ratio from intensive surveys). Population sizes are estimated as (density)(area). Double sampling is well-established in the survey sampling literature—see Cochran (1977) for the basic theory, Smith (1995) for applications of double sampling in waterfowl surveys, Bart and Earnst (2002, 2005) for discussions of its use in wildlife studies, and Bart and others (in press) for a detailed account of how the method was used to survey shorebirds across the arctic region of North America. Indices are surveys that do not involve complete counts of well-defined plots or recording information to estimate detection rates (Thompson and others, 1998). In most cases, such data should not be used to estimate density or population size but, under some circumstances, may be used to compare two densities or estimate how density changes through time or across space (Williams and others, 2005). The Breeding Bird Survey (Sauer and others, 2008) provides a good example of an index survey. Surveyors record all birds detected but do not record any information, such as distance or whether each bird is recorded in subperiods, that could be used to estimate detection rates. Nonetheless, the data are widely used to estimate temporal trends and spatial patterns in abundance (Sauer and others, 2008). DS produces estimates of density (or relative density for indices) by species and stratum. Strata are usually defined using region and habitat but other variables may be used, and the entire study area may be classified as a single stratum. Population size in each stratum and for the entire study area also is estimated for each species. For indices, the estimated totals generally are only useful if (a) plots are surveyed so that densities can be calculated and extrapolated to the entire study area and (b) if the detection rates are close to 1.0. All estimates are accompanied by standard errors (SE) and coefficients of variation (CV, that is, SE/estimate).
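
    The core double-sampling estimator described here is simple enough to sketch directly (hypothetical counts and areas below; DS itself adds stratification and standard errors):

```python
import numpy as np

# Hypothetical survey counts (not from DS itself): rapid counts on all plots,
# plus intensive ("true") counts on a subset of the same plots.
rapid_all = np.array([12, 8, 15, 6, 10, 9, 14, 7])   # birds per plot, rapid method
rapid_sub = np.array([12, 8, 15, 6])                 # rapid counts on intensive plots
intensive = np.array([16, 10, 18, 9])                # intensive counts, same plots

detection_ratio = rapid_sub.sum() / intensive.sum()  # ratio estimator
plot_area = 0.25                                     # km^2 per plot (assumed)
study_area = 120.0                                   # km^2 (assumed)

# Density = (rapid-survey result) / (estimated detection ratio); population = density * area.
density = rapid_all.mean() / plot_area / detection_ratio
population = density * study_area
print(f"detection ratio {detection_ratio:.2f}, density {density:.1f}/km^2, "
      f"population {population:.0f}")
```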

  20. Patterns of Reinforcement and the Essential Values of Brands: I. Incorporation of Utilitarian and Informational Reinforcement into the Estimation of Demand

    ERIC Educational Resources Information Center

    Yan, Ji; Foxall, Gordon R.; Doyle, John R.

    2012-01-01

    Essential value is defined by Hursh and Silberberg (2008) as the value of reinforcers, presented in an exponential model (Equation 1). This study extends previous research concerned with animal behavior or human responding in therapeutic situations. We applied 9 available demand curves to consumer data that included 10,000+ data points collected…

  1. Techniques for the determination of mass properties of earth-to-orbit transportation systems

    NASA Technical Reports Server (NTRS)

    Macconochie, I. O.; Klich, P. J.

    1978-01-01

    One estimating technique involves trending, whereby projections of the overall mass properties of vehicles are determined with few inputs. The second technique involves trending of individual subsystems using equations of the form KX^n or KX. Some constants and exponents are provided for sample subsystems. Mass properties are reported in a format recommended by mil spec 38310.

  2. The footprint of urban heat island effect in China

    Treesearch

    Decheng Zhou; Shuqing Zhao; Liangxia Zhang; Ge Sun; Yongqiang Liu

    2015-01-01

    Urban heat island (UHI) is one major anthropogenic modification to the Earth system that transcends its physical boundary. Using MODIS data from 2003 to 2012, we showed that the UHI effect decayed exponentially toward rural areas for the majority of the 32 Chinese cities studied. We found an obvious urban/rural temperature "cliff", and estimated that the footprint of the UHI effect (...

  3. Modeling of microporous silicon betaelectric converter with 63Ni plating in GEANT4 toolkit*

    NASA Astrophysics Data System (ADS)

    Zelenkov, P. V.; Sidorov, V. G.; Lelekov, E. T.; Khoroshko, A. Y.; Bogdanov, S. V.; Lelekov, A. T.

    2016-04-01

    A model of the electron-hole pair generation rate distribution in a semiconductor is needed to optimize the parameters of a microporous silicon betaelectric converter, which uses 63Ni isotope radiation. Using Monte Carlo methods from the GEANT4 toolkit with ultra-low-energy electron physics models, this distribution in silicon was calculated and approximated with an exponential function. An optimal pore configuration was estimated.

  4. Double hard scattering without double counting

    NASA Astrophysics Data System (ADS)

    Diehl, Markus; Gaunt, Jonathan R.; Schönwald, Kay

    2017-06-01

    Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.

  5. Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters

    PubMed Central

    Landowne, David; Yuan, Bin; Magleby, Karl L.

    2013-01-01

    Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
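
    A minimal sketch of the core idea follows, assuming an EM-style update for the maximum-likelihood areas with the time constants held fixed (the published method additionally merges closely spaced components and refits after each removal):

```python
import numpy as np

def fit_areas(dwells, taus, n_iter=500, tol=1e-8):
    """EM-style maximum-likelihood estimate of the areas of a sum of
    exponentials with *fixed*, logarithmically spaced time constants."""
    taus = np.asarray(taus, dtype=float)
    areas = np.full(taus.size, 1.0 / taus.size)        # start with equal areas
    dens = np.exp(-dwells[:, None] / taus) / taus      # component densities
    for _ in range(n_iter):
        resp = dens * areas                            # responsibilities (E-step)
        resp /= resp.sum(axis=1, keepdims=True)
        new = resp.mean(axis=0)                        # updated areas (M-step)
        if np.abs(new - areas).max() < tol:
            break
        areas = new
    return areas

# Start with many logarithmically spaced time constants so none are missed,
# then drop components whose fitted area is negligible.
rng = np.random.default_rng(1)
dwells = np.concatenate([rng.exponential(1.0, 5000), rng.exponential(20.0, 5000)])
taus = np.logspace(-2, 3, 30)
areas = fit_areas(dwells, taus)
keep = areas > 1e-3
print(np.round(taus[keep], 2), np.round(areas[keep], 3))
```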

  6. Comparing Exponential and Exponentiated Models of Drug Demand in Cocaine Users

    PubMed Central

    Strickland, Justin C.; Lile, Joshua A.; Rush, Craig R.; Stoops, William W.

    2016-01-01

    Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model, but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use), whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values impact demand parameters and their association with drug-use outcomes when using the exponential model, but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency, in addition to demonstrating construct validity and generalizability. PMID:27929347
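
    For concreteness, here is a sketch of fitting one common exponentiated form of the demand model, Q = Q0 * 10^(k*(exp(-alpha*Q0*C) - 1)), to purchase-task data with zero-consumption values retained; the data, starting values, and fixed k below are hypothetical, not the study's:

```python
import numpy as np
from scipy.optimize import curve_fit

def exponentiated_demand(c, q0, alpha, k=2.0):
    """Exponentiated demand: consumption in raw units, so zeros can be kept."""
    return q0 * 10 ** (k * (np.exp(-alpha * q0 * c) - 1.0))

# Hypothetical purchase-task data: prices and reported consumption,
# including zero consumption at the highest price.
price = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
consumption = np.array([10.0, 9.5, 8.0, 6.0, 3.0, 1.0, 0.5, 0.0])

# len(p0) = 2, so curve_fit estimates q0 and alpha while k keeps its default.
popt, _ = curve_fit(exponentiated_demand, price, consumption, p0=[10.0, 0.01])
q0_hat, alpha_hat = popt
print(f"demand intensity Q0 ~ {q0_hat:.1f}, elasticity alpha ~ {alpha_hat:.4f}")
```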

  7. Comparing exponential and exponentiated models of drug demand in cocaine users.

    PubMed

    Strickland, Justin C; Lile, Joshua A; Rush, Craig R; Stoops, William W

    2016-12-01

    Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use) whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values affects demand parameters and their association with drug-use outcomes when using the exponential model but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency and demonstrating construct validity and generalizability. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. How rapidly does the excess risk of lung cancer decline following quitting smoking? A quantitative review using the negative exponential model.

    PubMed

    Fry, John S; Lee, Peter N; Forey, Barbara A; Coombs, Katharine J

    2013-10-01

    The excess lung cancer risk from smoking declines with time quit, but the shape of the decline has never been precisely modelled, or meta-analyzed. From a database of studies of at least 100 cases, we extracted 106 blocks of RRs (from 85 studies) comparing current smokers, former smokers (by time quit) and never smokers. Corresponding pseudo-numbers of cases and controls (or at-risk) formed the data for fitting the negative exponential model. We estimated the half-life (H, time in years when the excess risk becomes half that for a continuing smoker) for each block, investigated model fit, and studied heterogeneity in H. We also conducted sensitivity analyses allowing for reverse causation, either ignoring short-term quitters (S1) or considering them smokers (S2). Model fit was poor ignoring reverse causation, but much improved for both sensitivity analyses. Estimates of H were similar for all three analyses. For the best-fitting analysis (S1), H was 9.93 (95% CI 9.31-10.60), but varied by sex (females 7.92, males 10.71), and age (<50years 6.98, 70+years 12.99). Given that reverse causation is taken account of, the model adequately describes the decline in excess risk. However, estimates of H may be biased by factors including misclassification of smoking status. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
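
    In model form, the negative exponential says the excess relative risk halves every H years. The sketch below uses the paper's best-fitting pooled half-life (H = 9.93 years) but a hypothetical current-smoker RR:

```python
import numpy as np

def excess_rr(t_quit, rr_current, half_life=9.93):
    """Negative exponential decline of the excess lung cancer risk after
    quitting: the excess (RR - 1) halves every `half_life` years."""
    return 1.0 + (rr_current - 1.0) * np.exp(-np.log(2.0) * t_quit / half_life)

# Example with a hypothetical current-smoker RR of 15.
for t in (0, 5, 10, 20, 40):
    print(t, round(excess_rr(t, 15.0), 2))
```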

  9. Spatiotemporal Analysis of Coronal Loops Using Seismology of Damped Kink Oscillations and Forward Modeling of EUV Intensity Profiles

    NASA Astrophysics Data System (ADS)

    Pascoe, D. J.; Anfinogentov, S. A.; Goddard, C. R.; Nakariakov, V. M.

    2018-06-01

    The shape of the damping profile of kink oscillations in coronal loops has recently allowed the transverse density profile of the loop to be estimated. This requires accurate measurement of the damping profile that can distinguish the Gaussian and exponential damping regimes, otherwise there are more unknowns than observables. Forward modeling of the transverse intensity profile may also be used to estimate the width of the inhomogeneous layer of a loop, providing an independent estimate of one of these unknowns. We analyze an oscillating loop for which the seismological determination of the transverse structure is inconclusive except when supplemented by additional spatial information from the transverse intensity profile. Our temporal analysis describes the motion of a coronal loop as a kink oscillation damped by resonant absorption, and our spatial analysis is based on forward modeling the transverse EUV intensity profile of the loop under the isothermal and optically thin approximations. We use Bayesian analysis and Markov chain Monte Carlo sampling to apply our spatial and temporal models both individually and simultaneously to our data and compare the results with numerical simulations. Combining the two methods allows both the inhomogeneous layer width and density contrast to be calculated, which is not possible for the same data when each method is applied individually. We demonstrate that the assumption of an exponential damping profile leads to a significantly larger error in the inferred density contrast ratio compared with a Gaussian damping profile.

  10. Phosphorus-zinc interactive effects on growth by Selenastrum capricornutum (chlorophyta)

    USGS Publications Warehouse

    Kuwabara, J.S.

    1985-01-01

    Culturing experiments in chemically defined growth media were conducted to observe possible Zn and P interactions on Selenastrum capricornutum Printz growth indexes. Elevated Zn concentrations (7.5 × 10⁻⁸ and 1.5 × 10⁻⁷ M [Zn²⁺]) were highly detrimental to algal growth, affecting lag, exponential, and stationary growth phases. P behaved as a yield-limiting nutrient, with maximum cell densities increasing linearly with total P. This yield limitation was intensified at elevated Zn concentrations. Although calculated cellular phosphorus concentrations increased markedly with Zn ion activity, elevated Zn concentrations had no apparent effect on the rates of phosphorus uptake estimated for Selenastrum during exponential growth. Results indicated that P-Zn interactions were significant in describing Selenastrum cell yield results and are consistent with previous Zn studies on chlorophytes. These P-Zn interactions and the observed inhibitory growth effects of submicromolar Zn concentrations suggest that in nature an apparent P yield-limiting condition may result from elevated Zn concentrations.

  11. Elastic Cheerios effect: Self-assembly of cylinders on a soft solid

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Aditi; Ryan, Louis; Chaudhury, Manoj K.; Mahadevan, L.

    2015-12-01

    A rigid cylinder placed on a soft gel deforms its surface. When multiple cylinders are placed on the surface, they interact with each other via the topography of the deformed gel which serves as an energy landscape; as they move, the landscape changes which in turn changes their interaction. We use a combination of experiments, simple scaling estimates and numerical simulations to study the self-assembly of cylinders in this elastic analog of the "Cheerios Effect", which describes capillary interactions on a fluid interface. Our results show that the effective two-body interaction can be well described by an exponential attraction potential as a result of which the dynamics also show an exponential behavior with respect to the separation distance. When many cylinders are placed on the gel, the cylinders cluster together if they are not too far apart; otherwise their motion gets elastically arrested.

  12. An optimized Nash nonlinear grey Bernoulli model based on particle swarm optimization and its application in prediction for the incidence of Hepatitis B in Xinjiang, China.

    PubMed

    Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian

    2014-06-01

    In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.
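
    The two comparison criteria are standard; a minimal sketch (with hypothetical incidence values, not the Xinjiang data) of how competing forecasts would be scored:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def rmspe(actual, predicted):
    """Root mean square percentage error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.sqrt(np.mean(((actual - predicted) / actual) ** 2))

# Hypothetical incidence series and two competing forecasts.
actual  = [5.2, 5.8, 6.1, 6.9, 7.4]
model_a = [5.0, 5.9, 6.3, 6.8, 7.6]
model_b = [5.6, 5.4, 6.8, 6.5, 8.1]
print(f"model A: MAPE {mape(actual, model_a):.1f}%, RMSPE {rmspe(actual, model_a):.1f}%")
print(f"model B: MAPE {mape(actual, model_b):.1f}%, RMSPE {rmspe(actual, model_b):.1f}%")
```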

  13. On the conditions of exponential stability in active disturbance rejection control based on singular perturbation analysis

    NASA Astrophysics Data System (ADS)

    Shao, S.; Gao, Z.

    2017-10-01

    The stability of active disturbance rejection control (ADRC) is analysed in the presence of unknown, nonlinear, and time-varying dynamics. In the framework of singular perturbations, the closed-loop error dynamics are semi-decoupled into a relatively slow subsystem (the feedback loop) and a relatively fast subsystem (the extended state observer). It is shown, analytically and geometrically, that there exists a unique exponentially stable solution if the size of the initial observer error is sufficiently small, i.e. of the same order as the inverse of the observer bandwidth. The process of developing the uniformly asymptotic solution of the system reveals the condition for the stability of the ADRC and the relationship between the rate of change in the total disturbance and the size of the estimation error. The differentiability of the total disturbance is the only assumption made.

  14. Statistics of equivalent width data and new oscillator strengths for Si II, Fe II, and Mn II. [in interstellar medium

    NASA Technical Reports Server (NTRS)

    Van Buren, Dave

    1986-01-01

    Equivalent width data from Copernicus and IUE appear to have an exponential, rather than a Gaussian distribution of errors. This is probably because there is one dominant source of error: the assignment of the background continuum shape. The maximum likelihood method of parameter estimation is presented for the case of exponential statistics, in enough generality for application to many problems. The method is applied to global fitting of Si II, Fe II, and Mn II oscillator strengths and interstellar gas parameters along many lines of sight. The new values agree in general with previous determinations but are usually much more tightly constrained. Finally, it is shown that care must be taken in deriving acceptable regions of parameter space because the probability contours are not generally ellipses whose axes are parallel to the coordinate axes.
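
    The key statistical point lends itself to a short sketch: assuming the two-sided (Laplace, i.e. double-exponential) form of the error distribution, maximizing the likelihood is equivalent to minimizing the sum of absolute residuals rather than squared residuals. The data below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Maximum-likelihood line fit under Laplace (double-exponential) errors:
# the log-likelihood is maximized by minimizing sum(|residuals|),
# not sum(residuals**2) as in the Gaussian case.
rng = np.random.default_rng(2)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.laplace(scale=0.5, size=x.size)  # hypothetical data

def neg_loglike(params):
    a, b = params
    return np.sum(np.abs(y - (a * x + b)))

fit = minimize(neg_loglike, x0=[1.0, 0.0], method="Nelder-Mead")
print("slope, intercept:", fit.x)
```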

  15. Auxiliary Parameter MCMC for Exponential Random Graph Models

    NASA Astrophysics Data System (ADS)

    Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro

    2016-11-01

    Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the development of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.

  16. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

    The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the rate of occurrence of the next error is proposed. The method was validated empirically by comparing forecasts with actual data: in all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates what is observed to a reasonable degree.

  17. Malthusian Parameters as Estimators of the Fitness of Microbes: A Cautionary Tale about the Low Side of High Throughput.

    PubMed

    Concepción-Acevedo, Jeniffer; Weiss, Howard N; Chaudhry, Waqas Nasir; Levin, Bruce R

    2015-01-01

    The maximum exponential growth rate, the Malthusian parameter (MP), is commonly used as a measure of fitness in experimental studies of adaptive evolution and of the effects of antibiotic resistance and other genes on the fitness of planktonic microbes. Thanks to automated, multi-well optical density plate readers and computers, with little hands-on effort investigators can readily obtain hundreds of estimates of MPs in less than a day. Here we compare estimates of the relative fitness of antibiotic susceptible and resistant strains of E. coli, Pseudomonas aeruginosa and Staphylococcus aureus based on MP data obtained with automated multi-well plate readers with the results from pairwise competition experiments. This leads us to question the reliability of estimates of MP obtained with these high throughput devices and the utility of these estimates of the maximum growth rates to detect fitness differences.
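
    One common way such plate-reader pipelines estimate the MP (an assumption here, not necessarily the authors' exact procedure) is a sliding-window log-linear fit of the optical density curve; a sketch with a hypothetical growth curve:

```python
import numpy as np

def malthusian_parameter(t, od, window=5):
    """Estimate the maximum exponential growth rate from an optical density
    time series: fit ln(OD) over a sliding window and take the largest slope."""
    log_od = np.log(od)
    slopes = [np.polyfit(t[i:i + window], log_od[i:i + window], 1)[0]
              for i in range(len(t) - window + 1)]
    return max(slopes)

# Hypothetical growth curve: lag phase, roughly exponential rise, saturation.
t = np.arange(0, 12, 0.5)                       # hours
od = 0.05 + 0.5 / (1 + np.exp(-(t - 6) * 0.7))  # optical density
print(f"MP ~ {malthusian_parameter(t, od):.2f} per hour")
```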

  18. pathChirp: Efficient Available Bandwidth Estimation for Network Paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cottrell, Les

    2003-04-30

    This paper presents pathChirp, a new active probing tool for estimating the available bandwidth on a communication network path. Based on the concept of ''self-induced congestion,'' pathChirp features an exponential flight pattern of probes we call a chirp. Packet chirps offer several significant advantages over current probing schemes based on packet pairs or packet trains. By rapidly increasing the probing rate within each chirp, pathChirp obtains a rich set of information from which to dynamically estimate the available bandwidth. Since it uses only packet interarrival times for estimation, pathChirp requires neither synchronized nor highly stable clocks at the sender and receiver. We test pathChirp with simulations and Internet experiments and find that it provides good estimates of the available bandwidth while using only a fraction of the number of probe bytes that current state-of-the-art techniques use.
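
    The chirp itself is an exponentially accelerating probe train: inter-packet gaps shrink geometrically, so the instantaneous probing rate grows exponentially within each chirp. A sketch of generating such send times (parameter names and values hypothetical, not pathChirp's defaults):

```python
import numpy as np

def chirp_send_times(n_packets=15, first_gap=0.1, gamma=1.2):
    """Send times for one chirp: each gap is the previous gap divided by
    gamma, so the probing rate increases exponentially within the chirp."""
    gaps = first_gap / gamma ** np.arange(n_packets - 1)  # seconds between packets
    return np.concatenate([[0.0], np.cumsum(gaps)])

times = chirp_send_times()
rates = 1.0 / np.diff(times)   # instantaneous probing rate (packets/s)
print(np.round(rates, 1))
```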

  19. The Impact of Accelerating Faster than Exponential Population Growth on Genetic Variation

    PubMed Central

    Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian

    2014-01-01

    Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models’ effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times. PMID:24381333

  20. The impact of accelerating faster than exponential population growth on genetic variation.

    PubMed

    Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian

    2014-03-01

    Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models' effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times.
