[Application of exponential smoothing method in prediction and warning of epidemic mumps].
Shi, Yun-ping; Ma, Jia-qi
2010-06-01
To analyze the daily data on epidemic mumps in a province from 2004 to 2008 and set up an exponential smoothing model for prediction. To predict and warn against epidemic mumps in 2008, a 7-day moving summation was calculated to remove the effect of weekends from the daily reported mumps cases during 2005-2008, and exponential smoothing was applied to the data from 2005 to 2007. The performance of Holt-Winters exponential smoothing was good: warning sensitivity was 76.92%, specificity was 83.33%, and the timely rate was 80%. It is practicable to use the exponential smoothing method to warn against epidemic mumps.
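A minimal sketch of the warning pipeline this abstract describes: a 7-day moving summation to remove the weekend effect, then Holt-Winters smoothing fitted on 2005-2007 and compared against 2008. The series name, the 1.2 warning threshold, and the model settings are our illustrative assumptions, not values from the paper.

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def mumps_warning(daily_cases: pd.Series, threshold: float = 1.2) -> pd.Series:
    """Flag 2008 periods whose 7-day case sums exceed a Holt-Winters forecast."""
    weekly = daily_cases.rolling(7).sum().dropna()   # 7-day moving summation
    train = weekly.loc["2005":"2007"]                # fitting years
    test = weekly.loc["2008":]                       # warning year
    fit = ExponentialSmoothing(train, trend="add").fit()
    forecast = fit.forecast(len(test))
    # warn when observed counts exceed the forecast by the chosen margin
    return test[test.values > threshold * forecast.values]
```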
1996-09-16
The approaches are: • Adaptive filtering • Single exponential smoothing (Brown, 1963) • Linear exponential smoothing: Holt's two-parameter approach (Holt et al., 1960) • Winters' three-parameter method (Winters, 1960) • The Box-Jenkins methodology (ARIMA modeling) (Box and Jenkins, 1976). However, there are two crucial disadvantages: the most important point in ARIMA modeling is model identification. As shown in…
Using Exponential Smoothing to Specify Intervention Models for Interrupted Time Series.
ERIC Educational Resources Information Center
Mandell, Marvin B.; Bretschneider, Stuart I.
1984-01-01
The authors demonstrate how exponential smoothing can play a role in the identification of the intervention component of an interrupted time-series design model that is analogous to the role that the sample autocorrelation and partial autocorrelation functions serve in the identification of the noise portion of such a model. (Author/BW)
Performance of time-series methods in forecasting the demand for red blood cell transfusion.
Pereira, Arturo
2004-05-01
Planning future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care, university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the younger to test the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)₁₂ model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the ±10 percent interval of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rates for the three methods were 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods. Predictions by exponential smoothing lay within the ±10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in the planning of blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.
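For illustration, one plausible reading of the three evaluation criteria the study reports, computed from paired actual/forecast arrays (the paper's exact definitions may differ):

```python
import numpy as np

def forecast_scores(actual: np.ndarray, forecast: np.ndarray):
    """±10% accuracy, coverage rate, and outdate rate (our reading of the abstract)."""
    within_10pct = np.mean(np.abs(forecast - actual) <= 0.10 * actual)
    coverage = np.mean(forecast >= actual)  # months where forecast-based supply meets demand
    outdate = np.sum(np.maximum(forecast - actual, 0)) / np.sum(forecast)
    return within_10pct, coverage, outdate
```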
Rigby, Robert A; Stasinopoulos, D Mikis
2004-10-15
The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^ν having a shifted and scaled (truncated) standard power exponential distribution with parameter τ. The distribution has four parameters and is denoted BCPE(μ, σ, ν, τ). The parameters μ, σ, ν and τ may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood with respect to μ, σ, ν and τ, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation generalizes the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.
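For readers unfamiliar with the construction, a sketch of the defining transformation in standard BCPE notation (the textbook form; the normalizing details of the power exponential density are suppressed and c(τ) is a scale constant):

```latex
Z = \begin{cases}
  \dfrac{1}{\sigma\nu}\left[\left(\dfrac{Y}{\mu}\right)^{\nu}-1\right], & \nu \neq 0,\\[1ex]
  \dfrac{1}{\sigma}\log\!\left(\dfrac{Y}{\mu}\right), & \nu = 0,
\end{cases}
\qquad
f_Z(z) \propto \exp\!\left(-\left|\frac{z}{c(\tau)}\right|^{\tau}\right),
```

where Z follows a (truncated) standard power exponential distribution with kurtosis parameter τ.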
NASA Astrophysics Data System (ADS)
Jamaluddin, Fadhilah; Rahim, Rahela Abdul
2015-12-01
The Markov chain has been in use since 1913 for studying the flow of data over consecutive years and for forecasting. The important feature of a Markov chain is obtaining an accurate Transition Probability Matrix (TPM). However, obtaining a suitable TPM is hard, especially in long-term modeling, due to the unavailability of data. This paper aims to enhance the classical Markov chain by introducing the exponential smoothing technique in developing the appropriate TPM.
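A minimal sketch of one way such an enhancement could look: exponentially smoothing a sequence of yearly TPM estimates and re-normalizing the rows so the result remains a valid stochastic matrix. The function name and α are hypothetical; this is our reading of the idea, not the paper's exact algorithm.

```python
import numpy as np

def smoothed_tpm(yearly_tpms: list, alpha: float = 0.3) -> np.ndarray:
    """Exponentially smooth yearly transition probability matrices."""
    smoothed = np.asarray(yearly_tpms[0], dtype=float)
    for tpm in yearly_tpms[1:]:
        smoothed = alpha * np.asarray(tpm, dtype=float) + (1 - alpha) * smoothed
    return smoothed / smoothed.sum(axis=1, keepdims=True)  # rows sum to 1
```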
How bootstrap can help in forecasting time series with more than one seasonal pattern
NASA Astrophysics Data System (ADS)
Cordeiro, Clara; Neves, M. Manuela
2012-09-01
The search for the future is an appealing challenge in time series analysis. The diversity of forecasting methodologies is inevitable and still expanding. Exponential smoothing methods are the launch platform for modelling and forecasting in time series analysis. Recently this methodology has been combined with bootstrapping, revealing good performance. The algorithm Boot.EXPOS, which combines exponential smoothing and bootstrap methodologies, has shown promising results for forecasting time series with one seasonal pattern. For series with more than one seasonal pattern, the double seasonal Holt-Winters methods and the corresponding exponential smoothing methods were developed. A new challenge was to combine these seasonal methods with the bootstrap and carry over a resampling scheme similar to the one used in the Boot.EXPOS procedure. The performance of such a partnership is illustrated for some well-known data sets available in software.
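A simplified Python analogue of an exponential-smoothing-plus-bootstrap forecast in the spirit of Boot.EXPOS (the actual procedure works within R's ets/forecast framework; the seasonal settings here are illustrative):

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def bootstrap_forecast(y, horizon, n_boot=200, seasonal_periods=12, seed=0):
    """Fit exponential smoothing, resample residuals, refit, average the forecasts."""
    rng = np.random.default_rng(seed)
    fit = ExponentialSmoothing(y, trend="add", seasonal="add",
                               seasonal_periods=seasonal_periods).fit()
    resid = np.asarray(y) - fit.fittedvalues
    paths = []
    for _ in range(n_boot):
        y_star = fit.fittedvalues + rng.choice(resid, size=len(y), replace=True)
        refit = ExponentialSmoothing(y_star, trend="add", seasonal="add",
                                     seasonal_periods=seasonal_periods).fit()
        paths.append(refit.forecast(horizon))
    return np.mean(paths, axis=0)  # bootstrap-averaged point forecast
```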
Demand Forecasting: DLA’S Aviation Supply Chain High Value Products
2015-04-09
[Front-matter and list-of-figures excerpts: LCDR Carlos Lopez, MBA in Supply Chain Management, Naval Postgraduate School; Figure 80, NIIN 01-463-4340 Seasonal Exponential Smoothing Forecast; NIIN 01-507-5310 Seasonal Exponential Smoothing; Figure 102, NIIN 01-507-5310 12-Month Forecast Simulation.]
Smoothing Forecasting Methods for Academic Library Circulations: An Evaluation and Recommendation.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Forys, John W., Jr.
1986-01-01
Circulation time-series data from 50 Midwest academic libraries were used to test 110 variants of 8 smoothing forecasting methods. Data, methodologies, and illustrations of two recommended methods--the single exponential smoothing method and Brown's one-parameter linear exponential smoothing method--are given. Eight references are cited. (EJS)
ARIMA model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing Method in making predictions. The comparison is focused on the ability of both methods to make forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three time series are used in the comparison process: the Price of Crude Palm Oil (RM/tonne), the Exchange Rate of Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the Price of SMR 20 Rubber Type (cents/kg). Then, the forecasting accuracy of each model is measured by examining the prediction errors produced, using Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the time series for Exchange Rates. On the contrary, the Exponential Smoothing Method can produce a better forecast for the Exchange Rates, whose time series has a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
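The three error measures used in this comparison are standard; a compact NumPy sketch (naming ours):

```python
import numpy as np

def mse(actual, forecast):   # Mean Squared Error
    return np.mean((actual - forecast) ** 2)

def mape(actual, forecast):  # Mean Absolute Percentage Error
    return np.mean(np.abs((actual - forecast) / actual)) * 100

def mad(actual, forecast):   # Mean Absolute Deviation
    return np.mean(np.abs(actual - forecast))
```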
Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information
NASA Astrophysics Data System (ADS)
Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.
2018-04-01
The aims of this research are to model hotspots and forecast hotspots for 2017 in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method and the Box-Jenkins method. Among the smoothing techniques, additive decomposition is better than Holt's exponential smoothing. The hotspot models obtained using the Box-Jenkins method were ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods used in this research on the basis of Root Mean Squared Error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the smallest RMSE. Thus the Loess decomposition model was used to forecast the number of hotspots. The forecasting results indicate that hotspots tend to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remain stationary in East Kutai.
Determining the Optimal Values of Exponential Smoothing Constants--Does Solver Really Work?
ERIC Educational Resources Information Center
Ravinder, Handanhal V.
2013-01-01
A key issue in exponential smoothing is the choice of the values of the smoothing constants used. One approach that is becoming increasingly popular in introductory management science and operations management textbooks is the use of Solver, an Excel-based non-linear optimizer, to identify values of the smoothing constants that minimize a measure…
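The abstract breaks off, but the Solver exercise it describes is easy to reproduce outside Excel with a bounded scalar minimizer, assuming the measure being minimized is the SSE of one-step-ahead forecasts (a common textbook choice):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ses_forecasts(y, alpha):
    """One-step-ahead forecasts from simple exponential smoothing."""
    f = np.empty(len(y))
    f[0] = y[0]
    for t in range(1, len(y)):
        f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
    return f

def best_alpha(y):
    """Pick the smoothing constant in [0, 1] minimizing SSE, as Solver would."""
    sse = lambda a: np.sum((y - ses_forecasts(y, a)) ** 2)
    return minimize_scalar(sse, bounds=(0.0, 1.0), method="bounded").x
```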
NASA Astrophysics Data System (ADS)
Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut
2018-03-01
Money currency availability in Bank Indonesia can be examined through the inflow and outflow of money currency. The objective of this research is to forecast the inflow and outflow of money currency in each Representative Office (RO) of BI in East Java by using a hybrid exponential smoothing model based on the state space approach and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research. The first concerns the hybrid model applied to simulation data containing trend, seasonal and calendar variation patterns. The second concerns the application of the hybrid model to forecasting the inflow and outflow of money currency in each RO of BI in East Java. The first study's results indicate that the exponential smoothing model cannot capture the calendar variation pattern, yielding RMSE values of 10 times the standard deviation of the errors. The second study's results indicate that the hybrid model can capture the trend, seasonal and calendar variation patterns, yielding RMSE values approaching the standard deviation of the errors. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of money currency in Surabaya, Malang and Jember, and the outflow of money currency in Surabaya and Kediri. Otherwise, the time series regression model performs better for three variables: the outflow of money currency in Malang and Jember, and the inflow of money currency in Kediri.
Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian
2014-06-01
In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2018-02-01
We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
Nonholonomic stability aspects of piecewise holonomic systems
NASA Astrophysics Data System (ADS)
Ruina, Andy
1998-10-01
We consider mechanical systems with intermittent contact that are smooth and holonomic except at the instants of transition. Overall such systems can be nonholonomic in that the accessible configuration space can have larger dimension than the instantaneous motions allowed by the constraints. The known examples of such mechanical systems are also dissipative. By virtue of their nonholonomy and of their dissipation such systems are not Hamiltonian. Thus there is no reason to expect them to adhere to the Hamiltonian property that exponential stability of steady motions is impossible. Since nonholonomy and energy dissipation are simultaneously present in these systems, it is usually not clear whether their sometimes-observed exponential stability should be attributed solely to dissipation, to nonholonomy, or to both. However, it is shown here on the basis of one simple example, that the observed exponential stability of such systems can follow solely from the nonholonomic nature of intermittent contact and not from dissipation. In particular, it is shown that a discrete sister model of the Chaplygin sleigh, a rigid body on the plane constrained by one skate, inherits the stability eigenvalues of the smooth system even as the dissipation tends to zero. Thus it seems that discrete nonholonomy can contribute to exponential stability of mechanical systems.
NASA Astrophysics Data System (ADS)
Ismail, A.; Hassan, Noor I.
2013-09-01
Cancer is one of the principal causes of death in Malaysia. This study was performed to determine the pattern of the rate of cancer deaths at a public hospital in Malaysia over an 11-year period from 2001 to 2011, to determine the best-fitted model for forecasting the rate of cancer deaths using univariate modeling, and to forecast the rates for the next two years (2012 to 2013). The medical records of patients with cancer who died at this hospital over the 11-year period were reviewed, with a total of 663 cases. The cancers were classified according to the 10th Revision International Classification of Diseases (ICD-10). Data collected include the socio-demographic background of patients, such as registration number, age, gender, ethnicity, ward and diagnosis. Data entry and analysis were accomplished using SPSS 19.0 and Minitab 16.0. The five univariate models used were the Naïve with Trend Model, Average Percent Change Model (ACPM), Single Exponential Smoothing, Double Exponential Smoothing and Holt's Method. Over the 11 years, Malay patients had the highest percentage of cancer deaths at this hospital (88.10%) compared to other ethnic groups, with males (51.30%) higher than females. Lung and breast cancer accounted for the most cancer deaths in each gender. About 29.60% of the patients who died of cancer were aged 61 years and above. The best univariate model for forecasting the rate of cancer deaths is the Single Exponential Smoothing technique with an alpha of 0.10. The forecast of the rate of cancer deaths is horizontal, or flat: the forecasted mortality trend remains at 6.84% from January 2012 to December 2013. All government and private sectors and non-governmental organizations need to highlight issues concerning cancer, especially lung and breast cancers, to the public through campaigns using mass media, electronic media, posters and pamphlets in an attempt to decrease the rate of cancer deaths in Malaysia.
NASA Technical Reports Server (NTRS)
1971-01-01
A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.
Pointwise convergence of derivatives of Lagrange interpolation polynomials for exponential weights
NASA Astrophysics Data System (ADS)
Damelin, S. B.; Jung, H. S.
2005-01-01
For a general class of exponential weights on the line and on (-1,1), we study pointwise convergence of the derivatives of Lagrange interpolation. Our weights include even weights of smooth polynomial decay near ±∞ (Freud weights), even weights of faster than smooth polynomial decay near ±∞ (Erdos weights) and even weights which vanish strongly near ±1, for example Pollaczek type weights.
Forecasting Performance of Grey Prediction for Education Expenditure and School Enrollment
ERIC Educational Resources Information Center
Tang, Hui-Wen Vivian; Yin, Mu-Shang
2012-01-01
GM(1,1) and GM(1,1) rolling models derived from grey system theory were estimated using time-series data from projection studies by National Center for Education Statistics (NCES). An out-of-sample forecasting competition between the two grey prediction models and exponential smoothing used by NCES was conducted for education expenditure and…
Large and small-scale structures and the dust energy balance problem in spiral galaxies
NASA Astrophysics Data System (ADS)
Saftly, W.; Baes, M.; De Geyter, G.; Camps, P.; Renaud, F.; Guedes, J.; De Looze, I.
2015-04-01
The interstellar dust content in galaxies can be traced in extinction at optical wavelengths, or in emission in the far-infrared. Several studies have found that radiative transfer models that successfully explain the optical extinction in edge-on spiral galaxies generally underestimate the observed FIR/submm fluxes by a factor of about three. In order to investigate this so-called dust energy balance problem, we use two Milky Way-like galaxies produced by high-resolution hydrodynamical simulations. We create mock optical edge-on views of these simulated galaxies (using the radiative transfer code SKIRT), and we then fit the parameters of a basic spiral galaxy model to these images (using the fitting code FitSKIRT). The basic model includes smooth axisymmetric distributions: a Sérsic bulge and an exponential disc for the stars, and a second exponential disc for the dust. We find that the dust mass recovered by the fitted models is about three times smaller than the known dust mass of the hydrodynamical input models. This factor is in agreement with previous energy balance studies of real edge-on spiral galaxies. On the other hand, fitting the same basic model to less complex input models (e.g. a smooth exponential disc with a spiral perturbation or with random clumps) does recover the dust mass of the input model almost perfectly. Thus it seems that the complex asymmetries and the inhomogeneous structure of real and hydrodynamically simulated galaxies are a lot more efficient at hiding dust than the rather contrived geometries in typical quasi-analytical models. This effect may help explain the discrepancy between the dust emission predicted by radiative transfer models and the observed emission in energy balance studies for edge-on spiral galaxies.
Nuclear counting filter based on a centered Skellam test and a double exponential smoothing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coulon, Romain; Kondrasovs, Vladimir; Dumazert, Jonathan
2015-07-01
Online nuclear counting represents a challenge due to the stochastic nature of radioactivity. The count data have to be filtered in order to provide a precise and accurate estimation of the count rate, with a response time compatible with the application in view. An innovative filter addressing this issue is presented in this paper. It is a nonlinear filter based on a Centered Skellam Test (CST), giving a local maximum likelihood estimation of the signal based on a Poisson distribution assumption. This nonlinear approach smooths the counting signal while maintaining a fast response when abrupt changes in activity occur. The filter has been improved by the implementation of Brown's double Exponential Smoothing (BES). The filter has been validated and compared to other state-of-the-art smoothing filters. The CST-BES filter shows a significant improvement over all tested smoothing filters. (authors)
Systematic strategies for the third industrial accident prevention plan in Korea.
Kang, Young-sig; Yang, Sung-hwan; Kim, Tae-gu; Kim, Day-sung
2012-01-01
To minimize industrial accidents, it's critical to evaluate a firm's priorities for prevention factors and strategies since such evaluation provides decisive information for preventing industrial accidents and maintaining safety management. Therefore, this paper proposes the evaluation of priorities through statistical testing of prevention factors with a cause analysis in a cause and effect model. A priority matrix criterion is proposed to apply the ranking and for the objectivity of questionnaire results. This paper used regression method (RA), exponential smoothing method (ESM), double exponential smoothing method (DESM), autoregressive integrated moving average (ARIMA) model and proposed analytical function method (PAFM) to analyze trends of accident data that will lead to an accurate prediction. This paper standardized the questionnaire results of workers and managers in manufacturing and construction companies with less than 300 employees, located in the central Korean metropolitan areas where fatal accidents have occurred. Finally, a strategy was provided to construct safety management for the third industrial accident prevention plan and a forecasting method for occupational accident rates and fatality rates for occupational accidents per 10,000 people.
NASA Astrophysics Data System (ADS)
M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.
2014-06-01
The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. Generally, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) was found to be effective for moisture-time curves. The idea of this method consists of an approximation of the data by a CS regression having first and second derivatives. The analytical differentiation of the spline regression permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem; it permits the instantaneous rate to be obtained directly from the experimental data. The drying kinetics were fitted with six published exponential thin-layer drying models. The models were fitted using the coefficient of determination (R2) and root mean square error (RMSE). The results showed that the Two Term model best describes the drying behavior. Besides that, the drying rate smoothed using the CS proves to be an effective method for moisture-time curves, giving good estimators as well as the missing moisture content data of the seaweed Kappaphycus striatum variety Durian in the solar dryer under the conditions tested.
Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won
2011-01-01
To begin a zero accident campaign for industry, the first thing is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical change of the business environment after beginning the zero accident campaign through quantitative time series analysis methods. These methods include sum of squared errors (SSE), regression analysis method (RAM), exponential smoothing method (ESM), double exponential smoothing method (DESM), auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). The program is developed to estimate the accident rate, zero accident time and achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop a zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
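Of the methods listed, the double exponential smoothing method (DESM) has a particularly compact statement; a textbook sketch of Brown's variant (the parameter value is illustrative, not taken from the paper):

```python
def browns_des(y, alpha=0.2, horizon=1):
    """Brown's double exponential smoothing: h-step-ahead trend forecast."""
    s1 = s2 = float(y[0])
    for x in y[1:]:
        s1 = alpha * x + (1 - alpha) * s1    # first smoothing pass
        s2 = alpha * s1 + (1 - alpha) * s2   # second smoothing pass
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + horizon * trend
```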
The Use of Shrinkage Techniques in the Estimation of Attrition Rates for Large Scale Manpower Models
1988-07-27
…an autoregressive model combined with a linear program that solves for the coefficients using MAD. But this success has diminished with time (Rowe…). [Reference-list excerpts: 'Harrison-Stevens Forecasting and the Multiprocess Dynamic Linear Model', The American Statistician, v. 40, pp. 129-135, 1986; Box, G. E. P. and …, 1950; McCullagh, P. and Nelder, J., Generalized Linear Models, Chapman and Hall, 1983; McKenzie, E., 'General Exponential Smoothing and the …']
Exponential smoothing weighted correlations
NASA Astrophysics Data System (ADS)
Pozzi, F.; Di Matteo, T.; Aste, T.
2012-06-01
In many practical applications, correlation matrices might be affected by the "curse of dimensionality" and by excessive sensitivity to outliers and remote observations. These shortcomings can cause problems of statistical robustness, especially accentuated when a system of dynamic correlations over a running window is concerned. These drawbacks can be partially mitigated by assigning a structure of weights to observational events. In this paper, we discuss Pearson's ρ and Kendall's τ correlation matrices, weighted with an exponential smoothing, computed on moving windows using a data-set of daily returns for 300 NYSE highly capitalized companies in the period between 2001 and 2003. Criteria for jointly determining optimal weights together with the optimal length of the running window are proposed. We find that the exponential smoothing can provide more robust and reliable dynamic measures, and we discuss how a careful choice of the parameters can reduce the autocorrelation of dynamic correlations whilst keeping significance and robustness of the measure. Weighted correlations are found to be smoother and to recover faster from market turbulence than their unweighted counterparts, helping also to discriminate more effectively genuine from spurious correlations.
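A sketch of an exponentially weighted Pearson correlation of the kind studied here, with weights w_t ∝ exp((t − T)/θ) over a window of length T (the notation and normalization are ours; the paper's weighting scheme is analogous):

```python
import numpy as np

def ew_pearson(x, y, theta):
    """Pearson correlation with exponentially decaying weights on old observations."""
    T = len(x)
    w = np.exp((np.arange(T) - T) / theta)
    w /= w.sum()                              # normalize weights
    mx, my = w @ x, w @ y                     # weighted means
    cov = w @ ((x - mx) * (y - my))           # weighted covariance
    return cov / np.sqrt((w @ (x - mx) ** 2) * (w @ (y - my) ** 2))
```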
Galileon bounce after ekpyrotic contraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osipov, M.; Rubakov, V., E-mail: osipov@ms2.inr.ac.ru, E-mail: rubakov@ms2.inr.ac.ru
We consider a simple cosmological model that includes a long ekpyrotic contraction stage and smooth bounce after it. Ekpyrotic behavior is due to a scalar field with a negative exponential potential, whereas the Galileon field produces bounce. We give an analytical picture of how the bounce occurs within the weak gravity regime, and then perform numerical analysis to extend our results to a non-perturbative regime.
Evidence of the Exponential Decay Emission in the Swift Gamma-ray Bursts
NASA Technical Reports Server (NTRS)
Sakamoto, T.; Sato, G.; Hill, J.E.; Krimm, H.A.; Yamazaki, R.; Takami, K.; Swindell, S.; Osborne, J.P.
2007-01-01
We present a systematic study of the steep decay emission of gamma-ray bursts (GRBs) observed by the Swift X-Ray Telescope (XRT). In contrast to the analysis in recent literature, instead of extrapolating the data of Burst Alert Telescope (BAT) down into the XRT energy range, we extrapolated the XRT data up to the BAT energy range, 15-25 keV, to produce the BAT and XRT composite light curve. Based on our composite light curve fitting, we have confirmed the existence of an exponential decay component which smoothly connects the BAT prompt data to the XRT steep decay for several GRBs. We also find that the XRT steep decay for some of the bursts can be well fitted by a combination of a power-law with an exponential decay model. We discuss that this exponential component may be the emission from an external shock and a sign of the deceleration of the outflow during the prompt phase.
Smoothing Polymer Surfaces by Solvent-Vapor Exposure
NASA Astrophysics Data System (ADS)
Anthamatten, Mitchell
2003-03-01
Ultra-smooth polymer surfaces are of great importance in a large body of technical applications such as optical coatings, supermirrors, waveguides, paints, and fusion targets. We are investigating a simple approach to controlling surface roughness: by temporarily swelling the polymer with solvent molecules. As the solvent penetrates into the polymer, its viscosity is lowered, and surface tension forces drive surface flattening. To investigate sorption kinetics and surface-smoothing phenomena, a series of vapor-deposited poly(amic acid) films were exposed to dimethyl sulfoxide vapors. During solvent exposure, the surface topology was continuously monitored using light interference microscopy. The resulting power spectra indicate that high-frequency defects smooth faster than low-frequency defects. This frequency dependence was studied by depositing polymer films onto a series of 2D sinusoidal surfaces and performing smoothing experiments. Results show that the amplitudes of the sinusoidal surfaces decay exponentially with solvent exposure time, and the exponential decay constants are proportional to surface frequency. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
Exponentially accurate approximations to piece-wise smooth periodic functions
NASA Technical Reports Server (NTRS)
Greer, James; Banerjee, Saheb
1995-01-01
A family of simple, periodic basis functions with 'built-in' discontinuities are introduced, and their properties are analyzed and discussed. Some of their potential usefulness is illustrated in conjunction with the Fourier series representations of functions with discontinuities. In particular, it is demonstrated how they can be used to construct a sequence of approximations which converges exponentially in the maximum norm to a piece-wise smooth function. The theory is illustrated with several examples and the results are discussed in the context of other sequences of functions which can be used to approximate discontinuous functions.
Demand forecasting of electricity in Indonesia with limited historical data
NASA Astrophysics Data System (ADS)
Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif
2018-03-01
Demand forecasting of electricity is an important activity for electricity agents, giving a description of electricity demand in the future. Prediction of electricity demand can be done using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The result shows that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
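The grey model GM(1,1) that wins this comparison has a short closed-form recipe; a standard sketch (ours, not the paper's code): accumulate the series, fit the whitening equation dx⁽¹⁾/dt + a·x⁽¹⁾ = b by least squares on background values, and difference the fitted accumulation back.

```python
import numpy as np

def gm11_forecast(x0, horizon=1):
    """Grey model GM(1,1) forecasts for the original series x0."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])                # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat)[-horizon:]           # inverse accumulation -> forecasts
```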
The Feasibility of Using Computer and Internet in Teaching Family Education for the 8th Grade Class
ERIC Educational Resources Information Center
Alluhaydan, Nuwayyir Saleh F.
2016-01-01
This paper is just a sample template for the prospective authors of IISTE. Over the decades, the concepts of holons and holonic systems have been adopted in many research fields, but they are scarcely attempted on labour planning. A literature gap exists, thus motivating the author to come up with a holonic model that uses exponential smoothing to…
An impact analysis of forecasting methods and forecasting parameters on bullwhip effect
NASA Astrophysics Data System (ADS)
Silitonga, R. Y. H.; Jelly, N.
2018-04-01
The bullwhip effect is an increase in the variance of demand fluctuation from downstream to upstream of a supply chain. Forecasting methods and forecasting parameters are recognized as factors that affect the bullwhip phenomenon. To study these factors, we can develop simulations. There are several ways to simulate the bullwhip effect in previous studies, such as mathematical equation modelling, information control modelling, computer programs, and many more. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show the change in the bullwhip effect ratio caused by differences in forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving average period, smoothing parameter, signalling factor, and safety stock factor. The simulations showed that decreasing the moving average period, increasing the smoothing parameter, and increasing the signalling factor can create a bigger bullwhip effect ratio. Meanwhile, the safety stock factor had no impact on the bullwhip effect.
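A stylized single-echelon version of the kind of scenario such a simulator explores: a retailer forecasting with a moving average under an order-up-to policy, scored by the variance ratio Var(orders)/Var(demand). All modeling choices here are ours, not Bullwhip Explorer's.

```python
import numpy as np

def bullwhip_ratio(demand, window=4):
    """Bullwhip ratio for a moving-average, order-up-to retailer."""
    demand = np.asarray(demand, dtype=float)
    forecasts = np.convolve(demand, np.ones(window) / window, mode="valid")
    # orders = demand served plus the change in the forecast-driven target level
    orders = demand[window:] + np.diff(forecasts)
    return np.var(orders) / np.var(demand[window:])
```

Decreasing `window` (the moving average period) inflates the ratio, matching the direction of the effect reported above.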
Bildirici, Melike; Ersin, Özgür Ömer
2018-01-01
The study aims to combine the autoregressive distributed lag (ARDL) cointegration framework with smooth transition autoregressive (STAR)-type nonlinear econometric models for causal inference. Further, the proposed STAR distributed lag (STARDL) models offer new insights in terms of modeling nonlinearity in the long- and short-run relations between analyzed variables. The STARDL method allows modeling and testing nonlinearity in the short-run parameters, the long-run parameters, or both. To this aim, the relation between CO2 emissions and economic growth rates in the USA is investigated for the 1800-2014 period, which is one of the largest data sets available. The proposed hybrid models, the logistic, exponential, and second-order logistic smooth transition autoregressive distributed lag (LSTARDL, ESTARDL, and LSTAR2DL) models, combine the STAR framework with nonlinear ARDL-type cointegration to augment the linear ARDL approach with smooth transitional nonlinearity. The proposed models provide a new approach to the relevant econometrics and environmental economics literature. Our results indicate the presence of asymmetric long-run and short-run relations between the analyzed variables, running from GDP towards CO2 emissions. With the newly proposed STARDL models, the results are in favor of important differences in the response of CO2 emissions in regimes 1 and 2 for the estimated LSTAR2DL and LSTARDL models.
Experimental tests of truncated diffusion in fault damage zones
NASA Astrophysics Data System (ADS)
Suzuki, Anna; Hashida, Toshiyuki; Li, Kewen; Horne, Roland N.
2016-11-01
Fault zones affect the flow paths of fluids in groundwater aquifers and geological reservoirs. Fault-related fracture damage decreases to background levels with increasing distance from the fault core according to a power law. This study investigated mass transport in such a fault-related structure using nonlocal models. A column flow experiment is conducted to create a permeability distribution that varies with distance from a main conduit. The experimental tracer response curve is preasymptotic and implies subdiffusive transport, which is slower than the normal Fickian diffusion. If the surrounding area is a finite domain, an upper truncated behavior in tracer response (i.e., exponential decline at late times) is observed. The tempered anomalous diffusion (TAD) model captures the transition from subdiffusive to Fickian transport, which is characterized by a smooth transition from power-law to an exponential decline in the late-time breakthrough curves.
An Optimization of Inventory Demand Forecasting in University Healthcare Centre
NASA Astrophysics Data System (ADS)
Bon, A. T.; Ng, T. K.
2017-01-01
The healthcare industry has become an important field nowadays as it concerns people's health. With that, forecasting demand for health services is an important step in managerial decision making for all healthcare organizations. Hence, a case study was conducted at a University Health Centre to collect historical demand data for Panadol 650mg over 68 months, from January 2009 until August 2014. The aim of the research is to optimize the overall inventory demand through forecasting techniques. Quantitative forecasting, or time series forecasting, models were used in the case study to forecast future data as a function of past data. The data pattern needs to be identified before applying the forecasting techniques; the pattern here is a trend. Ten forecasting techniques were then applied using the Risk Simulator software, and the best technique was identified as the one with the least forecasting error. The ten forecasting techniques are single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winters' additive, seasonal additive, Holt-Winters' multiplicative, seasonal multiplicative and Autoregressive Integrated Moving Average (ARIMA). According to the forecasting accuracy measurement, the best forecasting technique is regression analysis.
Elastic wave generated by granular impact on rough and erodible surfaces
NASA Astrophysics Data System (ADS)
Bachelet, Vincent; Mangeney, Anne; de Rosny, Julien; Toussaint, Renaud; Farin, Maxime
2018-01-01
The elastic waves generated by impactors hitting rough and erodible surfaces are studied. For this purpose, beads of variable materials, diameters, and velocities are dropped on (i) a smooth PMMA plate, (ii) stuck glass beads on the PMMA plate to create roughness, and (iii) the rough plate covered with layers of free particles to investigate erodible beds. The Hertz model validity to describe impacts on a smooth surface is confirmed. For rough and erodible surfaces, an empirical scaling law that relates the elastic energy to the radius Rb and normal velocity Vz of the impactor is deduced from experimental data. In addition, the radiated elastic energy is found to decrease exponentially with respect to the bed thickness. Lastly, we show that the variability of the elastic energy among shocks increases from some percents to 70% between smooth and erodible surfaces. This work is a first step to better quantify seismic emissions of rock impacts in natural environment, in particular on unconsolidated soils.
Non-extensive quantum statistics with particle-hole symmetry
NASA Astrophysics Data System (ADS)
Biró, T. S.; Shen, K. M.; Zhang, B. W.
2015-06-01
Based on the Tsallis entropy (1988) and the corresponding deformed exponential function, generalized distribution functions for bosons and fermions have been in use for a while (Teweldeberhan et al., 2003; Silva et al., 2010). However, aiming at a non-extensive quantum statistics, further requirements arise from the symmetric handling of particles and holes (excitations above and below the Fermi level). Naive replacements of the exponential function or "cut and paste" solutions fail to satisfy this symmetry and to be smooth at the Fermi level at the same time. We solve this problem by a general ansatz dividing the deformed exponential into odd and even terms, and we demonstrate how earlier suggestions, like the κ- and q-exponential, behave in this respect.
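For context, the standard Tsallis q-exponential and the generic even/odd split about the Fermi level that such an ansatz relies on (standard definitions, not equations quoted from this paper):

```latex
e_q(x) = \left[1 + (1-q)\,x\right]_+^{\frac{1}{1-q}},
\qquad
e(x) = \underbrace{\tfrac{1}{2}\bigl(e(x)+e(-x)\bigr)}_{\text{even part}}
     + \underbrace{\tfrac{1}{2}\bigl(e(x)-e(-x)\bigr)}_{\text{odd part}}.
```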
Exponential Family Functional data analysis via a low-rank model.
Li, Gen; Huang, Jianhua Z; Shen, Haipeng
2018-05-08
In many applications, non-Gaussian data such as binary or count are observed over a continuous domain and there exists a smooth underlying structure for describing such data. We develop a new functional data method to deal with this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution, and the matrix of the canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only the standard one-way functional data, but also two-way (or bivariate) functional data. In addition, we introduce a new cross validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to a real application of the UK mortality study, where data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern. © 2018, The International Biometric Society.
Upper arm circumference development in Chinese children and adolescents: a pooled analysis.
Tong, Fang; Fu, Tong
2015-05-30
Upper arm development in children differs among ethnic groups. There have been few reports on upper arm circumference (UAC) at different stages of development in children and adolescents in China. The purpose of this study was to provide a reference for growth, with a weighted assessment of the overall level of development. Using a pooled analysis, an authoritative journal database search, and reports of UAC, we created a new database of developmental measures in children. In conducting a weighted analysis, we compared reference values for 0-60 months of development against World Health Organization (WHO) statistics, considering gender and nationality, and used Z values as interval values for the second sampling to obtain an exponentially smoothed curve for analyzing the mean, standard deviation, and sites of attachment. Ten articles were included in the pooled analysis, with participants from different areas of China. The point of intersection with the WHO curve was 3.5 years, with higher values at earlier ages and lower values at older ages. The boys' curve was steeper after puberty. The curves from the included studies merged into a compatible line. The Z values after exponential smoothing showed that the curves were similar to those for body weight and had a right-shifted normal distribution. The integrated index of UAC in Chinese children and adolescents showed slight variation with region. Exponential curve smoothing was suitable for assessment at different developmental stages.
Exponentially damped Lévy flights, multiscaling, and exchange rates
NASA Astrophysics Data System (ADS)
Matsushita, Raul; Gleria, Iram; Figueiredo, Annibal; Rathie, Pushpa; Da Silva, Sergio
2004-02-01
We employ our previously suggested exponentially damped Lévy flight (Physica A 326 (2003) 544) to study the multiscaling properties of 30 daily exchange rates against the US dollar together with a fictitious euro-dollar rate (Physica A 286 (2000) 353). Though multiscaling is not theoretically seen in either stable Lévy processes or abruptly truncated Lévy flights, it is even characteristic of smoothly truncated Lévy flights (Phys. Lett. A 266 (2000) 282; Eur. Phys. J. B 4 (1998) 143). We have already defined a class of “quasi-stable” processes in connection with the finding that single scaling is pervasive among the dollar price of foreign currencies (Physica A 323 (2003) 601). Here we show that the same goes as far as multiscaling is concerned. Our novel findings incidentally reinforce the case for real-world relevance of the Lévy flights for modeling financial prices.
Optical coherence tomography assessment of vessel wall degradation in aneurysmatic thoracic aortas
NASA Astrophysics Data System (ADS)
Real, Eusebio; Eguizabal, Alma; Pontón, Alejandro; Val-Bernal, J. Fernando; Mayorga, Marta; Revuelta, José M.; López-Higuera, José; Conde, Olga M.
2013-06-01
Optical coherence tomography images of ascending thoracic human aortas from aneurysms exhibit disorders in the smooth muscle cell structure of the media layer of the aortic vessel, as well as elastin degradation. Ex-vivo measurements of human samples provide results that correlate with pathologists' diagnoses in aneurysmatic and control aortas. The observed disorders are studied as possible hallmarks for aneurysm diagnosis. To this end, the backscattering profile along the vessel thickness has been evaluated by fitting its decay against two different models: a third-order polynomial fit and an exponential fit. The discontinuities present in the vessel wall of aneurysmatic aortas are slightly better identified with the exponential approach. Aneurysmatic aortic walls present uneven reflectivity decay compared with healthy vessels. The fitting error proves the most favorable indicator for aneurysm diagnosis, as it provides a measure of how uniform the decay is along the vessel thickness.
Sodium 22+ washout from cultured rat cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kino, M.; Nakamura, A.; Hopp, L.
1986-10-01
The washout of Na⁺ isotopes from tissues and cells is quite complex and not well defined. To further gain insight into this process, we have studied ²²Na⁺ washout from cultured Wistar rat skin fibroblasts and vascular smooth muscle cells (VSMCs). In these preparations, ²²Na⁺ washout is described by a general three-exponential function. The exponential factor of the fastest component (k1) and the initial exchange rate constant (kie) of cultured fibroblasts decrease in magnitude in response to incubation in K⁺-deficient medium or in the presence of ouabain and increase in magnitude when the cells are incubated in a Ca²⁺-deficient medium. As the magnitude of the kie declines (in the presence of ouabain) to the level of the exponential factor of the middle component (k2), ²²Na⁺ washout is adequately described by a two-exponential function. When the kie is further diminished (in the presence of both ouabain and phloretin) to the range of the exponential factor of the slowest component (k3), the washout of ²²Na⁺ is apparently monoexponential. Calculations of the cellular Na⁺ concentrations, based on the ²²Na⁺ activity in the cells at the initiation of the washout experiments and the medium specific activity, agree with atomic absorption spectrometry measurements of the cellular concentration of this ion. Thus, all three components of ²²Na⁺ washout from cultured rat cells are of cellular origin. Using the exponential parameters, compartmental analyses of two models (in parallel and in series) with three cellular Na⁺ pools were performed. The results indicate that, independent of the model chosen, the relative size of the largest Na⁺ pool is 92-93% in fibroblasts and approximately 96% in VSMCs. This pool is most likely to represent the cytosol.
The Halo mass function from Excursion Set Theory. II. The Diffusing Barrier
NASA Astrophysics Data System (ADS)
Maggiore, Michele; Riotto, Antonio
2010-07-01
In excursion set theory, the computation of the halo mass function is mapped into a first-passage time process in the presence of a barrier, which in the spherical collapse model is a constant and in the ellipsoidal collapse model is a fixed function of the variance of the smoothed density field. However, N-body simulations show that dark matter halos grow through a mixture of smooth accretion, violent encounters, and fragmentations, and modeling halo collapse as spherical, or even as ellipsoidal, is a significant oversimplification. In addition, the very definition of what is a dark matter halo, both in N-body simulations and observationally, is a difficult problem. We propose that some of the physical complications inherent to a realistic description of halo formation can be included in the excursion set theory framework, at least at an effective level, by taking into account that the critical value for collapse is not a fixed constant δ_c, as in the spherical collapse model, nor a fixed function of the variance σ of the smoothed density field, as in the ellipsoidal collapse model, but rather is itself a stochastic variable, whose scatter reflects a number of complicated aspects of the underlying dynamics. Solving the first-passage time problem in the presence of a diffusing barrier we find that the exponential factor in the Press-Schechter mass function changes from exp(−δ_c²/2σ²) to exp(−aδ_c²/2σ²), where a = 1/(1 + D_B) and D_B is the diffusion coefficient of the barrier. The numerical value of D_B, and therefore the corresponding value of a, depends among other things on the algorithm used for identifying halos. We discuss the physical origin of the stochasticity of the barrier and, from recent N-body simulations that studied the properties of the collapse barrier, we deduce a value D_B ≈ 0.25. Our model then predicts a ≈ 0.80, in excellent agreement with the exponential fall off of the mass function found in N-body simulations, for the same halo definition. Combining this result with the non-Markovian corrections computed in Paper I of this series, we derive an analytic expression for the halo mass function for Gaussian fluctuations and we compare it with N-body simulations.
Small area population forecasting: some experience with British models.
Openshaw, S; Van Der Knaap, G A
1983-01-01
This study is concerned with the evaluation of the various models including time-series forecasts, extrapolation, and projection procedures, that have been developed to prepare population forecasts for planning purposes. These models are evaluated using data for the Netherlands. "As part of a research project at the Erasmus University, space-time population data has been assembled in a geographically consistent way for the period 1950-1979. These population time series are of sufficient length for the first 20 years to be used to build models and then evaluate the performance of the model for the next 10 years. Some 154 different forecasting models for 832 municipalities have been evaluated. It would appear that the best forecasts are likely to be provided by either a Holt-Winters model, or a ratio-correction model, or a low order exponential-smoothing model." excerpt
NASA Astrophysics Data System (ADS)
Schaefer, Bradley E.; Dyson, Samuel E.
1996-08-01
A common gamma-ray burst light-curve shape is the "FRED" or "fast-rise exponential-decay." But how exponential is the tail? Are they merely decaying with some smoothly decreasing decline rate, or is the functional form an exponential to within the uncertainties? If the shape really is an exponential, then it would be reasonable to assign some physically significant time scale to the burst. That is, there would have to be some specific mechanism that produces the characteristic decay profile. So if an exponential is found, then we will know that the decay light curve profile is governed by one mechanism (at least for simple FREDs) instead of by complex/multiple mechanisms. As such, a specific number amenable to theory can be derived for each FRED. We report on the fitting of exponentials (and two other shapes) to the tails of ten bright BATSE bursts. The BATSE trigger numbers are 105, 257, 451, 907, 1406, 1578, 1883, 1885, 1989, and 2193. Our technique was to perform a least-squares fit to the tail from some time after the peak until the light curve approaches background. We find that most FREDs are not exponentials, although a few come close. But since the other candidate shapes come close just as often, we conclude that the FREDs are misnamed.
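A sketch of the kind of least-squares tail fit described: an exponential decay plus a fixed background, fitted over a post-peak window (the function name, starting values, and background handling are illustrative, not the authors'):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_exponential_tail(t, counts, bg=0.0):
    """Fit A * exp(-t / tau) + bg to a burst tail; return parameters and SSE."""
    model = lambda t, A, tau: A * np.exp(-t / tau) + bg
    (A, tau), _ = curve_fit(model, t, counts, p0=(counts.max(), t.mean()))
    resid = counts - model(t, A, tau)
    return A, tau, np.sum(resid ** 2)   # amplitude, e-folding time, SSE
```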
A simulation-based approach for solving assembly line balancing problem
NASA Astrophysics Data System (ADS)
Wu, Xiaoyu
2017-09-01
The assembly line balancing problem is directly related to production efficiency; the problem has been discussed since the last century, and many researchers are still studying this topic. In this paper, the assembly line problem is studied by establishing a mathematical model and simulation. First, the model for determining the smallest production beat under a given number of work stations is analyzed. Based on this model, the exponential smoothing approach is applied to improve the algorithm's efficiency. After this basic work, the gas Stirling engine assembly line balancing problem is discussed as a case study. Both algorithms are implemented in the Lingo programming environment, and the simulation results demonstrate the validity of the new methods.
Three-dimensional Structure of the Milky Way Dust: Modeling of LAMOST Data
NASA Astrophysics Data System (ADS)
Li, Linlin; Shen, Shiyin; Hou, Jinliang; Yuan, Haibo; Xiang, Maosheng; Chen, Bingqiu; Huang, Yang; Liu, Xiaowei
2018-05-01
We present a three-dimensional modeling of the Milky Way dust distribution by fitting the value-added star catalog of the LAMOST spectral survey. The global dust distribution can be described by an exponential disk with a scale length of 3192 pc and a scale height of 103 pc. In this modeling, the Sun is located above the dust disk at a vertical distance of 23 pc. Besides the global smooth structure, two substructures around the solar position are also identified. The one located at 150° < l < 200° and -30° < b < -5° is consistent with the Gould Belt model of Gontcharov, and the other one located at 140° < l < 165° and 0° < b < 15° is associated with the Camelopardalis molecular clouds.
VizieR Online Data Catalog: HARPS timeseries data for HD41248 (Jenkins+, 2014)
NASA Astrophysics Data System (ADS)
Jenkins, J. S.; Tuomi, M.
2017-05-01
We modeled the HARPS radial velocities of HD 41248 by adopting the analysis techniques and the statistical model applied in Tuomi et al. (2014, arXiv:1405.2016). This model contains Keplerian signals, a linear trend, a moving average component with exponential smoothing, and linear correlations with activity indices, namely, BIS, FWHM, and the chromospheric activity S index. We applied our statistical model outlined above to the full data set of radial velocities for HD 41248, combining the previously published data in Jenkins et al. (2013ApJ...771...41J) with the newly published data in Santos et al. (2014, J/A+A/566/A35), giving rise to a total time series of 223 HARPS (Mayor et al. 2003Msngr.114...20M) velocities. (1 data file).
Kim, Keonwook
2013-08-23
The generic properties of an acoustic signal provide numerous benefits for localization by applying energy-based methods over a deployed wireless sensor network (WSN). However, the signal generated by a stationary target utilizes a significant amount of bandwidth and power in the system without providing further position information. For vehicle localization, this paper proposes a novel proximity velocity vector estimator (PVVE) node architecture in order to capture the energy from a moving vehicle and reject the signal from motionless automobiles around the WSN node. A cascade structure between analog envelope detector and digital exponential smoothing filter presents the velocity vector-sensitive output with low analog circuit and digital computation complexity. The optimal parameters in the exponential smoothing filter are obtained by analytical and mathematical methods for maximum variation over the vehicle speed. For stationary targets, the derived simulation based on the acoustic field parameters demonstrates that the system significantly reduces the communication requirements with low complexity and can be expected to extend the operation time considerably.
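At its core, the digital exponential smoothing filter in the PVVE cascade is a first-order recursive filter. A minimal Python sketch is given below; the smoothing constant and the synthetic envelope signal are illustrative assumptions, not values from the paper.

```python
import numpy as np

def exponential_smoothing_filter(x, alpha):
    """First-order exponential smoothing: y[n] = alpha*x[n] + (1-alpha)*y[n-1].

    alpha in (0, 1]: larger values track the input faster but smooth less."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]  # initialize with the first sample
    for n in range(1, len(x)):
        y[n] = alpha * x[n] + (1.0 - alpha) * y[n - 1]
    return y

# Example: smooth a noisy envelope-detector output (synthetic)
rng = np.random.default_rng(0)
envelope = np.exp(-np.linspace(0.0, 3.0, 300)) + 0.05 * rng.standard_normal(300)
smoothed = exponential_smoothing_filter(envelope, alpha=0.1)
```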
Knowing what to expect, forecasting monthly emergency department visits: A time-series analysis.
Bergs, Jochen; Heerinckx, Philipe; Verelst, Sandra
2014-04-01
To evaluate an automatic forecasting algorithm in order to predict the number of monthly emergency department (ED) visits one year ahead. We collected retrospective data of the number of monthly visiting patients for a 6-year period (2005-2011) from 4 Belgian Hospitals. We used an automated exponential smoothing approach to predict monthly visits during the year 2011 based on the first 5 years of the dataset. Several in- and post-sample forecasting accuracy measures were calculated. The automatic forecasting algorithm was able to predict monthly visits with a mean absolute percentage error ranging from 2.64% to 4.8%, indicating an accurate prediction. The mean absolute scaled error ranged from 0.53 to 0.68 indicating that, on average, the forecast was better compared with in-sample one-step forecast from the naïve method. The applied automated exponential smoothing approach provided useful predictions of the number of monthly visits a year in advance. Copyright © 2013 Elsevier Ltd. All rights reserved.
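The abstract does not name the software used for automatic model selection; the following sketch shows how a comparable automated exponential smoothing forecast of monthly visits could be set up with statsmodels (the additive trend/seasonality configuration and the synthetic training data are assumptions).

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic stand-in for five years (60 months) of training data, 2005-2010
rng = np.random.default_rng(1)
monthly_visits = 3000 + 50 * (np.arange(60) % 12) + rng.normal(0, 100, 60)

# Smoothing parameters are optimized automatically by .fit()
model = ExponentialSmoothing(
    monthly_visits, trend="add", seasonal="add", seasonal_periods=12
).fit()
forecast_2011 = model.forecast(12)  # twelve monthly forecasts, one year ahead
```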
Amplitude, Latency, and Peak Velocity in Accommodation and Disaccommodation Dynamics
Papadatou, Eleni; Ferrer-Blasco, Teresa; Montés-Micó, Robert
2017-01-01
The aim of this work was to ascertain whether there are differences in the amplitude, latency, and peak velocity of accommodation and disaccommodation responses when different analysis strategies are used to compute them, such as fitting different functions to the responses or smoothing them prior to computing the parameters. Accommodation and disaccommodation responses from four subjects to pulse changes in demand were recorded by means of aberrometry. Three different strategies were followed to analyze such responses: fitting an exponential function to the experimental data; fitting a Boltzmann sigmoid function to the data; and smoothing the data. The amplitude, latency, and peak velocity of the responses were extracted. Significant differences were found between the peak velocity in accommodation computed by fitting an exponential function and by smoothing the experimental data (mean difference 2.36 D/s). Regarding disaccommodation, significant differences were found in latency and peak velocity calculated with the same two strategies (mean differences of 0.15 s and -3.56 D/s, respectively). The strategy used to analyze accommodation and disaccommodation responses seems to affect the parameters that describe accommodation and disaccommodation dynamics. These results highlight the importance of choosing the most adequate analysis strategy for each individual to obtain the parameters that characterize accommodation and disaccommodation dynamics. PMID:29226128
Amplitude Scintillation due to Atmospheric Turbulence for the Deep Space Network Ka-Band Downlink
NASA Technical Reports Server (NTRS)
Ho, C.; Wheelon, A.
2004-01-01
Fast amplitude variations due to atmospheric scintillation are the main concerns for the Deep Space Network (DSN) Ka-band downlink under clear weather conditions. A theoretical study of the amplitude scintillation variances for a finite aperture antenna is presented. Amplitude variances for weak scattering scenarios are examined using turbulence theory to describe atmospheric irregularities. We first apply the Kolmogorov turbulent spectrum to a point receiver for three different turbulent profile models, especially for an exponential model varying with altitude. These analytic solutions then are extended to a receiver with a finite aperture antenna for the three profile models. Smoothing effects of antenna aperture are expressed by gain factors. A group of scaling factor relations is derived to show the dependences of amplitude variances on signal wavelength, antenna size, and elevation angle. Finally, we use these analytic solutions to estimate the scintillation intensity for a DSN Goldstone 34-m receiving station. We find that the (rms) amplitude fluctuation is 0.13 dB at 20-deg elevation angle for an exponential model, while the fluctuation is 0.05 dB at 90 deg. These results will aid us in telecommunication system design and signal-fading prediction. They also provide a theoretical basis for further comparison with other measurements at Ka-band.
All-sky monitor observations of the decay of A0620-00 (Nova monocerotis 1975)
NASA Technical Reports Server (NTRS)
Kaluzienski, L. J.; Holt, S. S.; Boldt, E. A.; Serlemitsos, P. J.
1976-01-01
The All-Sky X-ray Monitor onboard Ariel 5 has observed the 3-6 keV decline of the bright transient X-ray source A0620-00 on a virtually continuous basis during the period September 1975 - March 1976. The source behavior on timescales ≳100 minutes is characterized by smooth, exponential decays interrupted by substantial increases in October and February. The latter increase was an order-of-magnitude rise above the extrapolated exponential fall-off, and was followed by a final rapid decline. Upper limits of 2.5% and 10% were found for any periodicities in the range 0.2-10 days during the early and later decay phases, respectively. A probable correlation between the optical and 3-6 keV emission from A0620-00 was noted, effectively ruling out models involving traditional optical novae in favor of Roche-lobe overflow in a binary system. The existing data on the transient X-ray sources are consistent with two distinct luminosity-lifetime classes of these objects.
Intermittent Demand Forecasting in a Tertiary Pediatric Intensive Care Unit.
Cheng, Chen-Yang; Chiang, Kuo-Liang; Chen, Meng-Yin
2016-10-01
Forecasts of the demand for medical supplies both directly and indirectly affect the operating costs and the quality of the care provided by health care institutions. Specifically, overestimating demand induces an inventory surplus, whereas underestimating demand possibly compromises patient safety. Uncertainty in forecasting the consumption of medical supplies generates intermittent demand events. The intermittent demand patterns for medical supplies are generally classified as lumpy, erratic, smooth, and slow-moving demand. This study was conducted with the purpose of advancing a tertiary pediatric intensive care unit's efforts to achieve a high level of accuracy in its forecasting of the demand for medical supplies. On this point, several demand forecasting methods were compared in terms of the forecast accuracy of each. The results confirm that applying Croston's method combined with a single exponential smoothing method yields the most accurate results for forecasting lumpy, erratic, and slow-moving demand, whereas the Simple Moving Average (SMA) method is the most suitable for forecasting smooth demand. In addition, when the classification of demand consumption patterns were combined with the demand forecasting models, the forecasting errors were minimized, indicating that this classification framework can play a role in improving patient safety and reducing inventory management costs in health care institutions.
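Croston's method, referenced above, separately smooths the nonzero demand sizes and the intervals between them. A minimal sketch follows; the smoothing constant and the demand history are illustrative, not values from the study.

```python
def croston(demand, alpha=0.2):
    """Croston's method: single exponential smoothing applied separately to
    nonzero demand sizes (z) and inter-demand intervals (p); the per-period
    forecast is z_hat / p_hat."""
    z_hat = p_hat = None
    interval = 1
    for d in demand:
        if d > 0:
            if z_hat is None:  # initialize at the first nonzero demand
                z_hat, p_hat = float(d), float(interval)
            else:
                z_hat += alpha * (d - z_hat)
                p_hat += alpha * (interval - p_hat)
            interval = 1
        else:
            interval += 1
    return z_hat / p_hat if z_hat is not None else 0.0

# Example: lumpy demand history for one medical supply item (illustrative)
history = [0, 0, 5, 0, 0, 0, 12, 0, 3, 0, 0, 7]
print(croston(history))  # expected demand per period
```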
Adaptive regularization of the NL-means: application to image and video denoising.
Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François
2014-08-01
Image denoising is a central problem in image processing and it is often a necessary step prior to higher level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image; they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This reduces significantly the noise while preserving most of the image content. While it performs well on flat areas and textures, it suffers from two opposite drawbacks: it might over-smooth low-contrasted areas or leave a residual noise around edges and singular structures. Denoising can also be performed by total variation minimization-the Rudin, Osher and Fatemi model-which leads to restore regular images, but it is prone to over-smooth textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defaults by minimizing an adaptive total variation with a nonlocal data fidelity term. Besides, this model adapts to different noise statistics and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and we adapt it to video denoising with 3D patches.
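The NL-means weighting that the paper regularizes can be summarized in a few lines: each pixel is replaced by an average of pixels whose patches are similar, with weights decaying exponentially in the squared patch distance. A simplified sketch (patch extraction and the search window are omitted; h is the filtering parameter, and the synthetic patches are assumptions):

```python
import numpy as np

def nlmeans_estimate(patches, center_idx, h):
    """One NL-means pixel estimate: weighted average of patch-center pixels,
    with weights decaying exponentially in the squared patch distance."""
    d2 = np.sum((patches - patches[center_idx]) ** 2, axis=1)
    w = np.exp(-d2 / (h * h))  # exponential kernel; w[center_idx] == 1
    w /= w.sum()
    mid = patches.shape[1] // 2  # index of the central pixel in each patch
    return float(np.dot(w, patches[:, mid]))

# Example: 100 flattened 5x5 patches drawn from a search window (synthetic)
rng = np.random.default_rng(0)
patches = rng.normal(0.5, 0.1, size=(100, 25))
denoised_pixel = nlmeans_estimate(patches, center_idx=0, h=0.3)
```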
Cross-bridge elasticity in single smooth muscle cells
1983-01-01
In smooth muscle, a cross-bridge mechanism is believed to be responsible for active force generation and fiber shortening. In the present studies, the viscoelastic and kinetic properties of the cross-bridge were probed by eliciting tension transients in response to small, rapid, step length changes (ΔL = 0.3-1.0% L_cell in 2 ms). Tension transients were obtained in a single smooth muscle cell isolated from the toad (Bufo marinus) stomach muscularis, which was tied between a force transducer and a displacement device. To record the transients, which were of extremely small magnitude (0.1 microN), a high-frequency (400 Hz), ultrasensitive force transducer (18 mV/microN) was designed and built. The transients obtained during maximal force generation (F_max = 2.26 microN) were characterized by a linear elastic response (E_max = 1.26 × 10^4 mN/mm^2) coincident with the length step, which was followed by a biphasic tension recovery made up of two exponentials (τ_fast = 5-20 ms, τ_slow = 50-300 ms). During the development of force upon activation, transients were elicited. The relationship between stiffness and force was linear, which suggests that the transients originate within the cross-bridge and reflect the cross-bridge's viscoelastic and kinetic properties. The observed fiber elasticity suggests that the smooth muscle cross-bridge is considerably more compliant than in fast striated muscle. A thermodynamic model is presented that allows for an analysis of the factors contributing to the increased compliance of the smooth muscle cross-bridge. PMID:6413640
Decay of random correlation functions for unimodal maps
NASA Astrophysics Data System (ADS)
Baladi, Viviane; Benedicks, Michael; Maume-Deschamps, Véronique
2000-10-01
Since the pioneering results of Jakobson and subsequent work by Benedicks-Carleson and others, it is known that quadratic maps f_a(χ) = a - χ² admit a unique absolutely continuous invariant measure for a positive measure set of parameters a. For topologically mixing f_a, Young and Keller-Nowicki independently proved exponential decay of correlation functions for this a.c.i.m. and smooth observables. We consider random compositions of small perturbations f + ω_t, with f = f_a or another unimodal map satisfying certain nonuniform hyperbolicity axioms, and ω_t chosen independently and identically in [-ε, ε]. Baladi-Viana showed exponential mixing of the associated Markov chain, i.e., averaging over all random itineraries. We obtain stretched exponential bounds for the random correlation functions of Lipschitz observables for the sample measure μ_ω of almost every itinerary.
Forecasting in foodservice: model development, testing, and evaluation.
Miller, J L; Thompson, P A; Orabella, M M
1991-05-01
This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spread-sheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
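As a rough illustration of the customer-count model described above, the following sketch deseasonalizes a series with additive indices, applies single exponential smoothing, and reseasonalizes the one-step forecast. The weekly period, the smoothing constant, and the counts are assumptions, not the study's settings.

```python
import numpy as np

def deseasonalized_ses(y, period=7, alpha=0.3):
    """Forecast one step ahead: strip additive seasonal indices, smooth the
    deseasonalized series, then add the next period's index back."""
    y = np.asarray(y, dtype=float)
    idx = np.array([y[i::period].mean() for i in range(period)])
    idx -= idx.mean()  # center the additive seasonal indices
    deseason = y - idx[np.arange(len(y)) % period]
    level = deseason[0]
    for v in deseason[1:]:
        level = alpha * v + (1 - alpha) * level
    return level + idx[len(y) % period]

# Example: four weeks of daily customer counts (illustrative)
counts = [520, 480, 500, 510, 530, 610, 640] * 4
print(deseasonalized_ses(counts))
```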
The Prediction of Teacher Turnover Employing Time Series Analysis.
ERIC Educational Resources Information Center
Costa, Crist H.
The purpose of this study was to combine knowledge of teacher demographic data with time-series forecasting methods to predict teacher turnover. Moving averages and exponential smoothing were used to forecast discrete time series. The study used data collected from the 22 largest school districts in Iowa, designated as FACT schools. Predictions…
Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi
2017-08-01
The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), the crude oil price and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and directional accuracy (DA). Among these three techniques, support vector regression (SVR) with the Sequential Minimal Optimization (SMO) algorithm showed relatively better results compared with the multi-layer perceptron and the Holt-Winters exponential smoothing method.
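As one plausible reading of the SVR setup (the paper does not spell out its feature construction), an RBF-kernel SVR can be fit on lagged CPO prices plus the exogenous series; scikit-learn's SVR is built on libsvm, which uses an SMO-type solver. The lag depth and hyperparameters below are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def svr_one_step_forecast(prices, exog, n_lags=3):
    """Fit an RBF-kernel SVR on lagged CPO prices plus exogenous regressors
    (other oil prices, exchange rate) and forecast one step ahead."""
    X, y = [], []
    for t in range(n_lags, len(prices)):
        X.append(np.concatenate([prices[t - n_lags:t], exog[t]]))
        y.append(prices[t])
    model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(np.array(X), np.array(y))
    x_next = np.concatenate([prices[-n_lags:], exog[-1]])
    return model.predict(x_next.reshape(1, -1))[0]
```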
Penalized nonparametric scalar-on-function regression via principal coordinates
Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu
2016-01-01
A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
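A bare-bones version of principal coordinate ridge regression can be written directly: double-center the squared distance matrix, take the leading eigenvectors as coordinates, and ridge-regress the response on them. This sketch fixes the rank and penalty by hand, whereas the paper's implementation tunes them automatically via generalized additive modeling software.

```python
import numpy as np
from sklearn.linear_model import Ridge

def principal_coordinate_ridge(D, y, n_coords=10, penalty=1.0):
    """Ridge regression of y on the leading principal coordinates of a
    distance matrix D among functional predictors."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)             # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_coords]  # keep the largest
    coords = vecs[:, order] * np.sqrt(np.clip(vals[order], 0.0, None))
    model = Ridge(alpha=penalty).fit(coords, y)
    return model, coords
```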
Artificial Neural Network versus Linear Models Forecasting Doha Stock Market
NASA Astrophysics Data System (ADS)
Yousif, Adil; Elfaki, Faiz
2017-12-01
The purpose of this study is to determine the instability of the Doha stock market and develop forecasting models. Linear time series models are used and compared with a nonlinear Artificial Neural Network (ANN), namely the Multilayer Perceptron (MLP) technique. The study aims to establish the most useful model based on daily and monthly data collected from the Qatar exchange for the period from January 2007 to January 2015. Models are proposed for the general index of the Qatar stock exchange and for several other sectors. With the help of these models, the Doha stock market index and the various sector indices were predicted. The study was conducted using various time series techniques to study and analyze data trends in producing appropriate results. After applying several models, such as the quadratic trend model, the double exponential smoothing model, and ARIMA, it was concluded that ARIMA (2,2) was the most suitable linear model for the daily general index. However, the ANN model was found to be more accurate than the time series models.
Modeling Day-to-day Flow Dynamics on Degradable Transport Network
Gao, Bo; Zhang, Ronghui; Lou, Xiaoming
2016-01-01
Stochastic link capacity degradations are common phenomena in transport networks; they can cause travel time variations and in turn affect travelers' daily route choice behaviors. This paper formulates a deterministic dynamic model to capture the day-to-day (DTD) flow evolution process in the presence of stochastic link capacity degradations. The aggregated network flow dynamics are driven by travelers' learning of uncertain travel times and their choices among risky routes. This paper applies the exponential-smoothing filter to describe travelers' learning of travel time variations, and formulates a risk attitude parameter updating equation to reflect travelers' endogenous risk attitude evolution. In addition, this paper conducts theoretical analyses to investigate several significant mathematical characteristics implied in the proposed DTD model, including fixed point existence, uniqueness, stability and irreversibility. Numerical experiments are used to demonstrate the effectiveness of the DTD model and verify some important dynamic system properties. PMID:27959903
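The exponential-smoothing learning filter at the heart of the DTD model is a one-line update; a sketch with a hypothetical learning weight theta follows (the risk-attitude updating equation is omitted).

```python
def update_perceived_time(perceived, experienced, theta=0.3):
    """Day-to-day exponential-smoothing learning filter: tomorrow's perceived
    travel time is a convex combination of today's perception and today's
    experienced travel time."""
    return (1.0 - theta) * perceived + theta * experienced

# Example: a traveler revises a 30-minute perception after a 40-minute trip
print(update_perceived_time(30.0, 40.0))  # 33.0
```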
Comparing Exponential and Exponentiated Models of Drug Demand in Cocaine Users
Strickland, Justin C.; Lile, Joshua A.; Rush, Craig R.; Stoops, William W.
2016-01-01
Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model, but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use), whereas these correlations were less consistent for exponential parameters. Our findings show that the selection of zero replacement values affects demand parameters and their association with drug-use outcomes when using the exponential model, but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency, in addition to demonstrating construct validity and generalizability. PMID:27929347
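For concreteness, the exponentiated demand curve is commonly written as Q(C) = Q0 · 10^(k(e^(-αQ0C) - 1)), which models consumption itself rather than log-consumption and therefore tolerates zero-consumption prices. A hedged fitting sketch with synthetic data (k is fixed here purely for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def exponentiated_demand(price, q0, alpha, k=2.0):
    """Exponentiated demand: consumption itself (not log-consumption) is
    modeled, so zero-consumption prices can be retained.
    Q(C) = Q0 * 10**(k * (exp(-alpha * Q0 * C) - 1))"""
    return q0 * 10.0 ** (k * (np.exp(-alpha * q0 * price) - 1.0))

prices = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
consumption = np.array([10.0, 9.5, 8.0, 6.0, 2.5, 0.5, 0.0])  # zeros kept
(q0, alpha), _ = curve_fit(
    exponentiated_demand, prices, consumption, p0=[10.0, 0.01]
)
print(q0, alpha)  # demand intensity Q0 and elasticity parameter alpha
```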
Forecasting daily patient volumes in the emergency department.
Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L
2008-02-01
Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
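The benchmark the authors endorse, multiple linear regression on calendar variables, reduces to regressing daily counts on day-of-week and month dummies. A minimal sketch with synthetic arrivals (site-specific special-day effects and residual autocorrelation, which the authors recommend adding, are omitted):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative daily ED arrivals indexed by date
dates = pd.date_range("2005-01-01", "2007-03-31", freq="D")
rng = np.random.default_rng(2)
y = 200 + 15 * (dates.dayofweek == 0) + rng.normal(0, 10, len(dates))

# Calendar-variable design matrix: day-of-week and month dummies
X = pd.get_dummies(
    pd.DataFrame({"dow": dates.dayofweek, "month": dates.month}).astype("category"),
    drop_first=True,
).astype(float)
X = sm.add_constant(X)
fit = sm.OLS(y, X).fit()  # the regression benchmark
pred = fit.predict(X)     # in-sample fitted daily volumes
```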
Failure prediction using machine learning and time series in optical network.
Wang, Zhilong; Zhang, Min; Wang, Danshi; Song, Chuang; Liu, Min; Li, Jin; Lou, Liqi; Liu, Zhuo
2017-08-07
In this paper, we propose a performance monitoring and failure prediction method in optical networks based on machine learning. The primary algorithms of this method are the support vector machine (SVM) and double exponential smoothing (DES). With a focus on risk-aware models in optical networks, the proposed protection plan primarily investigates how to predict the risk of an equipment failure. To the best of our knowledge, this important problem has not yet been fully considered. Experimental results showed that the average prediction accuracy of our method was 95% when predicting the optical equipment failure state. This finding means that our method can forecast an equipment failure risk with high accuracy. Therefore, our proposed DES-SVM method can effectively improve traditional risk-aware models to protect services from possible failures and enhance the optical network stability.
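The DES half of the proposed DES-SVM method extrapolates a trend from two cascaded smoothing passes (Brown's formulation). A minimal sketch, with the SVM classification stage omitted and the input series invented for illustration:

```python
def double_exponential_smoothing(x, alpha, m=1):
    """Brown's double exponential smoothing: two cascaded smoothing passes
    give level and trend estimates; forecast m steps ahead."""
    s1 = s2 = float(x[0])
    for v in x[1:]:
        s1 = alpha * v + (1.0 - alpha) * s1
        s2 = alpha * s1 + (1.0 - alpha) * s2
    level = 2.0 * s1 - s2
    trend = alpha / (1.0 - alpha) * (s1 - s2)
    return level + m * trend

# Example: a slowly degrading equipment performance metric (illustrative)
metric = [10.2, 10.8, 11.1, 11.7, 12.3, 12.6, 13.4]
print(double_exponential_smoothing(metric, alpha=0.5, m=3))  # 3 steps ahead
```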
Inventory control of raw material using silver meal heuristic method in PR. Trubus Alami Malang
NASA Astrophysics Data System (ADS)
Ikasari, D. M.; Lestari, E. R.; Prastya, E.
2018-03-01
The purpose of this study was to compare the total inventory cost calculated using the method applied by PR. Trubus Alami with that of the Silver-Meal Heuristic (SMH) method. The study started by forecasting cigarette demand from July 2016 to June 2017 (48 weeks) using the additive decomposition forecasting method. Additive decomposition was used because it had the lowest Mean Absolute Deviation (MAD) and Mean Squared Deviation (MSD) compared with other methods such as multiplicative decomposition, moving average, single exponential smoothing, and double exponential smoothing. The forecasting results were then converted into raw material needs and further processed with the SMH method to obtain the inventory cost. As expected, the results show that the order frequency under the SMH method was smaller than under the method applied by Trubus Alami, which affected the total inventory cost. The results suggest that the SMH method gave a 29.41% lower inventory cost, a difference of IDR 21,290,622. The findings therefore indicate that PR. Trubus Alami should apply the SMH method if the company wants to reduce its total inventory cost.
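The SMH lot-sizing rule extends each order over future periods for as long as the average cost per period keeps falling. A sketch with illustrative setup and holding costs (the demand numbers are made up, not the company's data):

```python
def silver_meal(demand, setup_cost, holding_cost):
    """Silver-Meal heuristic: extend each order to cover future periods as
    long as the average cost per period keeps decreasing."""
    orders, t = [], 0
    while t < len(demand):
        best_T, prev_avg = 1, setup_cost  # covering one period averages K
        hold = 0.0
        for T in range(2, len(demand) - t + 1):
            hold += holding_cost * (T - 1) * demand[t + T - 1]
            avg = (setup_cost + hold) / T
            if avg > prev_avg:            # average cost started rising: stop
                break
            best_T, prev_avg = T, avg
        orders.append((t, sum(demand[t:t + best_T])))  # (period, lot size)
        t += best_T
    return orders

# Example: weekly raw-material requirements (illustrative numbers)
print(silver_meal([20, 50, 10, 50, 50, 10], setup_cost=100.0, holding_cost=1.0))
```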
Galaxy Zoo: evidence for diverse star formation histories through the green valley
NASA Astrophysics Data System (ADS)
Smethurst, R. J.; Lintott, C. J.; Simmons, B. D.; Schawinski, K.; Marshall, P. J.; Bamford, S.; Fortson, L.; Kaviraj, S.; Masters, K. L.; Melvin, T.; Nichol, R. C.; Skibba, R. A.; Willett, K. W.
2015-06-01
Does galaxy evolution proceed through the green valley via multiple pathways or as a single population? Motivated by recent results highlighting radically different evolutionary pathways between early- and late-type galaxies, we present results from a simple Bayesian approach to this problem wherein we model the star formation history (SFH) of a galaxy with two parameters, [t, τ], and compare the predicted and observed optical and near-ultraviolet colours. We use a novel method to investigate the morphological differences between the most probable SFHs for both disc-like and smooth-like populations of galaxies, by using a sample of 126 316 galaxies (0.01 < z < 0.25) with probabilistic estimates of morphology from Galaxy Zoo. We find a clear difference between the quenching time-scales preferred by smooth- and disc-like galaxies, with three possible routes through the green valley dominated by smooth- (rapid time-scales, attributed to major mergers), intermediate- (intermediate time-scales, attributed to minor mergers and galaxy interactions) and disc-like (slow time-scales, attributed to secular evolution) galaxies. We hypothesize that morphological changes occur in systems which have undergone quenching with an exponential time-scale τ < 1.5 Gyr, in order for the evolution of galaxies in the green valley to match the ratio of smooth to disc galaxies observed in the red sequence. These rapid time-scales are instrumental in the formation of the red sequence at earlier times; however, we find that galaxies currently passing through the green valley typically do so at intermediate time-scales.
Klimarev, S I
2003-01-01
A waveguide SHF plasmotron was chosen for carbon dioxide and hydrogen recycling in a low-temperature plasma in the Bosch reactor. To increase electric intensity within the discharge capacitor, the thickness of the waveguide thin wall was changed to 10 mm. A method is proposed for calculating the compensated exponential smooth transition that aligns two similar lines (waveguides) with sections of 72 x 34 mm and 72 x 10 mm to transfer SHF energy from the generator to the plasma. The calculation of the smooth transition has been used in the final refinement of the SHF plasmotron design as a component of a physical-chemical LSS.
Zhan, Tingting; Chevoneva, Inna; Iglewicz, Boris
2010-01-01
The family of weighted likelihood estimators largely overlaps with minimum divergence estimators. They are robust to data contamination compared with the MLE. We define the class of generalized weighted likelihood estimators (GWLE), provide its influence function and discuss the efficiency requirements. We introduce a new truncated cubic-inverse weight, which is both first and second order efficient and more robust than previously reported weights. We also discuss new ways of selecting the smoothing bandwidth and weighted starting values for the iterative algorithm. The advantage of the truncated cubic-inverse weight is illustrated in a simulation study of a three-component normal mixture model with large overlaps and heavy contamination. A real data example is also provided. PMID:20835375
NASA Astrophysics Data System (ADS)
Challamel, Noël
2018-04-01
The static and dynamic behaviour of a nonlocal bar of finite length is studied in this paper. The nonlocal integral models considered in this paper are strain-based and relative displacement-based nonlocal models; the latter one is also labelled as a peridynamic model. For infinite media, and for sufficiently smooth displacement fields, both integral nonlocal models can be equivalent, assuming some kernel correspondence rules. For infinite media (or finite media with extended reflection rules), it is also shown that Eringen's differential model can be reformulated into a consistent strain-based integral nonlocal model with exponential kernel, or into a relative displacement-based integral nonlocal model with a modified exponential kernel. A finite bar in uniform tension is considered as a paradigmatic static case. The strain-based nonlocal behaviour of this bar in tension is analyzed for different kernels available in the literature. It is shown that the kernel has to fulfil some normalization and end compatibility conditions in order to preserve the uniform strain field associated with this homogeneous stress state. Such a kernel can be built by combining a local and a nonlocal strain measure with compatible boundary conditions, or by extending the domain outside its finite size while preserving some kinematic compatibility conditions. The same results are shown for the nonlocal peridynamic bar where a homogeneous strain field is also analytically obtained in the elastic bar for consistent compatible kinematic boundary conditions at the vicinity of the end conditions. The results are extended to the vibration of a fixed-fixed finite bar where the natural frequencies are calculated for both the strain-based and the peridynamic models.
NASA Astrophysics Data System (ADS)
Huang, Rui; Jin, Chunhua; Mei, Ming; Yin, Jingxue
2018-06-01
This paper deals with the existence and stability of traveling wave solutions for a degenerate reaction-diffusion equation with time delay. The degeneracy of spatial diffusion together with the effect of time delay causes essential difficulty for the existence of the traveling waves and their stability. In order to treat this case, we first show the existence of smooth- and sharp-type traveling wave solutions in the case of c ≥ c* for the degenerate reaction-diffusion equation without delay, where c* > 0 is the critical wave speed of smooth traveling waves. Then, as a small perturbation, we obtain the existence of the smooth non-critical traveling waves for the degenerate diffusion equation with small time delay τ > 0. Furthermore, we prove the global existence and uniqueness of the C^{α,β}-solution to the time-delayed degenerate reaction-diffusion equation via compactness analysis. Finally, by the weighted energy method, we prove that the smooth non-critical traveling wave is globally stable in the weighted L^1-space. The exponential convergence rate is also derived.
Nonlinear analogue of the May−Wigner instability transition
Fyodorov, Yan V.; Khoruzhenko, Boris A.
2016-01-01
We study a system of N≫1 degrees of freedom coupled via a smooth homogeneous Gaussian vector field with both gradient and divergence-free components. In the absence of coupling, the system is exponentially relaxing to an equilibrium with rate μ. We show that, while increasing the ratio of the coupling strength to the relaxation rate, the system experiences an abrupt transition from a topologically trivial phase portrait with a single equilibrium into a topologically nontrivial regime characterized by an exponential number of equilibria, the vast majority of which are expected to be unstable. It is suggested that this picture provides a global view on the nature of the May−Wigner instability transition originally discovered by local linear stability analysis. PMID:27274077
Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data
Young, Alistair A.; Li, Xiaosong
2014-01-01
Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely, two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVMs outperform the ARIMA model and decomposition methods in most cases. PMID:24505382
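The three evaluation metrics are straightforward to compute; a small helper in the spirit of the comparison above (MAPE assumes no zero values in the actuals, and the sample numbers are illustrative):

```python
import numpy as np

def forecast_errors(actual, predicted):
    """MAE, MAPE (percent), and MSE for a forecast comparison."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    return {
        "MAE": float(np.mean(np.abs(err))),
        "MAPE": float(100.0 * np.mean(np.abs(err / actual))),
        "MSE": float(np.mean(err ** 2)),
    }

print(forecast_errors([120, 150, 90], [110, 160, 95]))
```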
Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.
2013-01-01
Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
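The likelihood equivalence invoked above can be demonstrated numerically: a Poisson GLM with a log(time-at-risk) offset recovers the same log-rate coefficients as an exponential survival regression on the same data. A sketch with made-up counts and exposures:

```python
import numpy as np
import statsmodels.api as sm

# Illustrative: transition counts with exposure (time-at-risk) per subject
events = np.array([1, 0, 2, 1, 0, 3])
time_at_risk = np.array([4.0, 2.5, 6.0, 3.0, 1.5, 7.0])
X = sm.add_constant(np.array([0, 0, 1, 1, 0, 1]))  # e.g., disease indicator

# Poisson GLM with log(time) offset: coefficients are log transition rates
fit = sm.GLM(events, X, family=sm.families.Poisson(),
             offset=np.log(time_at_risk)).fit()
print(fit.params)  # log baseline rate and log rate ratio
```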
Time series forecasting of future claims amount of SOCSO's employment injury scheme (EIS)
NASA Astrophysics Data System (ADS)
Zulkifli, Faiz; Ismail, Isma Liana; Chek, Mohd Zaki Awang; Jamal, Nur Faezah; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md; Noor, Syamsul Ikram Mohd; Ahmad, Abu Bakar
2012-09-01
The Employment Injury Scheme (EIS) provides protection to employees who are injured in accidents while working, while commuting between home and the workplace, while taking a break during an authorized recess, or while travelling on work-related business. The main purpose of this study is to forecast the claims amount of the EIS for the years 2011 to 2015 using appropriate models. These models were tested on actual EIS data from 1972 to 2010. Three different forecasting models were chosen for comparison: the Naïve with Trend Model, the Average Percent Change Model, and the Double Exponential Smoothing Model. The best model was selected based on the smallest values of the error measures, the Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE). From the results, the model that best fits the EIS forecast is the Average Percent Change Model. Furthermore, the results also show that the claims amount of the EIS for 2011 to 2015 continues to trend upwards from 2010.
Forecasting electricity usage using univariate time series models
NASA Astrophysics Data System (ADS)
Hock-Eam, Lim; Chee-Yin, Yip
2014-12-01
Electricity is one of the important energy sources. A sufficient supply of electricity is vital to support a country's development and growth. Due to changing socio-economic characteristics, increasing competition and the deregulation of the electricity supply industry, electricity demand forecasting is even more important than before. It is imperative to evaluate and compare the predictive performance of various forecasting methods, as this provides further insights into the weaknesses and strengths of each method. The literature offers mixed evidence on the best forecasting methods for electricity demand. This paper aims to compare the predictive performance of univariate time series models for forecasting electricity demand using monthly data on the maximum electricity load in Malaysia from January 2003 to December 2013. Results reveal that the Box-Jenkins method produces the best out-of-sample predictive performance. On the other hand, the Holt-Winters exponential smoothing method is a good forecasting method for in-sample predictive performance.
Traveling wavefront solutions to nonlinear reaction-diffusion-convection equations
NASA Astrophysics Data System (ADS)
Indekeu, Joseph O.; Smets, Ruben
2017-08-01
Physically motivated modified Fisher equations are studied in which nonlinear convection and nonlinear diffusion are allowed for, besides the usual growth and spread of a population. It is pointed out that in a large variety of cases separable functions in the form of exponentially decaying sharp wavefronts solve the differential equation exactly, provided a co-moving point source or sink is active at the wavefront. The velocity dispersion and front steepness may differ from those of some previously studied exact smooth traveling wave solutions. For an extension of the reaction-diffusion-convection equation, featuring a memory effect in the form of a maturity delay for growth and spread, smooth exact wavefront solutions are also obtained. The stability of the solutions is verified analytically and numerically.
Spectrum of Lyapunov exponents of non-smooth dynamical systems of integrate-and-fire type.
Zhou, Douglas; Sun, Yi; Rangan, Aaditya V; Cai, David
2010-04-01
We discuss how to characterize the long-time dynamics of non-smooth dynamical systems, such as integrate-and-fire (I&F)-like neuronal networks, using Lyapunov exponents, and present a stable numerical method for the accurate evaluation of the spectrum of Lyapunov exponents for this large class of dynamics. These dynamics contain (i) jump conditions, as in the firing-reset dynamics, and (ii) degeneracy, such as in the refractory period, in which voltage-like variables of the network collapse to a single constant value. Using networks of linear I&F neurons, exponential I&F neurons, and I&F neurons with adaptive threshold, we illustrate our method and discuss the rich dynamics of these networks.
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the kth derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
Santori, G; Andorno, E; Morelli, N; Casaccia, M; Bottino, G; Di Domenico, S; Valente, U
2009-05-01
In many Western countries a "minimum volume rule" policy has been adopted as a quality measure for complex surgical procedures. In Italy, the National Transplant Centre set the minimum number of orthotopic liver transplantation (OLT) procedures/y at 25/center. OLT procedures performed in a single center for a reasonably large period may be treated as a time series to evaluate trend, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1987 and December 31, 2006, we performed 563 cadaveric donor OLTs to adult recipients. During 2007, there were another 28 procedures. The greatest numbers of OLTs/y were performed in 2001 (n = 51), 2005 (n = 50), and 2004 (n = 49). A time series analysis performed using R Statistical Software (Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an incremental trend after exponential smoothing as well as after seasonal decomposition. The predicted OLT/mo for 2007 calculated with the Holt-Winters exponential smoothing applied to the previous period 1987-2006 helped to identify the months where there was a major difference between predicted and performed procedures. The time series approach may be helpful to establish a minimum volume/y at a single-center level.
Coupled large eddy simulation and discrete element model of bedload motion
NASA Astrophysics Data System (ADS)
Furbish, D.; Schmeeckle, M. W.
2011-12-01
We couple a three-dimensional large eddy simulation of turbulence with a three-dimensional discrete element model of bedload motion. The large eddy simulation of the turbulent fluid is extended into the bed composed of non-moving particles by adding resistance terms to the Navier-Stokes equations in accordance with the Darcy-Forchheimer law. This allows the turbulent velocity and pressure fluctuations to penetrate the bed of discrete particles, and this addition of a porous zone results in turbulence structures above the bed that are similar to previous experimental and numerical results for hydraulically-rough beds. For example, we reproduce low-speed streaks that are less coherent than those over smooth beds due to the episodic outflow of fluid from the bed. Local resistance terms are also added to the Navier-Stokes equations to account for the drag of individual moving particles. The interaction of the spherical particles utilizes a standard DEM soft-sphere Hertz model. We use only a simple drag model to calculate the fluid forces on the particles. The model reproduces an exponential distribution of bedload particle velocities that we have found experimentally using high-speed video of a flat bed of moving sand in a recirculating water flume. The exponential distribution of velocity results from the motion of many particles that are nearly constantly in contact with other bed particles and come to rest after short distances, in combination with relatively few particles that are entrained further above the bed and have velocities approaching that of the fluid. Entrainment and motion "hot spots" are evident that are not perfectly correlated with the local, instantaneous fluid velocity. Zones of the bed that have recently experienced motion are more susceptible to motion because of the local configuration of particle contacts. The paradigm of a characteristic saltation hop length in riverine bedload transport has infused many aspects of geomorphic thought, including even bedrock erosion. In light of our theoretical, experimental, and numerical findings supporting the exponential distribution of bedload particle motion, the idea of a characteristic saltation hop should be scrapped or substantially modified.
NASA Astrophysics Data System (ADS)
Baidillah, Marlin R.; Takei, Masahiro
2017-06-01
A nonlinear normalization model, called the exponential model, has been developed for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions. The exponential normalization model is proposed based on the inherently nonlinear relationship between the mixture permittivity and the measured capacitance due to the gap permittivity of the inner wall. The parameters of the exponential equation are derived using an exponential fitting curve based on simulation, and a scaling function is added to adjust for the experimental system conditions. The exponential normalization model was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in simulation and experimental studies. The proposed normalization model has been compared with other normalization models, i.e., the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions.
Pursuit Latency for Chromatic Targets
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Ellis, Stephen R. (Technical Monitor)
1998-01-01
The temporal dynamics of the eye movement response to a change in direction of stimulus motion have been used to compare the processing speeds of different types of stimuli (Mulligan, ARVO '97). In this study, the pursuit response to colored targets was measured to test the hypothesis that the slow response of the chromatic system (as measured using traditional temporal sensitivity measures such as contrast sensitivity) results in increased eye movement latencies. Subjects viewed a small (0.4 deg) Gaussian spot which moved downward at a speed of 6.6 deg/sec. At a variable time during the trajectory, the dot's direction of motion changed by 30 degrees, either to the right or left. Subjects were instructed to pursue the spot. Eye movements were measured using a video ophthalmoscope with an angular resolution of approximately 1 arc min and a temporal sampling rate of 60 Hz. Stimuli were modulated in chrominance for a variety of hue directions, combined with a range of small luminance increments and decrements, to insure that some of the stimuli fell in the subjects' equiluminance planes. The smooth portions of the resulting eye movement traces were fit by convolving the stimulus velocity with an exponential having variable onset latency, time constant and amplitude. Smooth eye movements with few saccades were observed for all stimuli. Pursuit responses to stimuli having a significant luminance component are well fit by exponentials having latencies and time constants on the order of 100 msec. Increases in pursuit response latency on the order of 100-200 msec are observed in response to certain stimuli, which occur in pairs of complementary hues, corresponding to the intersection of the stimulus set with the subjects' equiluminant plane. Smooth eye movements can be made in response to purely chromatic stimuli, but are slower than responses to stimuli with a luminance component.
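The fitting model described above, stimulus velocity convolved with a delayed exponential, can be sketched as follows. The sampling rate matches the 60 Hz video ophthalmoscope, while the parameter values and the step stimulus are illustrative assumptions.

```python
import numpy as np

DT = 1.0 / 60.0  # 60 Hz sampling of the video ophthalmoscope

def pursuit_response(stim_velocity, latency, tau, gain):
    """Model eye velocity as the stimulus velocity convolved with a causal
    exponential kernel of time constant tau, shifted by an onset latency."""
    kernel_t = np.arange(0.0, 5.0 * tau, DT)
    kernel = gain * np.exp(-kernel_t / tau) * (DT / tau)  # kernel area ~ gain
    out = np.convolve(stim_velocity, kernel)[: len(stim_velocity)]
    shift = int(round(latency / DT))
    return np.concatenate([np.zeros(shift), out])[: len(stim_velocity)]

# Example: horizontal velocity component changes at t = 1 s
t = np.arange(0.0, 2.0, DT)
stim = np.where(t > 1.0, 3.3, 0.0)  # deg/s, from the 30-deg direction change
eye = pursuit_response(stim, latency=0.12, tau=0.10, gain=0.9)
# latency, tau, and gain would be estimated by least squares against the
# smooth portions of the measured eye-velocity trace
```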
Enhanced Response Time of Electrowetting Lenses with Shaped Input Voltage Functions.
Supekar, Omkar D; Zohrabi, Mo; Gopinath, Juliet T; Bright, Victor M
2017-05-16
Adaptive optical lenses based on the electrowetting principle are being rapidly implemented in many applications, such as microscopy, remote sensing, displays, and optical communication. To characterize the response of these electrowetting lenses, the dependence upon direct current (DC) driving voltage functions was investigated in a low-viscosity liquid system. Cylindrical lenses with inner diameters of 2.45 and 3.95 mm were used to characterize the dynamic behavior of the liquids under DC voltage electrowetting actuation. Increasing the rise time of the exponential input driving voltage damps the originally underdamped system response, enabling a smooth response from the lens. We experimentally determined the optimal rise times for the fastest response from the lenses. We also performed numerical simulations of the lens actuation with exponential input driving voltages to understand how the dynamics of the liquid-liquid interface vary with input rise time. We further enhanced the response time of the devices by shaping the input voltage function with multiple exponential rise times. For the 3.95 mm inner diameter lens, we achieved a response-time improvement of 29% compared to the fastest response obtained using a single-exponential driving voltage. The technique shows great promise for applications that require fast response times.
Klein, F.W.; Wright, Tim
2008-01-01
The remarkable catalog of Hawaiian earthquakes going back to the 1820s is based on missionary diaries, newspaper accounts, and instrumental records and spans the great M7.9 Kau earthquake of April 1868 and its aftershock sequence. The earthquake record since 1868, complete to M5.2, defines a smoothly declining rate into the 21st century, after five short volcanic swarms are removed. A single aftershock curve fits the earthquake record, even with numerous M6 and 7 main shocks and eruptions. The timing of some moderate earthquakes may be controlled by magmatic stresses, but their overall long-term rate reflects that of aftershocks of the Kau earthquake. The 1868 earthquake is, therefore, the largest and most controlling stress event in the 19th and 20th centuries. We fit both the modified Omori (power law) and stretched exponential (SE) functions to the earthquakes. We found that the modified Omori law is a good fit to the M ≥ 5.2 earthquake rate for the first 10 years or so, and the more rapidly declining SE function fits better thereafter, as supported by three statistical tests. The switch to exponential decay suggests a possible change in aftershock physics from rate-and-state fault friction, with no change in the stress rate, to viscoelastic stress relaxation. The 61-year exponential decay constant is at the upper end of the range of geodetic relaxation times seen after other global earthquakes. Modeling deformation in Hawaii is beyond the scope of this paper, but a simple interpretation of the decay suggests an effective viscosity of 10^19 to 10^20 Pa s pertains in the volcanic spreading of Hawaii's flanks. The rapid decline in earthquake rate poses questions for seismic hazard estimates in an area that is cited as one of the most hazardous in the United States.
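As a sketch of the model comparison described above, one can fit both decay laws to an annual-rate series and compare residuals over the late part of the sequence; the rate values below are placeholders, not the Hawaiian catalog:

```python
import numpy as np
from scipy.optimize import curve_fit

def omori(t, K, c, p):
    """Modified Omori law: power-law decay of aftershock rate."""
    return K / (t + c) ** p

def stretched_exp(t, A, tau, q):
    """Stretched exponential (SE) rate decay."""
    return A * np.exp(-(t / tau) ** q)

# t: years since the 1868 main shock; rate: events per year (synthetic).
t = np.arange(1, 140, 1.0)
rate = omori(t, 30, 1.0, 1.0) + np.random.normal(0, 0.05, t.size)

po, _ = curve_fit(omori, t, rate, p0=(10, 1, 1))
ps, _ = curve_fit(stretched_exp, t, rate, p0=(10, 60, 0.5), maxfev=5000)

# The paper finds the SE function fits better after the first ~10 years.
late = t > 10
for name, f, p in [("Omori", omori, po), ("SE", stretched_exp, ps)]:
    rss = np.sum((rate[late] - f(t[late], *p)) ** 2)
    print(name, rss)
```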
Feedback control policies employed by people using intracortical brain-computer interfaces.
Willett, Francis R; Pandarinath, Chethan; Jarosiewicz, Beata; Murphy, Brian A; Memberg, William D; Blabe, Christine H; Saab, Jad; Walter, Benjamin L; Sweet, Jennifer A; Miller, Jonathan P; Henderson, Jaimie M; Shenoy, Krishna V; Simeral, John D; Hochberg, Leigh R; Kirsch, Robert F; Ajiboye, A Bolu
2017-02-01
When using an intracortical BCI (iBCI), users modulate their neural population activity to move an effector towards a target, stop accurately, and correct for movement errors. We call the rules that govern this modulation a 'feedback control policy'. A better understanding of these policies may inform the design of higher-performing neural decoders. We studied how three participants in the BrainGate2 pilot clinical trial used an iBCI to control a cursor in a 2D target acquisition task. Participants used a velocity decoder with exponential smoothing dynamics. Through offline analyses, we characterized the users' feedback control policies by modeling their neural activity as a function of cursor state and target position. We also tested whether users could adapt their policy to different decoder dynamics by varying the gain (speed scaling) and temporal smoothing parameters of the iBCI. We demonstrate that control policy assumptions made in previous studies do not fully describe the policies of our participants. To account for these discrepancies, we propose a new model that captures (1) how the user's neural population activity gradually declines as the cursor approaches the target from afar, then decreases more sharply as the cursor comes into contact with the target, (2) how the user makes constant feedback corrections even when the cursor is on top of the target, and (3) how the user actively accounts for the cursor's current velocity to avoid overshooting the target. Further, we show that users can adapt their control policy to decoder dynamics by attenuating neural modulation when the cursor gain is high and by damping the cursor velocity more strongly when the smoothing dynamics are high. Our control policy model may help to build better decoders, understand how neural activity varies during active iBCI control, and produce better simulations of closed-loop iBCI movements.
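A minimal sketch of a velocity decoder with exponential smoothing dynamics of the kind described here; the update rule and the gain and smoothing values are illustrative assumptions, not the BrainGate2 implementation:

```python
import numpy as np

def smoothed_velocity_decoder(neural_push, gain=1.0, alpha=0.9, dt=0.02):
    """Cursor trajectory from decoded 'push' vectors under assumed exponential
    smoothing dynamics: v_t = alpha * v_{t-1} + (1 - alpha) * gain * u_t.
    Raising `alpha` (smoothing) damps velocity; raising `gain` scales speed.
    """
    pos, vel, traj = np.zeros(2), np.zeros(2), []
    for u in neural_push:
        vel = alpha * vel + (1 - alpha) * gain * u
        pos = pos + vel * dt
        traj.append(pos.copy())
    return np.array(traj)

# Toy decoded pushes aimed at a target at (1, 0), with neural noise.
rng = np.random.default_rng(0)
push = np.tile([1.0, 0.0], (500, 1)) + rng.normal(0, 0.3, (500, 2))
traj = smoothed_velocity_decoder(push, gain=2.0, alpha=0.95)
```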
NASA Astrophysics Data System (ADS)
Marston, Philip L.
2003-04-01
The coupling of sound to buried targets can be associated with acoustic evanescent waves when the sea bottom is smooth. To understand the excitation of guided waves on buried fluid cylinders and shells by acoustic evanescent waves and the associated target resonances, the two-dimensional partial wave series for the scattering is found for normal incidence in an unbounded medium. The shell formulation uses the simplifications of thin-shell dynamics. The expansion of the incident wave becomes a double summation with products of modified and ordinary Bessel functions [P. L. Marston, J. Acoust. Soc. Am. 111, 2378 (2002)]. Unlike the case of an ordinary incident wave, the counterpropagating partial waves of the same angular order have unequal magnitudes when the incident wave is evanescent. This is a consequence of the exponential dependence of the incident wave amplitude on depth. Some consequences of this imbalance of partial-wave amplitudes are given by modifying previous ray theory for the scattering [P. L. Marston and N. H. Sun, J. Acoust. Soc. Am. 97, 777-783 (1995)]. The exponential dependence of the scattering on the location of a scatterer was previously demonstrated in air [T. J. Matula and P. L. Marston, J. Acoust. Soc. Am. 93, 1192-1195 (1993)].
Liu, Dong-jun; Li, Li
2015-01-01
PM2.5 is the main factor in haze-fog pollution in China. In this study, the trend of PM2.5 concentration was first analyzed from a qualitative point of view based on mathematical models and simulation. A comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) model and Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecast with the comprehensive forecasting model, the results were compared with those of the three single models, and PM2.5 concentration values for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the single prediction methods and had better applicability, providing a new prediction method for the air quality forecasting field.
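A sketch of the combination step, using the textbook Entropy Weighting Method to derive model weights; the indicator construction and the accuracy numbers below are assumptions for illustration, and the paper's exact procedure may differ:

```python
import numpy as np

def entropy_weights(X):
    """Entropy Weighting Method: X is (n_samples, n_models) with positive
    'performance' entries (e.g. per-day forecast accuracies). Columns whose
    values vary more across samples carry more information and get larger
    weights.
    """
    P = X / X.sum(axis=0)                                   # column-normalize
    e = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(X.shape[0])
    d = 1 - e                                               # information content
    return d / d.sum()

# Toy per-day accuracies for ARIMA, ANN and ESM over a 30-day window.
acc = np.abs(np.random.default_rng(1).normal([0.9, 0.85, 0.8], 0.05, (30, 3)))
w = entropy_weights(acc)

# Combined PM2.5 forecast = weighted sum of the three model forecasts.
forecasts = np.array([55.0, 60.0, 52.0])   # hypothetical next-day values
combined = w @ forecasts
```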
Models for Train Passenger Forecasting of Java and Sumatra
NASA Astrophysics Data System (ADS)
Sartono
2017-04-01
People tend to take public transportation to avoid heavy traffic, especially in Java. In Jakarta, the number of railway passengers exceeds the capacity of the trains at peak times. This is an opportunity as well as a challenge: if managed well, the company can earn a high profit; otherwise, it may lead to disaster. This article discusses models for train passenger numbers in order to find reasonable models for prediction over time. The Box-Jenkins method is employed to develop a basic model, which is then compared with models obtained using the exponential smoothing method and the regression method. The results show that the Holt-Winters model predicts better at one-month, three-month, and six-month horizons for passengers in Java, while SARIMA(1,1,0)(2,0,0) is more accurate for nine-month and twelve-month horizons. For Sumatra passenger forecasting, SARIMA(1,1,1)(0,0,2) gives a better approximation one month ahead, and an ARIMA model is best for three-month-ahead prediction. For the remaining horizons, the Trend Seasonal Linear Model has the lowest RMSE for six-month, nine-month, and twelve-month forecasts.
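For reference, a Holt-Winters (triple exponential smoothing) forecast of a monthly series takes only a few lines; the passenger counts below are synthetic, and statsmodels' `ExponentialSmoothing` is one common implementation:

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly passenger counts with trend and annual seasonality.
rng = np.random.default_rng(2)
months = np.arange(120)
y = 1000 + 5 * months + 100 * np.sin(2 * np.pi * months / 12) \
    + rng.normal(0, 30, 120)

# Additive trend + additive seasonality, as in a Holt-Winters model.
fit = ExponentialSmoothing(y, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
print(fit.forecast(6))   # one- to six-month-ahead predictions
```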
Boundedness and exponential convergence in a chemotaxis model for tumor invasion
NASA Astrophysics Data System (ADS)
Jin, Hai-Yang; Xiang, Tian
2016-12-01
We revisit the following chemotaxis system modeling tumor invasion: $u_t = \Delta u - \nabla\cdot(u\nabla v)$, $v_t = \Delta v + wz$, $w_t = -wz$, $z_t = \Delta z - z + u$, for $x \in \Omega$, $t > 0$, in a smooth bounded domain $\Omega \subset \mathbb{R}^n$ ($n \ge 1$) with homogeneous Neumann boundary and initial conditions. This model was recently proposed by Fujie et al (2014 Adv. Math. Sci. Appl. 24 67-84) as a model for tumor invasion with the role of extracellular matrix incorporated, and was analyzed later by Fujie et al (2016 Discrete Contin. Dyn. Syst. 36 151-69), showing the uniform boundedness and convergence for $n \le 3$. In this work, we first show that the $L^\infty$-boundedness of the system can be reduced to the boundedness of $\|u(\cdot,t)\|_{L^{n/4+\varepsilon}(\Omega)}$ for some $\varepsilon > 0$ alone, and then, for $n \ge 4$, if the initial data $\|u_0\|_{L^{n/4}}$, $\|z_0\|_{L^{n/2}}$ and …
Forecasting Container Throughput at the Doraleh Port in Djibouti through Time Series Analysis
NASA Astrophysics Data System (ADS)
Mohamed Ismael, Hawa; Vandyck, George Kobina
The Doraleh Container Terminal (DCT) located in Djibouti has been noted as the most technologically advanced container terminal on the African continent. DCT's strategic location at the crossroads of the main shipping lanes connecting Asia, Africa and Europe puts it in a unique position to provide important shipping services to vessels plying that route. This paper aims to forecast container throughput at the Doraleh Container Port in Djibouti through time series analysis. A selection of univariate forecasting models was used, namely the Triple Exponential Smoothing Model, the Grey Model and the Linear Regression Model. Using these three models and their combination, forecasts of container throughput at the Doraleh port were produced. The forecasting results of the three models and of the combination forecast were then compared using the commonly applied evaluation criteria of Mean Absolute Deviation (MAD) and Mean Absolute Percentage Error (MAPE). The study found that the Linear Regression Model was the best prediction method, since its forecast error was the smallest. Based on the regression model, a ten (10) year forecast of container throughput at DCT was made.
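Of the three univariate models compared here, the Grey Model is the least standard; a minimal GM(1,1) sketch (textbook construction, toy throughput figures rather than the paper's data) is:

```python
import numpy as np

def gm11_forecast(x0, horizon):
    """GM(1,1) grey model: fit on the positive series x0 and forecast
    `horizon` steps ahead. Standard construction; the paper's variant may
    include refinements.
    """
    x1 = np.cumsum(x0)                             # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                  # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=0.0)          # de-accumulate
    return x0_hat[len(x0):]

throughput = np.array([510., 560., 600., 640., 700., 770., 820.])  # toy TEUs
print(gm11_forecast(throughput, 3))
```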
Forecasting Daily Volume and Acuity of Patients in the Emergency Department.
Calegari, Rafael; Fogliatto, Flavio S; Lucini, Filipe R; Neyeloff, Jeruza; Kuchenbecker, Ricardo S; Schaan, Beatriz D
2016-01-01
This study aimed at analyzing the performance of four forecasting models in predicting the demand for medical care in terms of daily visits in an emergency department (ED) that handles high complexity cases, testing the influence of climatic and calendrical factors on demand behavior. We tested different mathematical models to forecast ED daily visits at Hospital de Clínicas de Porto Alegre (HCPA), which is a tertiary care teaching hospital located in Southern Brazil. Model accuracy was evaluated using mean absolute percentage error (MAPE), considering forecasting horizons of 1, 7, 14, 21, and 30 days. The demand time series was stratified according to patient classification using the Manchester Triage System's (MTS) criteria. Models tested were the simple seasonal exponential smoothing (SS), seasonal multiplicative Holt-Winters (SMHW), seasonal autoregressive integrated moving average (SARIMA), and multivariate autoregressive integrated moving average (MSARIMA). Performance of models varied according to patient classification, such that SS was the best choice when all types of patients were jointly considered, and SARIMA was the most accurate for modeling demands of very urgent (VU) and urgent (U) patients. The MSARIMA models taking into account climatic factors did not improve the performance of the SARIMA models, independent of patient classification.
Statistical Optimality in Multipartite Ranking and Ordinal Regression.
Uematsu, Kazuki; Lee, Yoonkyung
2015-05-01
Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk, which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions including the exponential loss, the optimal ranking function can be represented as a ratio of weighted conditional probabilities of upper categories to lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods, such as the proportional odds model in statistics, with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth list-wise ranking measures such as the discounted cumulative gain and preference learning. We illustrate our findings with a simulation study and real data analysis.
Santori, G; Fontana, I; Bertocchi, M; Gasloli, G; Valente, U
2010-05-01
Following the example of many Western countries, where a "minimum volume rule" policy has been adopted as a quality parameter for complex surgical procedures, the Italian National Transplant Centre set the minimum number of kidney transplantation procedures per year at 30 per center. The number of procedures performed in a single center over a long period may be treated as a time series to evaluate trends, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1983, and December 31, 2007, we performed 1376 procedures in adult or pediatric recipients from living or cadaveric donors. The greatest numbers of cases per year were performed in 1998 (n = 86), followed by 2004 (n = 82), 1996 (n = 75), and 2003 (n = 73). A time series analysis performed using the R statistical software (R Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an overall increasing trend after exponential smoothing as well as after seasonal decomposition. However, starting from 2005, we observed a decreasing trend in the series. Holt-Winters exponential smoothing applied to the period 1983 to 2007 predicted 58 procedures for 2008, while in that year there were 52. The time series approach may be helpful for establishing a minimum volume per year at the single-center level.
Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin
2018-07-01
To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b-values at 3 T. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (D_t), pseudo-diffusion coefficient (D_p) and perfusion fraction (f) from a bi-exponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b-values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristic (ROC) analysis. The measurement variability of the DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measure, 0.770 ± 0.03), significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), D_t (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and D_p showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than the other parameters. However, D_p showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than D_p from the bi-exponential DWI model. • Acquisition of six b-values is sufficient to obtain accurate DDC and α.
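The stretched exponential signal model referred to here is S(b) = S0·exp(−(b·DDC)^α); a fitting sketch with synthetic signals and assumed parameter values, alongside the mono-exponential baseline:

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(b, S0, DDC, alpha):
    """Stretched exponential DWI signal: S = S0 * exp(-(b*DDC)**alpha)."""
    return S0 * np.exp(-((b * DDC) ** alpha))

def mono_exp(b, S0, ADC):
    """Mono-exponential (Gaussian diffusion) signal: S = S0 * exp(-b*ADC)."""
    return S0 * np.exp(-b * ADC)

# Six b-values (s/mm^2), echoing the paper's reduced acquisition; signal
# values are synthetic stand-ins for a liver voxel.
b = np.array([0, 50, 200, 500, 800, 1000.0])
S = stretched_exp(b, 1.0, 1.2e-3, 0.8) + np.random.normal(0, 0.005, b.size)

p_st, _ = curve_fit(stretched_exp, b, S, p0=(1.0, 1e-3, 0.9))
p_mo, _ = curve_fit(mono_exp, b, S, p0=(1.0, 1e-3))
print("DDC, alpha:", p_st[1], p_st[2], " ADC:", p_mo[1])
```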
Parameterization of photon beam dosimetry for a linear accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lebron, Sharon; Barraclough, Brendan; Lu, Bo
2016-02-15
Purpose: In radiation therapy, accurate data acquisition of photon beam dosimetric quantities is important for (1) beam modeling data input into a treatment planning system (TPS), (2) comparing measured and TPS modeled data, (3) the quality assurance process of a linear accelerator's (Linac) beam characteristics, and (4) the establishment of a standard data set for comparison with other data, etc. Parameterization of the photon beam dosimetry creates a data set that is portable and easy to implement for different applications such as those previously mentioned. The aim of this study is to develop methods to parameterize photon beam dosimetric quantities, including percentage depth doses (PDDs), profiles, and total scatter output factors (S_cp). Methods: S_cp, PDDs, and profiles for different field sizes, depths, and energies were measured for a Linac using a cylindrical 3D water scanning system. All data were smoothed for the analysis, and profile data were also centered, symmetrized, and geometrically scaled. The S_cp data were analyzed using an exponential function. The inverse square factor was removed from the PDD data before modeling, and the data were subsequently analyzed using exponential functions. For profile modeling, one half-side of the profile was divided into three regions described by exponential, sigmoid, and Gaussian equations. All of the analytical functions are specific to field size, energy, depth, and, in the case of profiles, scan direction. The model's parameters were determined using the minimal amount of measured data necessary. The model's accuracy was evaluated via the calculation of absolute differences between the measured (processed) and calculated data in low gradient regions and distance-to-agreement analysis in high gradient regions. Finally, the results of dosimetric quantities obtained by the fitted models for a different machine were also assessed. Results: All of the differences in the PDDs' buildup and the profiles' penumbra regions were less than 2 and 0.5 mm, respectively. The differences in the low gradient regions were 0.20% ± 0.20% (<1% for all) and 0.50% ± 0.35% (<1% for all) for PDDs and profiles, respectively. For S_cp data, all of the absolute differences were less than 0.5%. Conclusions: This novel analytical model with minimum measurement requirements was proved to accurately calculate PDDs, profiles, and S_cp for different field sizes, depths, and energies.
[Shock shape representation of sinus heart rate based on cloud model].
Yin, Wenfeng; Zhao, Jie; Chen, Tiantian; Zhang, Junjian; Zhang, Chunyou; Li, Dapeng; An, Baijing
2014-04-01
This paper analyzes the trend of the sinus heart rate RR-interval sequence after a single ventricular premature beat and compares it with the two turbulence parameters, turbulence onset (TO) and turbulence slope (TS). After acquiring sinus rhythm turbulence samples, we use a piecewise linearization method to extract their linear characteristics, and then describe the shock form in natural language through a cloud model. During acquisition, we use the exponential smoothing method to forecast the position where the next QRS wave may appear, to assist QRS wave detection, and use a template to judge whether the current beat is sinus rhythm. We selected signals from the MIT-BIH Arrhythmia Database to test the effectiveness of the algorithm in Matlab. The results show that our method can correctly detect the changing trend of the sinus heart rate. The proposed method achieves real-time detection of sinus rhythm shocks, is simple and easily implemented, and is therefore effective as a supplementary method.
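The QRS-assist step can be sketched as simple exponential smoothing of the RR series; the smoothing constant below is an assumption (the paper does not state one):

```python
def predict_next_qrs(rr_intervals, last_qrs_time, alpha=0.8):
    """Predict the next QRS position by exponentially smoothing the RR
    series: s <- alpha * rr + (1 - alpha) * s, then add the smoothed RR
    to the time of the last detected QRS.
    """
    s = rr_intervals[0]
    for rr in rr_intervals[1:]:
        s = alpha * rr + (1 - alpha) * s     # simple exponential smoothing
    return last_qrs_time + s                 # expected time of the next QRS

# RR intervals in seconds from a hypothetical sinus-rhythm segment.
rr = [0.82, 0.80, 0.84, 0.81, 0.83]
print(predict_next_qrs(rr, last_qrs_time=10.0))
```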
NASA Astrophysics Data System (ADS)
Jung, Moonjung; Kim, Dong-Hee
2017-12-01
We investigate the first-order transition in the spin-1 two-dimensional Blume-Capel model on square lattices by revisiting the transfer-matrix method. With strip widths increased up to 18 sites, we construct a detailed phase coexistence curve that shows excellent quantitative agreement with recent advanced Monte Carlo results. In the deep first-order region, we observe exponential system-size scaling of the spectral gap of the transfer matrix, from which a linearly increasing interfacial tension is deduced with decreasing temperature. We find that the first-order signature at low temperatures is strongly pronounced, with much suppressed finite-size influence in the examined thermodynamic properties of entropy, non-zero spin population, and specific heat. It turns out that the jump at the transition becomes increasingly sharp as one goes deeper into the first-order region, in contrast to the Wang-Landau results, where finite-size smoothing gets more severe at lower temperatures.
NASA Astrophysics Data System (ADS)
Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng
2017-06-01
A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency is poorer than expected, and existing smoothing models cannot explicitly specify methods to improve it. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.
Past and projected trends of body mass index and weight status in South Australia: 2003 to 2019.
Hendrie, Gilly A; Ullah, Shahid; Scott, Jane A; Gray, John; Berry, Narelle; Booth, Sue; Carter, Patricia; Cobiac, Lynne; Coveney, John
2015-12-01
Functional data analysis (FDA) is a forecasting approach that, to date, has not been applied to obesity, and that may provide more accurate forecasting analysis to manage uncertainty in public health. This paper uses FDA to provide projections of Body Mass Index (BMI), overweight and obesity in an Australian population through to 2019. Data from the South Australian Monitoring and Surveillance System (January 2003 to December 2012, n=51,618 adults) were collected via telephone interview survey. FDA was conducted in four steps: 1) age-gender specific BMIs for each year were smoothed using a weighted regression; 2) the functional principal components decomposition was applied to estimate the basis functions; 3) an exponential smoothing state space model was used for forecasting the coefficient series; and 4) forecast coefficients were combined with the basis functions. The forecast models suggest that between 2012 and 2019 average BMI will increase from 27.2 kg/m^2 to 28.0 kg/m^2 in males and 26.4 kg/m^2 to 27.6 kg/m^2 in females. The prevalence of obesity is forecast to increase by 6-7 percentage points by 2019 (to 28.7% in males and 29.2% in females). Projections identify age-gender groups at greatest risk of obesity over time. The novel approach will be useful to facilitate more accurate planning and policy development.
On the Prony series representation of stretched exponential relaxation
NASA Astrophysics Data System (ADS)
Mauro, John C.; Mauro, Yihong Z.
2018-09-01
Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of the stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first-derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
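A minimal sketch of the Prony-series approximation analyzed here: approximate exp(−t^β) by a non-negative sum of simple exponentials on a fixed log-spaced grid of time constants (the grid choices are mine, not the paper's optimized coefficients):

```python
import numpy as np
from scipy.optimize import nnls

beta = 3.0 / 5.0                      # one critical stretching exponent
t = np.logspace(-3, 2, 400)           # times in units of the relaxation time
target = np.exp(-t ** beta)           # stretched exponential with tau = 1

# Prony terms on a log-spaced grid of time constants; weights are found by
# non-negative least squares so each term is a physical decay mode.
taus = np.logspace(-4, 3, 12)
A = np.exp(-t[:, None] / taus[None, :])
w, _ = nnls(A, target)

approx = A @ w
print("max abs error:", np.max(np.abs(approx - target)))
```

Increasing the number of grid points improves the fit of the "fat tail" at long times, consistent with the paper's observation that enough terms capture the tail but no finite series reproduces the divergent derivative at t = 0.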
NASA Astrophysics Data System (ADS)
Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.
2013-12-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.
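A sketch of the adaptive smoothing idea (kernel bandwidth from the distance to the n-th nearest epicenter, after Helmstetter et al.), in flat 2-D coordinates with a toy catalog; a production implementation would work in geographic coordinates and account for catalog completeness:

```python
import numpy as np

def adaptive_rate_map(epicenters, grid, n_neighbor=5):
    """Adaptively smoothed seismicity rate: each epicenter contributes a
    Gaussian kernel whose bandwidth is the distance to its n-th nearest
    neighboring epicenter, so dense clusters get sharp kernels and sparse
    regions get broad ones.
    """
    rate = np.zeros(len(grid))
    for e in epicenters:
        d = np.linalg.norm(epicenters - e, axis=1)
        h = np.sort(d)[n_neighbor]              # adaptive bandwidth (km)
        g = np.linalg.norm(grid - e, axis=1)
        rate += np.exp(-0.5 * (g / h) ** 2) / (2 * np.pi * h ** 2)
    return rate / len(epicenters)

rng = np.random.default_rng(3)
quakes = rng.normal(0, 50, (200, 2))            # clustered toy catalog (km)
cells = np.stack(np.meshgrid(np.linspace(-200, 200, 41),
                             np.linspace(-200, 200, 41)), -1).reshape(-1, 2)
rates = adaptive_rate_map(quakes, cells)
```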
Exponential order statistic models of software reliability growth
NASA Technical Reports Server (NTRS)
Miller, D. R.
1985-01-01
Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but many other examples exist as well. Various characterizations, properties and examples of this class of models are developed and presented.
NASA Astrophysics Data System (ADS)
Dhariwal, Rohit; Bragg, Andrew D.
2018-03-01
In this paper, we consider how the statistical moments of the separation between two fluid particles grow with time when their separation lies in the dissipation range of turbulence. In this range, the fluid velocity field varies smoothly and the relative velocity of two fluid particles depends linearly upon their separation. While this may suggest that the rate at which fluid particles separate is exponential in time, this is not guaranteed because the strain rate governing their separation is a strongly fluctuating quantity in turbulence. Indeed, Afik and Steinberg [Nat. Commun. 8, 468 (2017), 10.1038/s41467-017-00389-8] argue that there is no convincing evidence that the moments of the separation between fluid particles grow exponentially with time in the dissipation range of turbulence. Motivated by this, we use direct numerical simulations (DNS) to compute the moments of particle separation over very long periods of time in a statistically stationary, isotropic turbulent flow to see if we ever observe evidence for exponential separation. Our results show that if the initial separation between the particles is infinitesimal, the moments of the particle separation first grow as power laws in time, but we then observe convincing evidence that at sufficiently long times the moments do grow exponentially. However, this exponential growth is only observed after extremely long times ≳ 200 τ_η, where τ_η is the Kolmogorov time scale. This is due to fluctuations in the strain rate about its mean value measured along the particle trajectories, the effect of which on the moments of the particle separation persists for very long times. We also consider the backward-in-time (BIT) moments of the particle separation, and observe that they too grow exponentially in the long-time regime. However, a dramatic consequence of the exponential separation is that at long times the difference between the rate of the particle separation forward in time (FIT) and BIT grows exponentially in time, leading to incredibly strong irreversibility in the dispersion. This is in striking contrast to the irreversibility of their relative dispersion in the inertial range, where the difference between FIT and BIT is constant in time according to Richardson's phenomenology.
Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.
2016-01-01
Background: The purpose of this study is to statistically assess whether the bi-exponential intravoxel incoherent motion (IVIM) model characterizes the diffusion-weighted imaging (DWI) signal of malignant breast tumors better than the mono-exponential Gaussian diffusion model. Methods: 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-squares mono-exponential fitting and segmented least-squares bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. The F-test and Akaike Information Criterion (AIC) were used to statistically assess the preference for the mono-exponential or bi-exponential model using region-of-interest (ROI)-averaged and voxel-wise analyses. Results: For ROI-averaged analysis, 15 tumors were significantly better fitted by the bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential and bi-exponential preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay, while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analyses. Conclusions: Although the presence of the IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under a clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance of statistically examining the breast cancer DWI signal characteristics in practice.
McNair, James N; Newbold, J Denis
2012-05-07
Most ecological studies of particle transport in streams that focus on fine particulate organic matter or benthic invertebrates use the Exponential Settling Model (ESM) to characterize the longitudinal pattern of particle settling on the bed. The ESM predicts that if particles are released into a stream, the proportion that have not yet settled will decline exponentially with transport time or distance and will be independent of the release elevation above the bed. To date, no credible basis in fluid mechanics has been established for this model, nor has it been rigorously tested against more-mechanistic alternative models. One alternative is the Local Exchange Model (LEM), which is a stochastic advection-diffusion model that includes both longitudinal and vertical spatial dimensions and is based on classical fluid mechanics. The LEM predicts that particle settling will be non-exponential in the near field but will become exponential in the far field, providing a new theoretical justification for far-field exponential settling that is based on plausible fluid mechanics. We review properties of the ESM and LEM and compare these with available empirical evidence. Most evidence supports the prediction of both models that settling will be exponential in the far field but contradicts the ESM's prediction that a single exponential distribution will hold for all transport times and distances.
NASA Astrophysics Data System (ADS)
Akers, Caleb; Hale, Jacob
2014-11-01
It has been observed that non-coalescence between a droplet and a pool of like fluid can be prolonged or inhibited by sustained relative motion between the two fluids. In this study, we quantitatively describe the motion of freely moving droplets that skirt across the surface of a still pool of like fluid. Droplets of different sizes and small Weber number were directed horizontally onto the pool surface. Once the droplet shape stabilized after impact, the droplets moved smoothly across the surface, slowing until coalescence. Using high-speed imaging, we recorded each droplet's trajectory from a top-down view as well as side views slightly above and below the fluid surface. The droplets' speed decreases exponentially, with smaller droplets slowing down at a greater rate. Droplets infused with neutral-density microbeads showed that the droplet rolls along the surface of the pool. A qualitative model of this motion is presented.
Maximum initial growth-rate of strong-shock-driven Richtmyer-Meshkov instability
NASA Astrophysics Data System (ADS)
Abarzhi, Snezhana I.; Bhowmich, Aklant K.; Dell, Zachary R.; Pandian, Arun; Stanic, Milos; Stellingwerf, Robert F.; Swisher, Nora C.
2017-10-01
We focus on the classical problem of the dependence of the initial growth-rate of strong-shock-driven Richtmyer-Meshkov instability (RMI) on the initial conditions, by developing a novel empirical model and by employing rigorous theories and Smoothed Particle Hydrodynamics (SPH) simulations to describe the simulation data with statistical confidence in a broad parameter regime. For given values of the shock strength, the fluids' density ratio, and the wavelength of the initial perturbation of the fluid interface, we find the maximum value of the RMI initial growth-rate, the corresponding amplitude scale of the initial perturbation, and the maximum fraction of interfacial energy. This amplitude scale is independent of the shock strength and density ratio, and is a characteristic quantity of RMI dynamics. We discover an exponential decay of the ratio of the initial and linear growth-rates of RMI with the initial perturbation amplitude that agrees excellently with available data. National Science Foundation, USA.
Psychophysics of time perception and intertemporal choice models
NASA Astrophysics Data System (ADS)
Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.
2008-03-01
Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting; general hyperbolic discounting (exponential discounting with logarithmic time perception following the Weber-Fechner law; equivalently, a q-exponential discount model based on Tsallis's statistics); simple hyperbolic discounting; and Stevens' power law-exponential discounting (exponential discounting with Stevens' power-law time perception). In order to examine the fit of these models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processes underlying temporal discounting and time perception are discussed.
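The model comparison can be sketched by fitting each discount function to indifference points and ranking by AICc; the functional forms follow standard definitions, and the data below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate discount functions V(D): subjective value of a reward at delay D.
models = {
    "exponential":   (lambda D, k: np.exp(-k * D),      (0.05,)),
    "hyperbolic":    (lambda D, k: 1.0 / (1 + k * D),   (0.05,)),
    "q-exponential": (lambda D, k, q: (1 + (1 - q) * k * D) ** (1 / (q - 1)),
                      (0.05, 0.5)),
}

def aicc(rss, n, k):
    """AIC with small-sample correction for a least-squares fit."""
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical indifference points at seven delays (days).
D = np.array([1, 7, 30, 90, 180, 365, 730.0])
V = np.array([0.95, 0.90, 0.75, 0.60, 0.50, 0.40, 0.30])

for name, (f, p0) in models.items():
    p, _ = curve_fit(f, D, V, p0=p0, maxfev=10000)
    rss = np.sum((V - f(D, *p)) ** 2)
    print(name, aicc(rss, len(D), len(p)))
```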
Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.
2016-01-01
We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times.
Zeng, Qiang; Shi, Feina; Zhang, Jianmin; Ling, Chenhan; Dong, Fei; Jiang, Biao
2018-01-01
Purpose: To present a new modified tri-exponential model for diffusion-weighted imaging (DWI) to detect the strictly diffusion-limited compartment, and to compare it with the conventional bi- and tri-exponential models. Methods: Multi-b-value DWI with 17 b-values up to 8,000 s/mm^2 was performed on six volunteers. The corrected Akaike information criterion (AICc) and squared predicted errors (SPE) were calculated to compare the three models. Results: The mean f_0 values ranged over 11.9–18.7% in white matter ROIs and 1.2–2.7% in gray matter ROIs. In all white matter ROIs, the AICc of the modified tri-exponential model was the lowest (p < 0.05 for five ROIs), indicating the new model has the best fit among these models; the SPEs of the bi-exponential model were the highest (p < 0.05), suggesting the bi-exponential model is unable to predict the signal intensity at ultra-high b-values. The mean ADC_very-slow values were extremely low in white matter (1–7 × 10^-6 mm^2/s), but not in gray matter (251–445 × 10^-6 mm^2/s), indicating that the conventional tri-exponential model fails to represent a special compartment. Conclusions: The strictly diffusion-limited compartment may be an important component in white matter. The new model fits better than the other two models, and may provide additional information.
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
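A sketch of the kind of fitting comparison described: draw bout durations from a power law, fit exponential and power-law models by maximum likelihood, and compare Kolmogorov-Smirnov statistics (all values synthetic):

```python
import numpy as np
from scipy import stats

# Toy wake-bout durations drawn from a power law (Pareto), then fitted by
# both candidate distributions, mimicking the paper's model comparison.
rng = np.random.default_rng(4)
bouts = (rng.pareto(1.5, 1000) + 1.0) * 0.5      # minutes, xmin = 0.5

# Maximum-likelihood parameters.
lam = 1.0 / bouts.mean()                          # exponential rate
alpha = 1.0 + bouts.size / np.sum(np.log(bouts / bouts.min()))  # Hill estimator

ks_exp = stats.kstest(bouts, "expon", args=(0, 1 / lam)).statistic
ks_pow = stats.kstest(bouts, "pareto",
                      args=(alpha - 1, 0, bouts.min())).statistic
print(f"KS exponential: {ks_exp:.3f}, KS power law: {ks_pow:.3f}")
```

As the abstract cautions, a multi-exponential mixture can produce a nearly straight line on a log-log plot, so a small KS statistic for the power law alone does not rule out an underlying exponential mixture.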
Hosseinzadeh, M; Ghoreishi, M; Narooei, K
2016-06-01
In this study, the hyperelastic models of demineralized and deproteinized bovine cortical femur bone were investigated and appropriate models were developed. Using uniaxial compression test data, the strain energy versus stretch was calculated and appropriate hyperelastic strain energy functions were fitted to the data in order to calculate the material parameters. To obtain the mechanical behavior under other loading conditions, the hyperelastic strain energy equations were investigated for pure shear and equi-biaxial tension loadings. The results showed that the Mooney-Rivlin and Ogden models cannot predict the mechanical response of demineralized and deproteinized bovine cortical femur bone accurately, while the general exponential-exponential and general exponential-power law models show good agreement with the experimental results. To investigate the sensitivity of the hyperelastic models, a variation of 10% in the material parameters was applied, and the results indicated acceptable stability for the general exponential-exponential and general exponential-power law models. Finally, the uniaxial tension and compression of cortical femur bone were studied using the finite element method in a VUMAT user subroutine of ABAQUS, and the computed stress-stretch curves showed good agreement with the experimental data.
The effect of gradational velocities and anisotropy on fault-zone trapped waves
NASA Astrophysics Data System (ADS)
Gulley, A. K.; Eccles, J. D.; Kaipio, J. P.; Malin, P. E.
2017-08-01
Synthetic fault-zone trapped wave (FZTW) dispersion curves and amplitude responses for FL (Love) and FR (Rayleigh) type phases are analysed in transversely isotropic 1-D elastic models. We explore the effects of velocity gradients, anisotropy, source location and mechanism. These experiments suggest: (i) a smooth exponentially decaying velocity model produces a significantly different dispersion curve from that of a three-layer model, the main difference being that Airy phases are not produced; (ii) the FZTW dispersion and amplitude information of a waveguide with transverse isotropy depends mostly on the shear-wave velocities in the direction parallel to the fault, particularly if the fault-zone to country-rock velocity contrast is small. In this low-velocity-contrast situation, fully isotropic approximations to a transversely isotropic velocity model can be made; (iii) fault-aligned fractures and/or bedding in the fault zone that cause transverse isotropy enhance the amplitude and wave-train length of the FR type FZTW; (iv) moving the source and/or receiver away from the fault zone removes the higher frequencies first, similar to attenuation; (v) in most physically realistic cases, the radial component of the FR type FZTW is significantly smaller in amplitude than the transverse.
A Stochastic Super-Exponential Growth Model for Population Dynamics
NASA Astrophysics Data System (ADS)
Avila, P.; Rekker, A.
2010-11-01
A super-exponential growth model with environmental noise has been studied analytically. A super-exponential growth rate is a property of dynamical systems exhibiting endogenous nonlinear positive feedback, i.e., of self-reinforcing systems. Environmental noise acts on the growth rate multiplicatively and is assumed to be Gaussian white noise in the Stratonovich interpretation. An analysis of the stochastic super-exponential growth model, with derivations of exact analytical formulae for the conditional probability density and the mean value of the population abundance, is presented. Interpretations and various applications of the results are discussed.
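As a numerical companion to such analytical results, the sketch below integrates the super-exponential growth law dN = r N^(1+ε) dt + σ N^(1+ε) dW with the Heun scheme, whose continuum limit corresponds to the Stratonovich interpretation. All parameter values are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# dN = r*N**(1+eps)*dt + sigma*N**(1+eps)*dW, Stratonovich sense.
# eps > 0 makes growth super-exponential; trajectories can blow up in
# finite time, so the integration horizon is kept short.
r, eps, sigma = 0.5, 0.2, 0.3
dt, n_steps, n_paths = 1e-3, 4000, 1000
N = np.full(n_paths, 0.1)          # initial population abundance

def drift_diff(N):
    g = N ** (1.0 + eps)
    return r * g, sigma * g

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    a1, b1 = drift_diff(N)
    N_pred = N + a1 * dt + b1 * dW                        # Euler predictor
    a2, b2 = drift_diff(N_pred)
    N = N + 0.5 * (a1 + a2) * dt + 0.5 * (b1 + b2) * dW   # Heun corrector

print("mean abundance:", N.mean(), " median:", np.median(N))
```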
On High-Order Radiation Boundary Conditions
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1995-01-01
In this paper we develop the theory of high-order radiation boundary conditions for wave propagation problems. In particular, we study the convergence of sequences of time-local approximate conditions to the exact boundary condition, and subsequently estimate the error in the solutions obtained using these approximations. We show that for finite times the Padé approximants proposed by Engquist and Majda lead to exponential convergence if the solution is smooth, but that good long-time error estimates cannot hold for spatially local conditions. Applications in fluid dynamics are also discussed.
Xu, Junzhong; Li, Ke; Smith, R. Adam; Waterton, John C.; Zhao, Ping; Ding, Zhaohua; Does, Mark D.; Manning, H. Charles; Gore, John C.
2016-01-01
Background: Diffusion-weighted MRI (DWI) signal attenuation is often not mono-exponential (i.e., non-Gaussian diffusion) at stronger diffusion weighting. Several non-Gaussian diffusion models have been developed and may provide new information or higher sensitivity compared with the conventional apparent diffusion coefficient (ADC) method. However, the relative merits of these models for detecting tumor therapeutic response are not fully clear. Methods: The conventional ADC model and three widely used non-Gaussian models (bi-exponential, stretched exponential, and statistical) were implemented and compared for assessing SW620 human colon cancer xenografts responding to barasertib, an agent known to induce apoptosis via polyploidy. The Bayesian Information Criterion (BIC) was used for model selection among the three non-Gaussian models. Results: Tumor volume, histology, conventional ADC, and all three non-Gaussian DWI models showed significant differences between control and treatment groups after four days of treatment. However, only the non-Gaussian models detected significant changes after two days of treatment. For each treatment or control group, over 65.7% of tumor voxels indicated that the bi-exponential model was strongly or very strongly preferred. Conclusion: Non-Gaussian DWI model-derived biomarkers can detect the chemotherapeutic response of tumors earlier than conventional ADC and tumor volume. The bi-exponential model provides better fitting than the statistical and stretched exponential models for the tumors and treatments used in the current work. PMID:27919785
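For readers who want to reproduce this kind of model comparison, the following sketch fits bi-exponential and stretched-exponential signal models to a synthetic DWI decay curve and ranks them by BIC. The b-values and parameters are invented for illustration, and the statistical model of the paper is omitted:

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 200, 400, 600, 1000, 1500, 2000, 2500, 3000], float)  # s/mm^2

def biexp(b, f, D1, D2):
    return f * np.exp(-b * D1) + (1 - f) * np.exp(-b * D2)

def stretched(b, D, alpha):
    return np.exp(-(b * D) ** alpha)

# Synthetic signal from a bi-exponential ground truth plus noise.
rng = np.random.default_rng(2)
S = biexp(b, 0.7, 2.0e-3, 0.3e-3) + rng.normal(0, 0.01, b.size)

def bic(resid, n_params):
    # Gaussian-error BIC: n*log(RSS/n) + k*log(n).
    n = resid.size
    return n * np.log(np.mean(resid ** 2)) + n_params * np.log(n)

p_bi, _ = curve_fit(biexp, b, S, p0=(0.5, 1e-3, 1e-4))
p_st, _ = curve_fit(stretched, b, S, p0=(1e-3, 0.8))

print("BIC bi-exponential :", bic(S - biexp(b, *p_bi), 3))
print("BIC stretched exp. :", bic(S - stretched(b, *p_st), 2))
```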
Exponential model for option prices: Application to the Brazilian market
NASA Astrophysics Data System (ADS)
Ramos, Antônio M. T.; Carvalho, J. A.; Vasconcelos, G. L.
2016-03-01
In this paper we report an empirical analysis of the Ibovespa index of the São Paulo Stock Exchange and its respective option contracts. We compare the empirical data on the Ibovespa options with two option pricing models, namely the standard Black-Scholes model and an empirical model that assumes that the returns are exponentially distributed. It is found that at times near the option expiration date the exponential model performs better than the Black-Scholes model, in the sense that it fits the empirical data better than does the latter model.
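A minimal way to compare such models is Monte Carlo pricing of a European call under each return distribution. The sketch below contrasts the Black-Scholes closed form with a price computed from symmetric two-sided exponential (Laplace) log-returns of matched variance; all market parameters are placeholders, and the actual study calibrates an exponential return distribution to Ibovespa data rather than matching variances as done here:

```python
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.25, 0.25  # placeholder market data

# Black-Scholes closed form for a European call.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Monte Carlo price with Laplace (two-sided exponential) log-returns of
# the same variance; the mean is shifted so the discounted stock price
# remains a martingale: E[exp(x)] must equal exp(r*T).
rng = np.random.default_rng(3)
scale = sigma * np.sqrt(T / 2.0)       # Var(Laplace) = 2*scale**2
x = rng.laplace(0.0, scale, 1_000_000)
mgf = 1.0 / (1.0 - scale**2)           # E[e^x] for Laplace(0, scale), scale < 1
x = x + (r * T - np.log(mgf))          # risk-neutral drift correction
mc_price = np.exp(-r * T) * np.maximum(S0 * np.exp(x) - K, 0.0).mean()

print(f"Black-Scholes: {bs_price:.3f}   Laplace MC: {mc_price:.3f}")
```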
Possible stretched exponential parametrization for humidity absorption in polymers.
Hacinliyan, A; Skarlatos, Y; Sahin, G; Atak, K; Aybar, O O
2009-04-01
Polymer thin films have irregular transient current characteristics under constant voltage. In hydrophilic and hydrophobic polymers, the irregularity is also known to depend on the humidity absorbed by the polymer sample. Different stretched exponential models are studied and it is shown that the absorption of humidity as a function of time can be adequately modelled by a class of these stretched exponential absorption models.
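A stretched-exponential absorption curve of the kind discussed here, m(t) = m_inf (1 − exp(−(t/τ)^β)), can be fitted with a few lines of SciPy. The data below are synthetic and this parametrization is one common choice, not necessarily the exact form used by the authors:

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_absorption(t, m_inf, tau, beta):
    # Absorbed humidity approaching m_inf with stretching exponent beta.
    return m_inf * (1.0 - np.exp(-(t / tau) ** beta))

t = np.linspace(0.1, 100.0, 60)                 # hours (synthetic)
rng = np.random.default_rng(4)
m = stretched_absorption(t, 1.0, 12.0, 0.6) + rng.normal(0, 0.01, t.size)

popt, pcov = curve_fit(stretched_absorption, t, m, p0=(1.0, 10.0, 0.5))
m_inf, tau, beta = popt
print(f"m_inf={m_inf:.3f}, tau={tau:.2f} h, beta={beta:.3f}")
```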
Solutions for transients in arbitrarily branching cables: III. Voltage clamp problems.
Major, G
1993-07-01
Branched cable voltage recording and voltage clamp analytical solutions derived in two previous papers are used to explore practical issues concerning voltage clamp. Single exponentials can be fitted reasonably well to the decay phase of clamped synaptic currents, although they contain many underlying components. The effective time constant depends on the fit interval. The smoothing effects on synaptic clamp currents of dendritic cables and series resistance are explored with a single cylinder + soma model, for inputs with different time courses. "Soma" and "cable" charging currents cannot be separated easily when the soma is much smaller than the dendrites. Subtractive soma capacitance compensation and series resistance compensation are discussed. In a hippocampal CA1 pyramidal neurone model, voltage control at most dendritic sites is extremely poor. Parameter dependencies are illustrated. The effects of series resistance compound those of dendritic cables and depend on the "effective capacitance" of the cell. Plausible combinations of parameters can cause order-of-magnitude distortions to clamp current waveform measures of simulated Schaeffer collateral inputs. These voltage clamp problems are unlikely to be solved by the use of switch clamp methods.
Predicting hepatitis B monthly incidence rates using weighted Markov chains and time series methods.
Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam
2015-01-01
Hepatitis B (HB) is a major cause of mortality worldwide. Accurately predicting the trend of the disease can provide an appropriate basis for health policy on disease prevention. This paper aimed to apply three different methods to predict monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. The weighted Markov chain (WMC) method, based on Markov chain theory, and two time series models, Holt exponential smoothing (HES) and SARIMA, were applied to the data. The results of the applied methods were compared in terms of the percentage of correctly predicted incidence rates. The monthly incidence rates were clustered into two clusters serving as the states of the Markov chain. The correctly predicted percentages of the first and second clusters for the WMC, HES and SARIMA methods were (100, 0), (84, 67) and (79, 47), respectively. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the results of the three models indicated that, given the seasonality and non-stationarity of the series, HES gave the most accurate prediction of the incidence rates.
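Holt exponential smoothing of the kind used here is available off the shelf in statsmodels (the parameter key names below assume statsmodels ≥ 0.12; a seasonal SARIMA fit would use statsmodels.tsa.statespace.SARIMAX instead). A minimal sketch with made-up monthly incidence data:

```python
import numpy as np
from statsmodels.tsa.holtwinters import Holt

# Made-up monthly incidence rates with a mild downward trend.
rng = np.random.default_rng(5)
y = 10.0 - 0.05 * np.arange(96) + rng.normal(0, 0.5, 96)

model = Holt(y)                    # level + trend, no seasonality
fit = model.fit(optimized=True)    # smoothing parameters chosen by MLE
forecast = fit.forecast(12)        # 12-month-ahead predictions

print(fit.params["smoothing_level"], fit.params["smoothing_trend"])
print(forecast[:3])
```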
Inhomogeneous growth of fluctuations of concentration of inertial particles in channel turbulence
NASA Astrophysics Data System (ADS)
Fouxon, Itzhak; Schmidt, Lukas; Ditlevsen, Peter; van Reeuwijk, Maarten; Holzner, Markus
2018-06-01
We study the growth of concentration fluctuations of weakly inertial particles in the turbulent channel flow starting with a smooth initial distribution. The steady-state concentration is singular and multifractal, so the growth describes the increasingly rugged structure of the distribution. We demonstrate that inhomogeneity influences the growth of concentration fluctuations profoundly. For homogeneous turbulence the growth is exponential and is fully determined by Kolmogorov-scale eddies. We derive lognormality of the statistics in this case. The growth exponents of the moments are proportional to the sum of Lyapunov exponents, which is quadratic in the small inertia of the particles. In contrast, for inhomogeneous turbulence the growth is linear in inertia. It involves correlations of inertial-range and viscous-scale eddies that turn the growth into a stretched exponential law with exponent three halves. We demonstrate using direct numerical simulations that the resulting growth rate can differ by orders of magnitude over the channel height. This strong variation might have relevance in the planetary boundary layer.
Universality in stochastic exponential growth.
Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R
2014-07-11
Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
An Elasto-Plastic Damage Model for Rocks Based on a New Nonlinear Strength Criterion
NASA Astrophysics Data System (ADS)
Huang, Jingqi; Zhao, Mi; Du, Xiuli; Dai, Feng; Ma, Chao; Liu, Jingbo
2018-05-01
The strength and deformation characteristics of rocks are the most important mechanical properties for rock engineering constructions. A new nonlinear strength criterion is developed for rocks by combining the Hoek-Brown (HB) criterion and the nonlinear unified strength criterion (NUSC). The proposed criterion accounts for the intermediate principal stress effect, unlike the HB criterion, and is nonlinear in the meridian plane, unlike the NUSC. Only three parameters need to be determined by experiments, including the two HB parameters σ_c and m_i. The failure surface of the proposed criterion is continuous, smooth and convex. The proposed criterion fits the true triaxial test data well and performs better than the other three existing criteria. Then, by introducing the Geological Strength Index, the proposed criterion is extended to rock masses and predicts the test data well. Finally, based on the proposed criterion, a triaxial elasto-plastic damage model for intact rock is developed. The plastic part is based on the effective stress, whose yield function is developed from the proposed criterion. For the damage part, the evolution function is assumed to have an exponential form. The performance of the constitutive model shows good agreement with the results of experimental tests.
Phenomenology of stochastic exponential growth
NASA Astrophysics Data System (ADS)
Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya
2017-06-01
Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM, instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
Parameter estimation and order selection for an empirical model of VO2 on-kinetics.
Alata, O; Bernard, O
2007-04-27
In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we provide, on simulated data, the performance of simulated annealing for estimating model parameters and the performance of information criteria for selecting the order. These simulated data are generated with both single-exponential and double-exponential models and corrupted by additive white Gaussian noise. The performance is evaluated at various signal-to-noise ratios (SNRs). Considering parameter estimation, the results show that the confidence in the estimated parameters improves as the SNR of the response to be fitted increases. Considering model selection, the results show that information criteria are appropriate statistical criteria for selecting the number of exponentials.
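The order-selection problem described here can be illustrated with synthetic data: fit a mono-exponential and a bi-exponential response and compare information criteria. The sketch below uses least squares rather than simulated annealing and a Gaussian-noise AIC, so it is a simplification of the authors' procedure; all constants are invented and the delayed-exponential fits may need good starting values:

```python
import numpy as np
from scipy.optimize import curve_fit

def vo2_1(t, A0, A1, tau1, td1):
    # Offset plus one delayed exponential (zero before the delay td1).
    return A0 + A1 * (1 - np.exp(-(t - td1) / tau1)) * (t >= td1)

def vo2_2(t, A0, A1, tau1, td1, A2, tau2, td2):
    return vo2_1(t, A0, A1, tau1, td1) + A2 * (1 - np.exp(-(t - td2) / tau2)) * (t >= td2)

t = np.arange(0, 360.0, 1.0)    # seconds
rng = np.random.default_rng(6)
y = vo2_2(t, 0.5, 1.5, 25.0, 10.0, 0.4, 120.0, 90.0) + rng.normal(0, 0.05, t.size)

def aic(resid, k):
    # Gaussian-error AIC: n*log(RSS/n) + 2*k.
    n = resid.size
    return n * np.log(np.mean(resid**2)) + 2 * k

p1, _ = curve_fit(vo2_1, t, y, p0=(0.5, 1.0, 30.0, 5.0))
p2, _ = curve_fit(vo2_2, t, y, p0=(0.5, 1.0, 30.0, 5.0, 0.3, 100.0, 60.0))
print("AIC 1-exp:", aic(y - vo2_1(t, *p1), 4))
print("AIC 2-exp:", aic(y - vo2_2(t, *p2), 7))
```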
Chowell, Gerardo; Viboud, Cécile
2016-10-01
The increasing use of mathematical models for epidemic forecasting has highlighted the importance of designing models that capture the baseline transmission characteristics in order to generate reliable epidemic forecasts. Improved models for epidemic forecasting could be achieved by identifying signature features of epidemic growth, which could inform the design of models of disease spread and reveal important characteristics of the transmission process. In particular, it is often taken for granted that the early growth phase of different growth processes in nature follow early exponential growth dynamics. In the context of infectious disease spread, this assumption is often convenient to describe a transmission process with mass action kinetics using differential equations and generate analytic expressions and estimates of the reproduction number. In this article, we carry out a simulation study to illustrate the impact of incorrectly assuming an exponential-growth model to characterize the early phase (e.g., 3-5 disease generation intervals) of an infectious disease outbreak that follows near-exponential growth dynamics. Specifically, we assess the impact on: 1) goodness of fit, 2) bias on the growth parameter, and 3) the impact on short-term epidemic forecasts. Designing transmission models and statistical approaches that more flexibly capture the profile of epidemic growth could lead to enhanced model fit, improved estimates of key transmission parameters, and more realistic epidemic forecasts.
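One common way to relax the exponential assumption is the generalized-growth model dC/dt = r C(t)^p, where p = 1 recovers exponential growth and p < 1 gives sub-exponential growth. The sketch below, with invented cumulative case counts, fits both forms and compares them; it mirrors the type of comparison the authors describe but is not their code:

```python
import numpy as np
from scipy.optimize import curve_fit

def ggm_curve(t, r, p, c0=5.0):
    # Closed-form solution of dC/dt = r*C**p with C(0) = c0 (valid for p < 1);
    # the base is clipped so the optimizer cannot wander into invalid territory.
    base = np.maximum(c0 ** (1.0 - p) + (1.0 - p) * r * t, 1e-12)
    return base ** (1.0 / (1.0 - p))

days = np.arange(0, 30.0)
rng = np.random.default_rng(7)
# Invented case counts from a sub-exponential truth (p = 0.8) with noise.
cases = ggm_curve(days, 0.6, 0.8) * rng.lognormal(0.0, 0.03, days.size)

# Exponential fit (p fixed at 1) versus generalized-growth fit (p free).
p_exp, _ = curve_fit(lambda t, r: 5.0 * np.exp(r * t), days, cases, p0=(0.3,))
p_ggm, _ = curve_fit(lambda t, r, p: ggm_curve(t, r, p), days, cases, p0=(0.3, 0.9))

sse = lambda yhat: np.sum((cases - yhat) ** 2)
print("SSE exponential :", sse(5.0 * np.exp(p_exp[0] * days)))
print("SSE generalized :", sse(ggm_curve(days, *p_ggm)))
print("estimated p     :", p_ggm[1])
```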
KAM tori and whiskered invariant tori for non-autonomous systems
NASA Astrophysics Data System (ADS)
Canadell, Marta; de la Llave, Rafael
2015-08-01
We consider non-autonomous dynamical systems which converge to autonomous (or periodic) systems exponentially fast in time. Such systems appear naturally as models of many physical processes affected by external pulses. We introduce definitions of non-autonomous invariant tori and non-autonomous whiskered tori and their invariant manifolds, and we prove their persistence under small perturbations, smooth dependence on parameters and several geometric properties (if the systems are Hamiltonian, the tori are Lagrangian manifolds). We note that such definitions are problematic for general time-dependent systems, but we show that they are unambiguous for systems converging exponentially fast to autonomous ones. The proof of persistence relies only on a standard implicit function theorem in Banach spaces; it does not require that the rotations in the tori are Diophantine, nor that the systems we consider preserve any geometric structure. We only require that the autonomous system preserves these objects. In particular, when the autonomous system is integrable, we obtain the persistence of tori with rational rotation vectors. We also discuss fast and efficient algorithms for their computation. The method also applies to infinite-dimensional systems which define a good evolution, e.g. PDEs. When the systems considered are Hamiltonian, we show that the time-dependent invariant tori are isotropic. Hence, the invariant tori of maximal dimension are Lagrangian manifolds. We also obtain that the (un)stable manifolds of whiskered tori are Lagrangian manifolds. We also include a comparison with the more global theory developed in Blazevski and de la Llave (2011).
Wesson, R.L.
1981-01-01
Quantitative calculations of the effect of a fault creep event on observations of changes in water level in wells provide an approach to the tectonic interpretation of these phenomena. For the pore pressure field associated with an idealized creep event having an exponential displacement versus time curve, an analytic expression has been obtained in terms of exponential-integral functions. The pore pressure versus time curves for observation points near the fault are pulselike; a sharp pressure increase (or decrease, depending on the direction of propagation) is followed by more gradual decay to the normal level after the creep event. The time function of the water level change may be obtained by applying the filter - derived by A. G. Johnson and others to determine the influence of atmospheric pressure on water level - to the analytic pore pressure versus time curves. The resulting water level curves show a fairly rapid increase (or decrease) and then a very gradual return to normal. The results of this analytic model do not reproduce the steplike changes in water level observed by Johnson and others. If the procedure used to obtain the water level from the pore pressure is correct, these results suggest that steplike changes in water level are produced not by smoothly propagating creep events but by creep events that propagate discontinuously, by changes in the bulk properties of the region around the well, or by some other mechanism.
Bajzer, Željko; Gibbons, Simon J.; Coleman, Heidi D.; Linden, David R.
2015-01-01
Noninvasive breath tests for gastric emptying are important techniques for understanding the changes in gastric motility that occur in disease or in response to drugs. Mice are often used as an animal model; however, the gamma variate model currently used for data analysis does not always fit the data appropriately. The aim of this study was to determine appropriate mathematical models to better fit mouse gastric emptying data including when two peaks are present in the gastric emptying curve. We fitted 175 gastric emptying data sets with two standard models (gamma variate and power exponential), with a gamma variate model that includes stretched exponential and with a proposed two-component model. The appropriateness of the fit was assessed by the Akaike Information Criterion. We found that extension of the gamma variate model to include a stretched exponential improves the fit, which allows for a better estimation of T1/2 and Tlag. When two distinct peaks in gastric emptying are present, a two-component model is required for the most appropriate fit. We conclude that use of a stretched exponential gamma variate model and when appropriate a two-component model will result in a better estimate of physiologically relevant parameters when analyzing mouse gastric emptying data. PMID:26045615
Constraining f(T) teleparallel gravity by big bang nucleosynthesis: f(T) cosmology and BBN.
Capozziello, S; Lambiase, G; Saridakis, E N
2017-01-01
We use Big Bang Nucleosynthesis (BBN) observational data on the primordial abundance of light elements to constrain f(T) gravity. The three most studied viable f(T) models, namely the power law, the exponential and the square-root exponential, are considered, and the BBN bounds are adopted in order to extract constraints on their free parameters. For the power-law model, we find that the constraints are in agreement with those obtained using late-time cosmological data. For the exponential and the square-root exponential models, we show that for reliable regions of parameter space they always satisfy the BBN bounds. We conclude that viable f(T) models can successfully satisfy the BBN constraints.
A long-term earthquake rate model for the central and eastern United States from smoothed seismicity
Moschetti, Morgan P.
2015-01-01
I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2 × 10⁻³. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
The matrix exponential in transient structural analysis
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
1987-01-01
The primary usefulness of the presented theory is the ability to represent the effects of high-frequency linear response accurately without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high-frequency response, the approximation in the exponential matrix solution is in the time domain. Truncating the series solution for the matrix exponential makes the solution inaccurate after a certain time; up to that time, however, the solution is extremely accurate, including all high-frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. Use of the exponential matrix in structural dynamics is demonstrated by simulating the free-vibration response of multi-degree-of-freedom models of cantilever beams.
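The idea can be illustrated with SciPy's expm on a small state-space model: writing the equations of motion in first-order form z' = Az, the one-step propagator Φ = e^(AΔt) reproduces the response exactly at the sample instants regardless of how stiff the system is. The matrices below are an illustrative two-DOF surrogate, not from the report:

```python
import numpy as np
from scipy.linalg import expm

# Two-DOF spring-mass chain (illustrative cantilever surrogate).
M = np.diag([1.0, 1.0])
K = np.array([[2.0, -1.0], [-1.0, 1.0]]) * 1.0e4   # stiff springs -> high frequencies

# First-order form: z = [x, v], z' = A z.
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K, np.zeros((2, 2))]])

dt = 0.01                 # far larger than the shortest natural period
Phi = expm(A * dt)        # exact one-step propagator at the sample instants

z = np.array([1.0, 0.0, 0.0, 0.0])   # initial displacement of DOF 1
for _ in range(100):
    z = Phi @ z           # free-vibration response, step by step
print("displacements after 1 s:", z[:2])
```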
Modeling of magnitude distributions by the generalized truncated exponential distribution
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-01-01
The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, then the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem as seismic regions are constructed scientific objects and not natural units. We overcome it by the generalization of the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed by the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss the possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement by the GTED in contrast to the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
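The weak point of the TED mentioned above is easy to verify numerically: mixing two TEDs that differ only in the upper bound magnitude yields a density with a jump at the smaller cutoff, which no single TED (whose density is continuous on its support) can reproduce. A small sketch with invented parameter values:

```python
import numpy as np

def ted_pdf(m, lam, m0, mmax):
    # Truncated exponential density on [m0, mmax].
    norm = 1.0 - np.exp(-lam * (mmax - m0))
    f = lam * np.exp(-lam * (m - m0)) / norm
    return np.where((m >= m0) & (m <= mmax), f, 0.0)

lam, m0 = 2.0, 4.0
m = np.linspace(4.0, 8.0, 401)
# Equal-weight mixture of two TEDs with upper bounds 7 and 8.
mix = 0.5 * ted_pdf(m, lam, m0, 7.0) + 0.5 * ted_pdf(m, lam, m0, 8.0)

# The mixture density drops discontinuously at m = 7 (the first cutoff),
# whereas a single TED on [4, 8] would be continuous there.
i = np.searchsorted(m, 7.0)
print("density just below 7:", mix[i - 1], " just above 7:", mix[i + 1])
```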
Lagrangian predictability characteristics of an Ocean Model
NASA Astrophysics Data System (ADS)
Lacorata, Guglielmo; Palatella, Luigi; Santoleri, Rosalia
2014-11-01
The Mediterranean Forecasting System (MFS) Ocean Model, provided by INGV, has been chosen as a case study to analyze Lagrangian trajectory predictability by means of a dynamical systems approach. To this end, numerical trajectories are tested against a large amount of Mediterranean drifter data, used as a sample of the actual tracer dynamics across the sea. The separation rate of a trajectory pair is measured by computing the Finite-Scale Lyapunov Exponent (FSLE) of first and second kind. An additional kinematic Lagrangian model (KLM), suitably treated to avoid "sweeping"-related problems, has been nested into the MFS in order to recover, in a statistical sense, the velocity field contributions to pair particle dispersion, at mesoscale level, smoothed out by finite resolution effects. Some of the results emerging from this work are: (a) drifter pair dispersion displays Richardson's turbulent diffusion inside the [10-100] km range, while numerical simulations of MFS alone (i.e., without the subgrid model) indicate exponential separation; (b) adding the subgrid model, model pair dispersion gets very close to the observed data, indicating that the KLM is effective in filling the energy "mesoscale gap" present in MFS velocity fields; (c) there exists a threshold size beyond which pair dispersion becomes weakly sensitive to the difference between model and "real" dynamics; (d) the whole methodology presented here can be used to quantify model errors and validate numerical current fields, as far as forecasts of Lagrangian dispersion are concerned.
Inflationary dynamics with a smooth slow-roll to constant-roll era transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odintsov, S.D.; Oikonomou, V.K., E-mail: odintsov@ieec.uab.es, E-mail: v.k.oikonomou1979@gmail.com
In this paper we investigate the implications of having a varying second slow-roll index on the canonical scalar field inflationary dynamics. We shall be interested in cases where the second slow-roll index can take small values and, correspondingly, large values, for limiting cases of the function that quantifies the variation of the second slow-roll index. As we demonstrate, this can naturally introduce a smooth transition between slow-roll and constant-roll eras. We discuss the theoretical implications of the mechanism we introduce and we use various illustrative examples in order to better understand the new features that the varying second slow-roll index introduces. In the examples we will present, the second slow-roll index has exponential dependence on the scalar field, and in one of these cases, the slow-roll era corresponds to a type of α-attractor inflation. Finally, we briefly discuss how the combination of slow-roll and constant-roll may lead to non-Gaussianities in the primordial perturbations.
Application of Holt exponential smoothing and ARIMA methods to population data in West Java
NASA Astrophysics Data System (ADS)
Supriatna, A.; Susanti, D.; Hertini, E.
2017-01-01
One time series method that is often used to predict data containing a trend is Holt's method. Holt's method applies separate smoothing parameters to the level and the trend of the original data, which aims to smooth the trend value. In addition to Holt, the ARIMA method can be used on a wide variety of data, including data containing a trend pattern. The actual population data from 1998-2015 contain a trend, so Holt and ARIMA methods can be applied to obtain predicted values for several periods. The best method is selected by the smallest MAPE and MAE errors. The result using the Holt method is 47,205,749 people in 2016, 47,535,324 in 2017, and 48,041,672 in 2018, with a MAPE of 0.469744 and an MAE of 189,731. The result using the ARIMA method is 46,964,682 people in 2016, 47,342,189 in 2017, and 47,899,696 in 2018, with a MAPE of 0.4380 and an MAE of 176,626.
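For the ARIMA side of such a comparison, a minimal sketch with statsmodels follows; the annual totals, the order (1, 1, 1), and the error metrics are illustrative, not the study's data or chosen specification:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Invented annual population totals with a trend (1998-2015), in millions.
y = np.array([39.2, 39.9, 40.5, 41.2, 41.8, 42.4, 43.0, 43.7,
              44.1, 44.6, 45.1, 45.5, 46.0, 46.4, 46.8, 47.1,
              47.4, 47.7])

fit = ARIMA(y, order=(1, 1, 1)).fit()
fc = fit.forecast(3)                     # three-year-ahead predictions
print("forecasts (millions):", np.round(fc, 3))

# In-sample accuracy, analogous to the paper's MAPE/MAE comparison.
resid = y[1:] - fit.predict(start=1, end=len(y) - 1)
print("MAE :", np.abs(resid).mean())
print("MAPE:", 100 * np.abs(resid / y[1:]).mean(), "%")
```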
Radiofrequency in cosmetic dermatology.
Beasley, Karen L; Weiss, Robert A
2014-01-01
The demand for noninvasive methods of facial and body rejuvenation has experienced exponential growth over the last decade. There is a particular interest in safe and effective ways to decrease skin laxity and smooth irregular body contours and texture without downtime. These noninvasive treatments are being sought after because less time for recovery means less time lost from work and social endeavors. Radiofrequency (RF) treatments are traditionally titrated to be nonablative and are optimal for those wishing to avoid recovery time. Not only is there minimal recovery but also a high level of safety with aesthetic RF treatments. Copyright © 2014 Elsevier Inc. All rights reserved.
Multi-Level Adaptive Techniques (MLAT) for singular-perturbation problems
NASA Technical Reports Server (NTRS)
Brandt, A.
1978-01-01
The multilevel (multigrid) adaptive technique, a general strategy of solving continuous problems by cycling between coarser and finer levels of discretization is described. It provides very fast general solvers, together with adaptive, nearly optimal discretization schemes. In the process, boundary layers are automatically either resolved or skipped, depending on a control function which expresses the computational goal. The global error decreases exponentially as a function of the overall computational work, in a uniform rate independent of the magnitude of the singular-perturbation terms. The key is high-order uniformly stable difference equations, and uniformly smoothing relaxation schemes.
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed significant inherent overdispersion (p-value < 0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between the methods. However, flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true or inherent overdispersion.
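In practice, an overdispersion check of this kind can be run in a few lines of statsmodels: fit the Poisson GLM, apply a regression-based score test (the Cameron-Trivedi auxiliary regression is one common choice; the paper's exact test may differ), and refit with a negative binomial family if needed. The data below are simulated, not registry data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 2000
x = rng.normal(size=(n, 2))
X = sm.add_constant(x)
mu = np.exp(0.5 + 0.3 * x[:, 0] - 0.2 * x[:, 1])
y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))   # overdispersed counts

pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
mu_hat = pois.mu

# Cameron-Trivedi auxiliary regression: ((y - mu)^2 - y)/mu on mu, no intercept;
# a significantly positive slope indicates overdispersion.
z = ((y - mu_hat) ** 2 - y) / mu_hat
aux = sm.OLS(z, mu_hat).fit()
print("overdispersion alpha:", aux.params[0], " t:", aux.tvalues[0])

nb = sm.GLM(y, X,
            family=sm.families.NegativeBinomial(alpha=max(aux.params[0], 0.01))).fit()
print("Poisson deviance:", pois.deviance, " NB deviance:", nb.deviance)
```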
Investigation of non-Gaussian effects in the Brazilian option market
NASA Astrophysics Data System (ADS)
Sosa-Correa, William O.; Ramos, Antônio M. T.; Vasconcelos, Giovani L.
2018-04-01
An empirical study of the Brazilian option market is presented in light of three option pricing models, namely the Black-Scholes model, the exponential model, and a model based on a power law distribution, the so-called q-Gaussian distribution or Tsallis distribution. It is found that the q-Gaussian model performs better than the Black-Scholes model in about one third of the option chains analyzed. But among these cases, the exponential model performs better than the q-Gaussian model in 75% of the time. The superiority of the exponential model over the q-Gaussian model is particularly impressive for options close to the expiration date, where its success rate rises above ninety percent.
Rezapour, Ehsan; Pettersen, Kristin Y; Liljebäck, Pål; Gravdahl, Jan T; Kelasidi, Eleni
This paper considers path following control of planar snake robots using virtual holonomic constraints. In order to present a model-based path following control design for the snake robot, we first derive the Euler-Lagrange equations of motion of the system. Subsequently, we define geometric relations among the generalized coordinates of the system, using the method of virtual holonomic constraints. These appropriately defined constraints shape the geometry of a constraint manifold for the system, which is a submanifold of the configuration space of the robot. Furthermore, we show that the constraint manifold can be made invariant by a suitable choice of feedback. In particular, we analytically design a smooth feedback control law to exponentially stabilize the constraint manifold. We show that enforcing the appropriately defined virtual holonomic constraints for the configuration variables implies that the robot converges to and follows a desired geometric path. Numerical simulations and experimental results are presented to validate the theoretical approach.
NASA Astrophysics Data System (ADS)
Barnett, Alex H.; Nelson, Bradley J.; Mahoney, J. Matthew
2015-09-01
We apply boundary integral equations for the first time to the two-dimensional scattering of time-harmonic waves from a smooth obstacle embedded in a continuously-graded unbounded medium. In the case we solve, the square of the wavenumber (refractive index) varies linearly in one coordinate, i.e. (Δ + E + x_2) u(x_1, x_2) = 0 where E is a constant; this models quantum particles of fixed energy in a uniform gravitational field, and has broader applications to stratified media in acoustics, optics and seismology. We evaluate the fundamental solution efficiently with exponential accuracy via numerical saddle-point integration, using the truncated trapezoid rule with typically 10² nodes, with an effort that is independent of the frequency parameter E. By combining with a high-order Nyström quadrature, we are able to solve the scattering from obstacles 50 wavelengths across to 11 digits of accuracy in under a minute on a desktop or laptop.
Search for gamma-ray spectral modulations in Galactic pulsars
NASA Astrophysics Data System (ADS)
Majumdar, Jhilik; Calore, Francesca; Horns, Dieter
2018-04-01
Well-motivated extensions of the standard model predict ultra-light and fundamental pseudo-scalar particles (e.g., axions or axion-like particles: ALPs). Similarly to the Primakoff effect for axions, ALPs can mix with photons and consequently be searched for in laboratory experiments and with astrophysical observations. Here, we search for energy-dependent modulations of high-energy gamma-ray spectra that are tell-tale signatures of photon-ALP mixing. To this end, we analyze the data recorded with the Fermi-LAT from Galactic pulsars selected to have a line of sight crossing spiral arms at a large pitch angle. The large-scale Galactic magnetic field traces the shape of spiral arms, such that a sizable photon-ALP conversion probability is expected for the sources considered. For the nearby Vela pulsar, the energy spectrum is well described by a smooth model spectrum (a power law with a sub-exponential cut-off), while for the six selected Galactic pulsars, a common fit of the ALP parameters improves the goodness of fit in comparison to a smooth model spectrum with a significance of 4.6 σ. We determine the most likely values for the mass m_a and coupling g_aγγ to be m_a = (3.6 −0.2/+0.5 stat. ± 0.2 syst.) neV and g_aγγ = (2.3 −0.4/+0.3 stat. ± 0.4 syst.) × 10⁻¹⁰ GeV⁻¹. In the error budget, we consider instrumental effects, scaling of the adopted Galactic magnetic field model (± 20%), and uncertainties on the distance of individual sources. The best-fit parameters are a factor of ≈ 3 larger than the current best limit on solar ALP generation obtained with the CAST helioscope, although known modifications of the photon-ALP mixing in the high-density solar environment could provide a plausible explanation for the apparent tension between the helioscope bound and the indication for photon-ALP mixing reported here.
Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking
NASA Astrophysics Data System (ADS)
Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.
2009-08-01
The Interacting Multiple Model (IMM) estimator has been proven to be effective in tracking agile targets. Smoothing, or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of target states. Various methods have been proposed for multiple-model smoothing in the literature. In this paper, a new smoothing method, which involves forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM, is proposed. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode-conditioned smoother uses the standard Kalman smoothing recursion. The resulting algorithm provides improved but delayed estimates of target states. Simulation studies are performed to demonstrate the improved performance with a maneuvering target scenario. The comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique for accounting for model switching in smoothing is key to improving the performance.
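Each mode-conditioned smoother inside such a scheme is a standard Kalman forward filter followed by a backward Rauch-Tung-Striebel (RTS) recursion. A minimal single-model sketch of that building block follows (constant-velocity model, invented noise levels); the IMM-specific mixing steps of the paper are not shown:

```python
import numpy as np

# Constant-velocity model: state [pos, vel], scalar position measurements.
dt = 1.0
F = np.array([[1, dt], [0, 1]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
R = np.array([[1.0]])

rng = np.random.default_rng(9)
T = 50
truth = np.zeros((T, 2)); truth[0] = [0, 1]
for k in range(1, T):
    truth[k] = F @ truth[k - 1] + rng.multivariate_normal([0, 0], Q)
z = truth[:, 0] + rng.normal(0, 1, T)

# Forward Kalman filter, keeping predicted and filtered moments.
xf = np.zeros((T, 2)); Pf = np.zeros((T, 2, 2))
xp = np.zeros((T, 2)); Pp = np.zeros((T, 2, 2))
x, P = np.array([0.0, 0.0]), np.eye(2) * 10
for k in range(T):
    xp[k] = F @ x; Pp[k] = F @ P @ F.T + Q          # predict
    S = H @ Pp[k] @ H.T + R
    Kg = Pp[k] @ H.T @ np.linalg.inv(S)
    x = xp[k] + Kg @ (z[k] - H @ xp[k])             # update
    P = (np.eye(2) - Kg @ H) @ Pp[k]
    xf[k], Pf[k] = x, P

# Backward RTS smoothing pass.
xs = xf.copy(); Ps = Pf.copy()
for k in range(T - 2, -1, -1):
    G = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
    xs[k] = xf[k] + G @ (xs[k + 1] - xp[k + 1])
    Ps[k] = Pf[k] + G @ (Ps[k + 1] - Pp[k + 1]) @ G.T

print("filter RMSE  :", np.sqrt(np.mean((xf[:, 0] - truth[:, 0])**2)))
print("smoother RMSE:", np.sqrt(np.mean((xs[:, 0] - truth[:, 0])**2)))
```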
[Hyperspectral Remote Sensing Estimation Models for Pasture Quality].
Ma, Wei-wei; Gong, Cai-lan; Hu, Yong; Wei, Yong-lin; Li, Long; Liu, Feng-yi; Meng, Peng
2015-10-01
Crude protein (CP), crude fat (CFA) and crude fiber (CFI) are key indicators for evaluating the quality and feeding value of pasture. Hence, identification of these biological contents is an essential practice for animal husbandry. As current approaches to pasture quality estimation are time-consuming and costly, and even generate hazardous waste, a real-time and non-destructive method is developed in this study using pasture canopy hyperspectral data. A field campaign was carried out in August 2013 around Qinghai Lake in order to obtain field spectral properties of 19 types of natural pasture using the ASD FieldSpec 3, a field spectrometer that works in the optical region (350-2500 nm) of the electromagnetic spectrum. In addition to the spectral data, pasture samples were also collected from the field and examined in the laboratory to measure the relative concentrations of CP (%), CFA (%) and CFI (%). After spectral denoising and smoothing, the relationships of the pasture quality parameters with the reflectance spectrum, the first derivatives of reflectance (FDR), band ratios and the wavelet coefficients (WCs) were analyzed. The concentrations of CP, CFA and CFI of pasture were found to be closely correlated with FDR at wavebands centered at 424, 1668, and 918 nm, as well as with the low-scale (scale = 2, 4) Morlet, Coiflets and Gaussian WCs. Accordingly, linear, exponential, and polynomial equations between each pasture variable and FDR or WCs were developed. Validation of the developed equations indicated that the polynomial model with an independent variable of Coiflets WCs (scale = 4, wavelength = 1209 nm), the polynomial model with an independent variable of FDR, and the exponential model with an independent variable of FDR were the optimal models for predicting the concentrations of CP, CFA and CFI of pasture, respectively. The R² of the pasture quality estimation models was between 0.646 and 0.762 at the 0.01 significance level. The results suggest that the first derivatives or the wavelet coefficients of hyperspectral reflectance in the visible and near-infrared regions can be used for pasture quality estimation, and will provide a basis for real-time prediction of pasture quality using remote sensing techniques.
NASA Astrophysics Data System (ADS)
Allen, Linda J. S.
2016-09-01
Dr. Chowell and colleagues emphasize the importance of considering a variety of modeling approaches to characterize the growth of an epidemic during the early stages [1]. A fit of data from the 2009 H1N1 influenza pandemic and the 2014-2015 Ebola outbreak to models indicates sub-exponential growth, in contrast to the classic, homogeneous-mixing SIR model with exponential growth. With incidence rate βSI/N and S approximately equal to the total population size N, the number of new infections in an SIR epidemic model grows exponentially as in the differential equation dI/dt = βI − γI = (β − γ)I.
Nishiye, E; Somlyo, A V; Török, K; Somlyo, A P
1993-01-01
1. The effects of MgADP on cross-bridge kinetics were investigated using laser flash photolysis of caged ATP (P³-1-(2-nitrophenyl)ethyladenosine 5'-triphosphate), in guinea-pig portal vein smooth muscle permeabilized with Staphylococcus aureus alpha-toxin. Isometric tension and in-phase stiffness transitions from the rigor state were monitored upon photolysis of caged ATP. The concentration of ATP released from caged ATP, estimated by high-pressure liquid chromatography (HPLC), was 1.3 mM. 2. The time course of relaxation initiated by photolysis of caged ATP in the absence of Ca2+ was well fitted during the initial 200 ms by two exponential functions with time constants of, respectively, tau 1 = 34 ms and tau 2 = 1.2 s and relative amplitudes of 0.14 and 0.86. Multiple exponential functions were needed to fit longer intervals; the half-time of the overall relaxation was 0.8 s. The second-order rate constant for cross-bridge detachment by ATP, estimated from the rate of initial relaxation, was 0.4-2.3 × 10⁴ M⁻¹ s⁻¹. 3. MgADP dose-dependently reduced both the relative amplitude of the first component and the rate constant of the second component of relaxation. Conversely, treatment of muscles with apyrase, to deplete endogenous ADP, increased the relative amplitude of the first component. In the presence of MgADP, in-phase stiffness decreased during force maintenance, suggesting that the force per cross-bridge increased. The apparent dissociation constant (Kd) of MgADP for the cross-bridge binding site, estimated from its concentration-dependent effect on the relative amplitude of the first component, was 1.3 microM. This affinity is much higher than previously reported values (50-300 microM for smooth muscle; 18-400 microM for skeletal muscle; 7-10 microM for cardiac muscle). It is possible that the high affinity reflects the properties of a state generated during the co-operative reattachment cycle, rather than that of the rigor bridge. 4. The rate constant of MgADP release from cross-bridges, estimated from its concentration-dependent effect on the rate constant of the second (tau 2) component, was 0.35-7.7 s⁻¹. To the extent that reattachment of cross-bridges could slow relaxation even during the initial 200 ms, this rate constant may be an underestimate. 5. Inorganic phosphate (Pi, 30 mM) did not affect the rate of relaxation during the initial approximately 50 ms, but accelerated the slower phase of relaxation, consistent with a cyclic cross-bridge model in which Pi increases the proportion of cross-bridges in detached ('weakly bound') states. (ABSTRACT TRUNCATED AT 400 WORDS) PMID:8487195
Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong
2012-01-01
Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, a piecewise-smooth X-ray computed tomography (CT) can be reconstructed from sparse-view projection data without introducing noticeable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously-reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several noticeable gains, in terms of noise-resolution tradeoff plots and full width at half maximum values, as compared to the corresponding conventional TV-POCS algorithm. PMID:23154621
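The adaptive weighting idea is compact enough to state in code. In the sketch below (not the authors' implementation), each neighbor difference in the TV sum is weighted by exp(−(ΔI/δ)²), so strong edges, where |ΔI| is large, are penalized less; δ is a user-chosen edge-scale parameter:

```python
import numpy as np

def awtv(img, delta=0.01):
    """Adaptive-weighted total variation of a 2-D image.

    Each squared neighbor difference is scaled by w = exp(-(dI/delta)**2),
    so edges contribute less to the penalty and are better preserved
    during minimization. Conventional TV is recovered as delta -> infinity.
    """
    dx = np.diff(img, axis=1)      # horizontal neighbor differences
    dy = np.diff(img, axis=0)      # vertical neighbor differences
    wx = np.exp(-(dx / delta) ** 2)
    wy = np.exp(-(dy / delta) ** 2)
    return np.sum(np.sqrt(wx[:-1, :] * dx[:-1, :] ** 2 +
                          wy[:, :-1] * dy[:, :-1] ** 2))

img = np.zeros((64, 64)); img[:, 32:] = 1.0   # a sharp vertical edge
print("AwTV:", awtv(img), " TV-like (huge delta):", awtv(img, delta=1e6))
```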
A Simulation To Model Exponential Growth.
ERIC Educational Resources Information Center
Appelbaum, Elizabeth Berman
2000-01-01
Describes a simulation using dice-tossing students in a population cluster to model the growth of cancer cells. This growth is recorded in a scatterplot and compared to an exponential function graph. (KHR)
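The classroom simulation translates directly into code: every "cell" rolls a die each round and, say, a roll of 5 or 6 adds a new cell, so the expected population grows geometrically by a factor of 1 + 2/6 per round. A hedged sketch (the article's exact dice rule may differ):

```python
import random

random.seed(0)
population = 10          # starting "cells" (students)
history = [population]

for generation in range(15):
    # Each cell rolls one die; a 5 or 6 spawns one new cell.
    births = sum(1 for _ in range(population) if random.randint(1, 6) >= 5)
    population += births
    history.append(population)

# Expected growth: N ~ N0 * (4/3)**t, i.e. exponential in the round number.
print(history)
```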
Attractors of three-dimensional fast-rotating Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Trahe, Markus
The three-dimensional (3-D) rotating Navier-Stokes equations describe the dynamics of rotating, incompressible, viscous fluids. In this work, they are considered with smooth, time-independent forces, and the original statements implied by the classical "Taylor-Proudman Theorem" of geophysics are rigorously proved. It is shown that fully developed turbulence of 3-D fast-rotating fluids is essentially characterized by turbulence of two-dimensional (2-D) fluids in terms of numbers of degrees of freedom. In this context, the 3-D nonlinear "resonant limit equations", which arise in a non-linear averaging process as the rotation frequency Ω → ∞, are studied and optimal (2-D-type) upper bounds for fractal box and Hausdorff dimensions of the global attractor as well as upper bounds for box dimensions of exponential attractors are determined. Then, the convergence of exponential attractors for the full 3-D rotating Navier-Stokes equations to exponential attractors for the resonant limit equations as Ω → ∞ in the sense of full Hausdorff-metric distances is established. This provides upper and lower semi-continuity of exponential attractors with respect to the rotation frequency and implies that the number of degrees of freedom (attractor dimension) of 3-D fast-rotating fluids is close to that of 2-D fluids. Finally, the algebraic-geometric structure of the Poincaré curves, which control the resonances and small divisor estimates for partial differential equations, is further investigated; the 3-D nonlinear limit resonant operators are characterized by three-wave interactions governed by these curves. A new canonical transformation between those curves is constructed, with far-reaching consequences on the density of the latter.
NASA Astrophysics Data System (ADS)
Jeong, Chan-Yong; Kim, Hee-Joong; Hong, Sae-Young; Song, Sang-Hun; Kwon, Hyuck-In
2017-08-01
In this study, we show that the two-stage unified stretched-exponential model can more exactly describe the time dependence of the threshold voltage shift (ΔV_TH) under long-term positive bias stress than the traditional stretched-exponential model in amorphous indium-gallium-zinc oxide (a-IGZO) thin-film transistors (TFTs). ΔV_TH is mainly dominated by electron trapping at short stress times, and the contribution of trap state generation becomes significant with an increase in the stress time. The two-stage unified stretched-exponential model can provide useful information not only for evaluating the long-term electrical stability and lifetime of the a-IGZO TFT but also for understanding the stress-induced degradation mechanism in a-IGZO TFTs.
State-space forecasting of Schistosoma haematobium time-series in Niono, Mali.
Medina, Daniel C; Findley, Sally E; Doumbia, Seydou
2008-08-13
Much of the developing world, particularly sub-Saharan Africa, exhibits high levels of morbidity and mortality associated with infectious diseases. The incidence of Schistosoma sp.-which are neglected tropical diseases exposing and infecting more than 500 and 200 million individuals in 77 countries, respectively-is rising because of 1) numerous irrigation and hydro-electric projects, 2) steady shifts from nomadic to sedentary existence, and 3) ineffective control programs. Notwithstanding the colossal scope of these parasitic infections, less than 0.5% of Schistosoma sp. investigations have attempted to predict their spatial and or temporal distributions. Undoubtedly, public health programs in developing countries could benefit from parsimonious forecasting and early warning systems to enhance management of these parasitic diseases. In this longitudinal retrospective (01/1996-06/2004) investigation, the Schistosoma haematobium time-series for the district of Niono, Mali, was fitted with general-purpose exponential smoothing methods to generate contemporaneous on-line forecasts. These methods, which are encapsulated within a state-space framework, accommodate seasonal and inter-annual time-series fluctuations. Mean absolute percentage error values were circa 25% for 1- to 5-month horizon forecasts. The exponential smoothing state-space framework employed herein produced reasonably accurate forecasts for this time-series, which reflects the incidence of S. haematobium-induced terminal hematuria. It obliquely captured prior non-linear interactions between disease dynamics and exogenous covariates (e.g., climate, irrigation, and public health interventions), thus obviating the need for more complex forecasting methods in the district of Niono, Mali. Therefore, this framework could assist with managing and assessing S. haematobium transmission and intervention impact, respectively, in this district and potentially elsewhere in the Sahel.
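The state-space exponential smoothing framework referred to here is implemented in statsmodels as ETSModel (statsmodels ≥ 0.12 assumed). A hedged sketch on made-up seasonal monthly counts, evaluated with the same kind of short-horizon MAPE used in the study:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

# Made-up monthly counts with seasonality (a stand-in for the
# S. haematobium series; not the Niono data).
rng = np.random.default_rng(10)
t = np.arange(102)
y = 50 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, t.size)
y = pd.Series(y, index=pd.period_range("1996-01", periods=t.size, freq="M"))

train, test = y[:-5], y[-5:]
model = ETSModel(train, error="add", trend="add", seasonal="add",
                 seasonal_periods=12)
res = model.fit(disp=False)

fc = res.forecast(5)        # 1- to 5-month horizon forecasts
mape = 100 * np.abs((test.values - fc.values) / test.values).mean()
print("5-month MAPE: %.1f%%" % mape)
```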
NASA Astrophysics Data System (ADS)
Pasari, S.; Kundu, D.; Dikshit, O.
2012-12-01
Earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are well-established probability models for recurrence interval estimation. However, each has shortcomings, so it is worth searching for alternative distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate its scope as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above events more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
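For illustration, the three-parameter exponentiated (generalized) exponential distribution has CDF F(x) = (1 - exp(-(x - m)/s))^a for x > m, so, unlike the gamma case, its hazard is available in closed form. Below is a minimal sketch of maximum likelihood fitting, assuming Python/SciPy and hypothetical recurrence intervals rather than the actual 20-event Himalayan catalogue.

```python
# Sketch: MLE for the exponentiated exponential distribution with density
# f(x) = (a/s) * (1 - exp(-(x-m)/s))**(a-1) * exp(-(x-m)/s), x > m
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x):
    a, s, m = params
    if a <= 0 or s <= 0 or np.any(x <= m):
        return np.inf  # outside the parameter space
    z = (x - m) / s
    return -np.sum(np.log(a / s) + (a - 1) * np.log1p(-np.exp(-z)) - z)

# Hypothetical recurrence intervals (years) between large events
x = np.array([3.2, 5.1, 7.4, 8.0, 9.6, 11.2, 4.4, 6.3, 10.1, 12.5])

res = minimize(neg_log_lik, x0=[2.0, 3.0, 0.0], args=(x,),
               method="Nelder-Mead")
a_hat, s_hat, m_hat = res.x
print("shape, scale, location:", a_hat, s_hat, m_hat)

# The hazard is closed-form: h(x) = f(x) / (1 - F(x)),
# with F(x) = (1 - exp(-(x-m)/s))**a
```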
Theory for Transitions Between Exponential and Stationary Phases: Universal Laws for Lag Time
NASA Astrophysics Data System (ADS)
Himeoka, Yusuke; Kaneko, Kunihiko
2017-04-01
The quantitative characterization of bacterial growth has attracted substantial attention since Monod's pioneering study. Theoretical and experimental works have uncovered several laws describing the exponential growth phase, in which the number of cells grows exponentially. However, microorganism growth also exhibits lag, stationary, and death phases under starvation conditions, in which cell growth is highly suppressed, and for these phases quantitative laws or theories are markedly underdeveloped. In fact, the models commonly adopted for the exponential phase, which consist of autocatalytic chemical components including ribosomes, can only show exponential growth or decay in a population; thus, phases that halt growth are not realized. Here, we propose a simple, coarse-grained cell model that includes an extra class of macromolecular components in addition to the autocatalytic active components that facilitate cellular growth. These extra components form a complex with the active components to inhibit the catalytic process. Depending on the nutrient condition, the model exhibits the typical transitions among the lag, exponential, stationary, and death phases. Furthermore, the lag time needed for growth recovery after starvation scales with the square root of the starvation time and is inversely related to the maximal growth rate. This is in agreement with experimental observations, in which the length of time of cell starvation is memorized in the slow accumulation of molecules. Moreover, the distribution of lag times among cells is skewed, with a long tail; if the starvation time is longer, an exponential tail appears, which is also consistent with experimental data. Our theory further predicts a strong dependence of lag time on the speed of substrate depletion, which can be tested experimentally. The present model and theoretical analysis provide universal growth laws beyond the exponential phase, offering insight into how cells halt growth without entering the death phase.
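The two scaling laws stated above can be written compactly as lag ∝ √(starvation time) and lag ∝ 1/μ_max. A toy illustration (the proportionality constant is an assumption, not from the paper):

```python
# Toy illustration of the reported lag-time laws:
#   lag time ~ sqrt(starvation time), and lag time ~ 1 / (maximal growth rate)
import numpy as np

def lag_time(t_starve, mu_max, c=1.0):
    """Hypothetical lag time; c is an assumed proportionality constant."""
    return c * np.sqrt(t_starve) / mu_max

for t in [1.0, 4.0, 16.0]:  # quadrupling starvation doubles the lag
    print(t, lag_time(t, mu_max=0.5))
```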
Zeng, Qianglin; Li, Dandan; Huang, Gui; Xia, Jin; Wang, Xiaoming; Zhang, Yamei; Tang, Wanping; Zhou, Hui
2016-08-31
Short-term forecasting of pertussis incidence is helpful for advance warning and for planning resource needs for future epidemics. Using the Auto-Regressive Integrated Moving Average (ARIMA) model and the Exponential Smoothing (ETS) model as alternative models in R software, this paper analyzed data from the Chinese Center for Disease Control and Prevention (China CDC) between January 2005 and June 2016. The ARIMA(0,1,0)(1,1,1)12 model (AICc = 1342.2, BIC = 1350.3) was selected as the best performing ARIMA model, the ETS(M,N,M) model (AICc = 1678.6, BIC = 1715.4) was selected as the best performing ETS model, and the ETS(M,N,M) model, having the minimum RMSE, was finally selected for in-sample simulation and out-of-sample forecasting. Descriptive statistics showed that the number of pertussis cases reported by China CDC increased by 66.20% from 2005 (4058 cases) to 2015 (6744 cases). According to the Hodrick-Prescott filter, there was an apparent cyclicity and seasonality in the pertussis reports. In out-of-sample forecasting, the model forecasted a relatively high number of incident cases in 2016, which indicates an increasing risk of ongoing pertussis resurgence in the near future. In this regard, the ETS model would be a useful tool for simulating and forecasting the incidence of pertussis, helping decision makers to take efficient decisions based on advance warning of disease incidence.
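The selection step reported here (fit candidate seasonal ARIMA and ETS models, compare information criteria, then forecast with the winner) can be sketched as follows; this assumes Python with statsmodels rather than the R workflow used in the study, and synthetic monthly counts in place of the China CDC series.

```python
# Sketch: compare a seasonal ARIMA fit with an ETS fit by BIC, then
# forecast with the better model. Synthetic data only.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

rng = np.random.default_rng(1)
idx = pd.date_range("2005-01", periods=138, freq="MS")  # 2005-01..2016-06
y = pd.Series(200 + 50 * np.sin(2 * np.pi * np.arange(138) / 12)
              + rng.normal(0, 20, 138), index=idx).clip(lower=1)

arima = SARIMAX(y, order=(0, 1, 0), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
ets = ETSModel(y, error="mul", trend=None, seasonal="mul",
               seasonal_periods=12).fit(disp=False)

print("ARIMA BIC:", arima.bic, " ETS BIC:", ets.bic)
best = ets if ets.bic < arima.bic else arima
print(best.forecast(6))  # out-of-sample forecast, 6 months ahead
```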
Self-charging of identical grains in the absence of an external field.
Yoshimatsu, R; Araújo, N A M; Wurm, G; Herrmann, H J; Shinbrot, T
2017-01-06
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study.
Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model
NASA Astrophysics Data System (ADS)
Al Sobhi, Mashail M.
2015-02-01
Bayesian estimates of the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Also, Bayesian prediction bounds for future DGOS from the exponentiated Weibull model are obtained. Symmetric and asymmetric loss functions are considered for the Bayesian computations. Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
State of charge modeling of lithium-ion batteries using dual exponential functions
NASA Astrophysics Data System (ADS)
Kuo, Ting-Jung; Lee, Kung-Yen; Huang, Chien-Kang; Chen, Jau-Horng; Chiu, Wei-Li; Huang, Chih-Fang; Wu, Shuen-De
2016-05-01
A mathematical model is developed by fitting the discharging curve of LiFePO4 batteries and used to investigate the relationship between the state of charge and the closed-circuit voltage. The proposed model consists of dual exponential terms and a constant term, which closely fit the characteristics of the dual equivalent RC circuits representing a LiFePO4 battery. One exponential term represents the stable discharging behavior, the other represents the unstable discharging behavior, and the constant term represents the cut-off voltage.
Nonparametric Model of Smooth Muscle Force Production During Electrical Stimulation.
Cole, Marc; Eikenberry, Steffen; Kato, Takahide; Sandler, Roman A; Yamashiro, Stanley M; Marmarelis, Vasilis Z
2017-03-01
A nonparametric model of the smooth muscle tension response to electrical stimulation was estimated using the Laguerre expansion technique of nonlinear system kernel estimation. The experimental data consisted of force responses of smooth muscle to energy-matched alternating single pulse and burst current stimuli. The burst stimuli led to at least a 10-fold increase in peak force in smooth muscle from Mytilus edulis, despite the constant energy constraint. A linear model did not fit the data, but a second-order model fit it accurately, so higher-order models were not required. These results show that the smooth muscle force response is not linearly related to the stimulation power.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundararaman, Ravishankar; Goddard, William A; Arias, Tomas A
2017-03-21
First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Finally, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.
2013-01-01
Background An inverse relationship between experience and risk of injury has been observed in many occupations. Due to statistical challenges, however, it has been difficult to characterize the effect of experience on the hazard of injury. In particular, because the time observed up to injury is equivalent to the amount of experience accumulated, the baseline hazard of injury becomes the main parameter of interest, excluding Cox proportional hazards models as applicable methods for consideration. Methods Using a data set of 81,301 hourly production workers of a global aluminum company at 207 US facilities, we compared competing parametric models for the baseline hazard to assess whether experience affected the hazard of injury at hire and after later job changes. Specific models considered included the exponential, Weibull, and two two-piece exponential models (one hypothesis-driven and one data-driven) to formally test the null hypothesis that experience does not impact the hazard of injury. Results We highlighted the advantages of our comparative approach and the interpretability of our selected model: a two-piece exponential model that allowed the baseline hazard of injury to change with experience. Our findings suggested a 30% increase in the hazard in the first year after job initiation and/or change. Conclusions Piecewise exponential models may be particularly useful in modeling risk of injury as a function of experience and have the additional benefit of interpretability over other similarly flexible models. PMID:23841648
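A two-piece (piecewise constant) exponential hazard is straightforward to fit by maximum likelihood, since each piece's rate is simply events divided by exposure within that piece. A minimal sketch under assumed values (a hypothesized one-year change point and synthetic right-censored experience times, not the aluminum-company data):

```python
# Sketch: two-piece exponential model with a change point tau.
# Hazard h(t) = lam1 for t < tau, lam2 for t >= tau.
# MLE: lam_j = (events in piece j) / (person-time in piece j).
import numpy as np

rng = np.random.default_rng(2)
n = 5000
tau = 1.0  # hypothesized change point: one year after job start/change

# Simulate an elevated hazard of 0.30/yr in year one, 0.23/yr afterwards
t1 = rng.exponential(1 / 0.30, n)
t = np.where(t1 < tau, t1, tau + rng.exponential(1 / 0.23, n))
censor = rng.uniform(0, 8, n)              # administrative censoring
time = np.minimum(t, censor)
event = (t <= censor).astype(int)

# Person-time and event counts within each piece
expo1 = np.minimum(time, tau).sum()
expo2 = np.maximum(time - tau, 0).sum()
d1 = ((time < tau) & (event == 1)).sum()
d2 = ((time >= tau) & (event == 1)).sum()

lam1, lam2 = d1 / expo1, d2 / expo2
print(f"hazard before tau: {lam1:.3f}, after tau: {lam2:.3f}, "
      f"ratio: {lam1 / lam2:.2f}")  # ~1.3, i.e. ~30% higher in year one
```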
Modeling the dispersion effects of contractile fibers in smooth muscles
NASA Astrophysics Data System (ADS)
Murtada, Sae-Il; Kroon, Martin; Holzapfel, Gerhard A.
2010-12-01
Micro-structurally based models for smooth muscle contraction are crucial for a better understanding of pathological conditions such as atherosclerosis, incontinence and asthma. Such models should account for both the underlying mechanical structure and the biochemical activation. Hence, a simple mechanochemical model is proposed that includes the dispersion of the orientation of smooth muscle myofilaments and that is capable of capturing available experimental data on smooth muscle contraction. This allows a refined study of the effects of myofilament dispersion on smooth muscle contraction. A classical biochemical model is used to describe the cross-bridge interactions with the thin filament in smooth muscles, in which calcium-dependent myosin phosphorylation is the only regulatory mechanism. A novel mechanical model considers the dispersion of the contractile fiber orientations in smooth muscle cells by means of a strain-energy function in terms of one dispersion parameter. All model parameters have a biophysical meaning and may be estimated through comparisons with experimental data. The contraction of the middle layer of a carotid artery is studied numerically. Using a tube geometry, the relationships between internal pressure and stretch are investigated as functions of the dispersion parameter; the results imply a strong influence of the orientation of smooth muscle myofilaments on the contraction response. It is straightforward to implement this model in a finite element code to analyze more complex boundary-value problems.
A Simulation of the ECSS Help Desk with the Erlang a Model
2011-03-01
a popular distribution is the exponential distribution, as shown in Figure 3 [figure caption: "Figure 3: Exponential Distribution (Bourke, 2001)"] ... System Sciences, Vol 8, 235B. Bourke, P. (2001, January). Miscellaneous Functions. Retrieved January 22, 2011, from http://local.wasp.uwa.edu.au
Method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1972-01-01
Two computer programs, developed according to two general types of exponential models for conducting nonlinear exponential regression analysis, are described. A least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. The program is written in FORTRAN 5 for the Univac 1108 computer.
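The approach described (linearize the exponential model with a first-order Taylor expansion and iterate least squares, i.e., Gauss-Newton) looks roughly like this; a minimal sketch in Python rather than the original FORTRAN, for the simple model y = a·exp(b·x):

```python
# Sketch: nonlinear exponential regression y = a*exp(b*x) by Gauss-Newton:
# linearize about the current (a, b) via a Taylor expansion, solve the
# resulting linear least squares problem, update, and repeat.
import numpy as np

x = np.linspace(0, 4, 30)
rng = np.random.default_rng(3)
y = 2.5 * np.exp(0.8 * x) + rng.normal(0, 1.0, x.size)

a, b = 1.0, 0.5  # initial guesses
for _ in range(50):
    f = a * np.exp(b * x)
    J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])  # df/da, df/db
    delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)
    a, b = a + delta[0], b + delta[1]
    if np.linalg.norm(delta) < 1e-10:
        break

print(f"a = {a:.3f}, b = {b:.3f}")  # should be near 2.5 and 0.8
```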
Jane, Nancy Yesudhas; Nehemiah, Khanna Harichandran; Arputharaj, Kannan
2016-01-01
Clinical time-series data acquired from electronic health records (EHR) are subject to temporal complexities such as irregular observations, missing values and time-constrained attributes that make the knowledge discovery process challenging. This paper presents a temporal rough set induced neuro-fuzzy (TRiNF) mining framework that handles these complexities and builds an effective clinical decision-making system. TRiNF provides two functionalities, namely temporal data acquisition (TDA) and temporal classification. In TDA, a time-series forecasting model is constructed by adopting an improved double exponential smoothing method. The forecasting model is used in missing value imputation and temporal pattern extraction. The relevant attributes are selected using a temporal pattern based rough set approach. In temporal classification, a classification model is built with the selected attributes using a temporal pattern induced neuro-fuzzy classifier. For experimentation, this work uses two clinical time-series datasets of hepatitis and thrombosis patients. The experimental results show that with the proposed TRiNF framework there is a significant reduction in the error rate, with an average classification accuracy of 92.59% on the hepatitis dataset and 91.69% on the thrombosis dataset. The obtained classification results prove the efficiency of the proposed framework in terms of its improved classification accuracy.
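Double exponential smoothing, the forecasting core of the TDA step, maintains a level and a trend; a one-step-ahead forecast can then stand in for a missing observation. A minimal sketch (the paper's "improved" variant is not specified here, so this is plain Holt smoothing with assumed parameters and hypothetical lab values):

```python
# Sketch: double (Holt) exponential smoothing with simple missing-value
# imputation: a gap is filled with the current one-step-ahead forecast.
def holt_impute(series, alpha=0.5, beta=0.3):
    level, trend = series[0], 0.0
    out = [series[0]]
    for y in series[1:]:
        forecast = level + trend
        if y is None:            # missing observation: impute the forecast
            y = forecast
        new_level = alpha * y + (1 - alpha) * forecast
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
        out.append(y)
    return out

clinical = [7.1, 7.4, None, 7.9, 8.3, None, 8.8]  # hypothetical lab values
print(holt_impute(clinical))
```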
NASA Astrophysics Data System (ADS)
Žáček, K.
The only way to make an excessively complex velocity model suitable for application of ray-based methods, such as the Gaussian beam or Gaussian packet methods, is to smooth it. We have smoothed the Marmousi model by choosing a coarser grid and by minimizing the second spatial derivatives of the slowness. This was done by minimizing the relevant Sobolev norm of slowness. We show that minimizing the relevant Sobolev norm of slowness is a suitable technique for preparing optimum models for asymptotic ray theory methods. However, the price we pay for a model suitable for ray tracing is an increase in the difference between the smoothed and original models. Similarly, the estimated error in the travel time also increases due to the difference between the models. In smoothing the Marmousi model, we have found the estimated error of travel times to be on the verge of acceptability. Due to the low frequencies in the wavefield of the original Marmousi data set, we have found the Gaussian beams and Gaussian packets to be on the verge of applicability even in models sufficiently smoothed for ray tracing.
A model that integrates eye velocity commands to keep track of smooth eye displacements.
Blohm, Gunnar; Optican, Lance M; Lefèvre, Philippe
2006-08-01
Past results have reported conflicting findings on the oculomotor system's ability to keep track of smooth eye movements in darkness. Whereas some results indicate that saccades cannot compensate for smooth eye displacements, others report that memory-guided saccades during smooth pursuit are spatially correct. Recently, it was shown that the amount of time before the saccade made a difference: short-latency saccades were retinotopically coded, whereas long-latency saccades were spatially coded. Here, we propose a model of the saccadic system that can explain the available experimental data. The novel part of this model consists of a delayed integration of efferent smooth eye velocity commands. Two alternative physiologically realistic neural mechanisms for this integration stage are proposed. Model simulations accurately reproduced prior findings. Thus, this model reconciles the earlier contradictory reports from the literature about compensation for smooth eye movements before saccades because it involves a slow integration process.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is very suitable for representing the lifetimes of many cases and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate. The exponential distribution is the lowest-rank member of the Weibull family of distributions. In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. The model's likelihood function is described, followed by the posterior function and the point, interval, hazard function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
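Under independent exponential competing risks with a noninformative prior, a textbook result is that the posterior of each cause-specific rate is Gamma(d_j, T), where d_j is the number of failures from cause j and T the total time on test; net and crude probabilities then follow by simulation. A minimal sketch under those assumptions (hypothetical counts, not the paper's exact formulation):

```python
# Sketch: Bayesian exponential competing risks with a noninformative prior.
# With d_j failures from cause j and total time on test T, the posterior of
# each cause-specific rate is Gamma(shape=d_j, rate=T).
import numpy as np

rng = np.random.default_rng(4)
d = np.array([14, 6])      # hypothetical failure counts for causes 1 and 2
T = 480.0                  # hypothetical total time on test

post = rng.gamma(shape=d[:, None], scale=1.0 / T, size=(2, 100_000))

lam_total = post.sum(axis=0)
print("posterior mean rates:", post.mean(axis=1))
# Crude probability that cause 1 fails first in the presence of cause 2:
# lambda_1 / (lambda_1 + lambda_2) under independence.
print("P(cause 1 first):", (post[0] / lam_total).mean())
# Net reliability at t=10 if only cause 1 were present: exp(-lambda_1 * t)
print("net R_1(10):", np.exp(-post[0] * 10).mean())
```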
Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs
NASA Astrophysics Data System (ADS)
Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.
2018-04-01
Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show a central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound, like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field, and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, and component counts of random cubical complexes, while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.
NASA Astrophysics Data System (ADS)
Chestler, Shelley
This dissertation seeks to further understand the LFE source process, the role LFEs play in generating slow slip, and the utility of using LFEs to examine plate interface structure. The work involves the creation and investigation of a 2-year-long catalog of low-frequency earthquakes beneath the Olympic Peninsula, Washington. In the first chapter, we calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, WA. LFE moments range from 1.4×10^10 to 1.9×10^12 N-m (MW = 0.7-2.1). While regular earthquakes follow a power-law moment-frequency distribution with a b-value near 1 (the number of events increases by a factor of 10 for each unit increase in MW), we find that for large LFEs the b-value is ~6, while for small LFEs it is <1. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0×10^11 N-m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models where LFE moment is limited by (1) the amount of slip or (2) slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moment and geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, sub-patch diameters, stress drops, and slip rates for LFEs during ETS events. We allow for LFEs to rupture smaller sub-patches within the LFE family patch. Models with 1-10 sub-patches produce slips of 0.1-1 mm, sub-patch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one sub-patch is often assumed, we believe 3-10 sub-patches are more likely. In the second chapter, using high-resolution relative low-frequency earthquake (LFE) locations, we calculate the patch areas (AP) of LFE families. During Episodic Tremor and Slip (ETS) events, we define AT as the area that slips during LFEs and ST as the total amount of summed LFE slip. Using observed and calculated values for AP, AT and ST, we evaluate two end-member models for LFE slip within an LFE family patch (models 2 and 3 from chapter 1). In the ductile matrix model (model 3), LFEs produce 100% of the observed ETS slip (SETS) in distinct sub-patches (i.e., AT << AP). In the connected patch model (model 2), AT = AP, but ST << SETS. LFEs cluster into 45 LFE families. Spatial gaps (~10-20 km) between LFE family clusters and smaller gaps within LFE family clusters serve as evidence that LFE slip is heterogeneous on multiple spatial scales. We find that LFE slip only accounts for ~0.2% of the slip within the slow slip zone. There are downdip trends in the characteristic (mean) moment and in the number of LFEs during both ETS events only and the entire ETS cycle (Mc,ETS and NT,ETS, and Mc,all and NT,all, respectively). During ETS, Mc decreases with downdip distance but NT does not change. Over the entire ETS cycle, Mc decreases with downdip distance, but NT increases. These observations indicate that downdip LFE slip occurs through a larger number (800-1200) of small LFEs, while updip LFE slip occurs primarily during ETS events through a smaller number (200-600) of larger LFEs. This could indicate that the plate interface is stronger and has a higher stress threshold updip.
In the third chapter, we use high-precision, relative low-frequency earthquake (LFE) locations for LFEs beneath the Olympic Peninsula, WA to constrain the depth, geometry, and thickness of the plate interface. LFE depths correspond most closely with the McCrory et al. (2012) plate model, but vary from that smooth model along strike. The latter observation indicates that the actual plate interface is notably rougher and more complex than smooth plate models. Our LFEs lie directly above the low-velocity zone (LVZ) and approximately 5 km above intraslab earthquakes. This supports the proposal of Bostock (2013) that the LVZ comprises the upper oceanic crust and that fluids are responsible for the velocity contrast across the LVZ and likely play a large role in generating slow slip and LFEs. Within each of our LFE families, LFEs group into tight clusters around the family centroid. The width of these clusters in the depth direction, which is an indicator of the thickness of slow-slip deformation on the plate interface, is 130 to 340 meters.
Dynamic stability of passive dynamic walking on an irregular surface.
Su, Jimmy Li-Shin; Dingwell, Jonathan B
2007-12-01
Falls that occur during walking are a significant health problem. One of the greatest impediments to solving this problem is that there is no single obviously "correct" way to quantify walking stability. While many people use variability as a proxy for stability, measures of variability do not quantify how the locomotor system responds to perturbations. The purpose of this study was to determine how changes in walking surface variability affect changes in both locomotor variability and stability. We modified an irreducibly simple model of walking to apply random perturbations that simulated walking over an irregular surface. Because the model's global basin of attraction remained fixed, increasing the amplitude of the applied perturbations directly increased the risk of falling in the model. We generated ten simulations of 300 consecutive strides of walking at each of six perturbation amplitudes ranging from zero (i.e., a smooth continuous surface) up to the maximum level the model could tolerate without falling over. Orbital stability defines how a system responds to small (i.e., "local") perturbations from one cycle to the next and was quantified by calculating the maximum Floquet multipliers for the model. Local stability defines how a system responds to similar perturbations in real time and was quantified by calculating short-term and long-term local exponential rates of divergence for the model. As perturbation amplitudes increased, no changes were seen in orbital stability (r² = 2.43%; p = 0.280) or long-term local instability (r² = 1.0%; p = 0.441). These measures essentially reflected the fact that the model never actually "fell" during any of our simulations. Conversely, the variability of the walker's kinematics increased exponentially (r² ≥ 99.6%; p < 0.001) and short-term local instability increased linearly (r² = 88.1%; p < 0.001). These measures thus predicted the increased risk of falling exhibited by the model. For all simulated conditions, the walker remained orbitally stable while exhibiting substantial local instability. This was because very small initial perturbations diverged away from the limit cycle, while larger initial perturbations converged toward the limit cycle. These results provide insight into how these different proposed measures of walking stability are related to each other and to the risk of falling.
Mathematical Modeling of Extinction of Inhomogeneous Populations
Karev, G.P.; Kareva, I.
2016-01-01
Mathematical models of population extinction have a variety of applications in such areas as ecology, paleontology and conservation biology. Here we propose and investigate two types of sub-exponential models of population extinction. Unlike the more traditional exponential models, the life duration of sub-exponential models is finite. In the first model, the population is assumed to be composed of clones that are independent from each other. In the second model, we assume that the size of the population as a whole decreases according to the sub-exponential equation. We then investigate the “unobserved heterogeneity”, i.e. the underlying inhomogeneous population model, and calculate the distribution of frequencies of clones for both models. We show that the dynamics of frequencies in the first model is governed by the principle of minimum of Tsallis information loss. In the second model, the notion of “internal population time” is proposed; with respect to the internal time, the dynamics of frequencies is governed by the principle of minimum of Shannon information loss. The results of this analysis show that the principle of minimum of information loss is the underlying law for the evolution of a broad class of models of population extinction. Finally, we propose a possible application of this modeling framework to mechanisms underlying time perception. PMID:27090117
USDA-ARS?s Scientific Manuscript database
A new mechanistic growth model was developed to describe microbial growth under isothermal conditions. The new mathematical model was derived from the basic observation of bacterial growth that may include lag, exponential, and stationary phases. With this model, the lag phase duration and exponen...
Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.
2011-01-01
We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095
NASA Astrophysics Data System (ADS)
Magin, Richard L.; Li, Weiguo; Pilar Velasco, M.; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.
2011-06-01
We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena ( T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter ( α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for micro-structural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues.
Teaching the Verhulst Model: A Teaching Experiment in Covariational Reasoning and Exponential Growth
ERIC Educational Resources Information Center
Castillo-Garsow, Carlos
2010-01-01
Both Thompson and the duo of Confrey and Smith describe how students might be taught to build "ways of thinking" about exponential behavior by coordinating the covariation of two changing quantities, however, these authors build exponential behavior from different meanings of covariation. Confrey and Smith advocate beginning with discrete additive…
Review of "Going Exponential: Growing the Charter School Sector's Best"
ERIC Educational Resources Information Center
Garcia, David
2011-01-01
This Progressive Policy Institute report argues that charter schools should be expanded rapidly and exponentially. Citing exponential growth organizations, such as Starbucks and Apple, as well as the rapid growth of molds, viruses and cancers, the report advocates for similar growth models for charter schools. However, there is no explanation of…
A comparison between block and smooth modeling in finite element simulations of tDCS
Indahlastari, Aprinda; Sadleir, Rosalind J.
2018-01-01
Current density distributions in five selected structures, namely, anterior superior temporal gyrus (ASTG), hippocampus (HIP), inferior frontal gyrus (IFG), occipital lobe (OCC) and pre-central gyrus (PRC), were investigated as part of a comparison between electrostatic finite element models constructed directly from MRI-resolution data (block models) and smoothed tetrahedral finite element models (smooth models). Three electrode configurations were applied, mimicking different tDCS therapies. Smooth model simulations were found to require three times longer to complete than block model simulations. The percentage differences between mean and median current densities of each model type in arbitrarily chosen brain structures ranged from −33.33% to 48.08%. No clear relationship was found between structure volumes and current density differences between the two model types. Tissue regions near the electrodes demonstrated the smallest percentage differences between block and smooth models. Therefore, block models may be adequate to predict current density values in cortical regions presumed targeted by tDCS. PMID:26737023
Rimaityte, Ingrida; Ruzgas, Tomas; Denafas, Gintaras; Racys, Viktoras; Martuzevicius, Dainius
2012-01-01
Forecasting of municipal solid waste (MSW) generation in developing countries is often a challenging task due to the lack of data and the difficulty of selecting a suitable forecasting method. This article aimed to select and evaluate several methods for MSW forecasting in a medium-sized Eastern European city (Kaunas, Lithuania) with a rapidly developing economy, with respect to affluence-related and seasonal impacts. MSW generation was forecast with respect to the economic activity of the city (regression modelling) and using time-series analysis. The modelling based on socio-economic indicators (regression implemented in the LCA-IWM model) showed particular sensitivity (deviation from actual data in the range of 2.2 to 20.6%) to external factors, such as the synergetic effects of affluence parameters or changes in the MSW collection system. For the time-series analysis, the combination of autoregressive integrated moving average (ARIMA) and seasonal exponential smoothing (SES) techniques was found to be the most accurate (mean absolute percentage error of 6.5). Time-series analysis was very valuable for forecasting the weekly variation of waste generation data (r² > 0.87), but the forecast yearly increase should be verified against the data obtained by regression modelling. The methods and findings of this study may assist experts, decision-makers and scientists performing forecasts of MSW generation, especially in developing countries.
SMERGE: A multi-decadal root-zone soil moisture product for CONUS
NASA Astrophysics Data System (ADS)
Crow, W. T.; Dong, J.; Tobin, K. J.; Torres, R.
2017-12-01
Multi-decadal root-zone soil moisture products are of value for a range of water resource and climate applications. The NASA-funded root-zone soil moisture merging project (SMERGE) seeks to develop such products through the optimal merging of land surface model predictions with surface soil moisture retrievals acquired from multi-sensor remote sensing products. This presentation will describe the creation and validation of a daily, multi-decadal (1979-2015), vertically-integrated (both surface to 40 cm and surface to 100 cm), 0.125-degree root-zone product over the contiguous United States (CONUS). The modeling backbone of the system is based on hourly root-zone soil moisture simulations generated by the Noah model (v3.2) operating within the North American Land Data Assimilation System (NLDAS-2). Remotely-sensed surface soil moisture retrievals are taken from the multi-sensor European Space Agency Climate Change Initiative soil moisture data set (ESA CCI SM). In particular, the talk will detail: 1) the exponential smoothing approach used to convert surface ESA CCI SM retrievals into root-zone soil moisture estimates, 2) the averaging technique applied to merge (temporally sporadic) remotely sensed estimates with (continuous) NLDAS-2 land surface model estimates of root-zone soil moisture into the unified SMERGE product, and 3) the validation of the SMERGE product using long-term, ground-based soil moisture datasets available within CONUS.
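The exponential smoothing step that propagates surface retrievals to the root zone is commonly implemented as a recursive exponential filter (in the style of Wagner et al.) with a characteristic time scale T. A minimal sketch under that assumption (the exact SMERGE formulation may differ):

```python
# Sketch: recursive exponential filter mapping sporadic surface soil
# moisture retrievals to a root-zone (soil water index) estimate.
import numpy as np

def exp_filter(times, surface_sm, T=20.0):
    """times in days (irregular spacing ok); T = characteristic time (days)."""
    swi = np.empty(len(times))
    swi[0], gain = surface_sm[0], 1.0
    for i in range(1, len(times)):
        # The gain shrinks with dense sampling and recovers after long gaps
        gain = gain / (gain + np.exp(-(times[i] - times[i - 1]) / T))
        swi[i] = swi[i - 1] + gain * (surface_sm[i] - swi[i - 1])
    return swi

t = np.array([0.0, 1.0, 3.0, 4.0, 8.0, 9.0, 13.0])          # retrieval days
sm = np.array([0.28, 0.31, 0.22, 0.20, 0.35, 0.33, 0.24])   # surface values
print(exp_filter(t, sm))
```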
Image segmentation on adaptive edge-preserving smoothing
NASA Astrophysics Data System (ADS)
He, Kun; Wang, Dan; Zheng, Xiuqing
2016-09-01
Nowadays, typical active contour models are widely applied in image segmentation. However, they perform badly on real images with inhomogeneous subregions. To overcome this drawback, this paper proposes an edge-preserving smoothing image segmentation algorithm. First, this paper analyzes the edge-preserving smoothing conditions for image segmentation and constructs an edge-preserving smoothing model inspired by total variation. The proposed model has the ability to smooth inhomogeneous subregions and preserve edges. Then, a clustering algorithm, which reasonably trades off edge preservation and subregion smoothing according to the local information, is employed to learn the edge-preserving parameter adaptively. Finally, according to the confidence level of the segmentation subregions, this paper constructs a smoothing convergence condition to avoid oversmoothing. Experiments indicate that the proposed algorithm has superior performance in precision, recall, and F-measure compared with other segmentation algorithms, and that it is insensitive to noise and inhomogeneous regions.
McKellar, Robin C
2008-01-15
Developing accurate mathematical models to describe the pre-exponential lag phase in food-borne pathogens presents a considerable challenge to food microbiologists. While the growth rate is influenced by current environmental conditions, the lag phase is affected in addition by the history of the inoculum. A deeper understanding of physiological changes taking place during the lag phase would improve accuracy of models, and in earlier studies a strain of Pseudomonas fluorescens containing the Tn7-luxCDABE gene cassette regulated by the rRNA promoter rrnB P2 was used to measure the influence of starvation, growth temperature and sub-lethal heating on promoter expression and subsequent growth. The present study expands the models developed earlier to include a model which describes the change from exponential to linear increase in promoter expression with time when the exponential phase of growth commences. A two-phase linear model with Poisson weighting was used to estimate the lag (LPDLin) and the rate (RLin) for this linear increase in bioluminescence. The Spearman rank correlation coefficient (r=0.830) between the LPDLin and the growth lag phase (LPDOD) was extremely significant (P
The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds
NASA Astrophysics Data System (ADS)
Li, Zhi; Brissette, Fancois; Chen, Jie
2013-04-01
Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually represented by a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, due to its simplicity and good performance. However, various probability distributions have been proposed to simulate precipitation amounts, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amounts. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution and extreme values, are used to quantify the performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function, and the three-parameter precipitation models outperform the other models, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of using functions with more parameters is not nearly as obvious there, the mixed exponential distribution nonetheless appears as the best candidate for hydrological modeling. The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
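The direct part of such an evaluation (fit each candidate distribution to wet-day amounts and compare the fits) can be sketched with SciPy; shown here for the exponential, gamma, and Weibull candidates only, since the mixed exponential and hybrid exponential/Pareto models require custom likelihoods. Data are synthetic.

```python
# Sketch: fit candidate distributions to wet-day precipitation amounts and
# compare them by AIC (exponential, gamma, Weibull shown; the three-parameter
# mixed exponential would need a custom likelihood, e.g. fitted via EM).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
amounts = rng.gamma(0.7, 8.0, 2000)  # synthetic wet-day amounts (mm)

candidates = {
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(amounts, floc=0)           # fix the location at zero
    ll = np.sum(dist.logpdf(amounts, *params))
    k = len(params) - 1                          # loc was fixed, not estimated
    print(f"{name:12s} AIC = {2 * k - 2 * ll:.1f}")
```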
NASA Astrophysics Data System (ADS)
Lengline, O.; Marsan, D.; Got, J.; Pinel, V.
2007-12-01
The evolution of seismicity at three basaltic volcanoes (Kilauea, Mauna Loa and Piton de la Fournaise) is analysed during phases of magma accumulation. We show that the VT seismicity during these time periods is characterized by an exponential increase at long time scales (years). Such an exponential acceleration can be explained by a model of seismicity forced by the replenishment of a magmatic reservoir. The increase in stress in the edifice caused by this replenishment is modeled. This stress history leads to a cumulative number of damage events, i.e. VT earthquakes, following the same exponential increase as found for the seismicity. A long-term seismicity precursor is thus detected at basaltic volcanoes. Although this precursory signal is not able to predict the onset times of future eruptions (as no diverging point is present in the model), it may help mitigate volcanic hazards.
Multiserver Queueing Model subject to Single Exponential Vacation
NASA Astrophysics Data System (ADS)
Vijayashree, K. V.; Janani, B.
2018-04-01
A multi-server queueing model subject to a single exponential vacation is considered. Arrivals join the queue according to a Poisson process and service takes place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return after a fixed interval of time. The servers then start providing service if there are waiting customers; otherwise they wait for the next busy period to begin. The vacation times are also assumed to be exponentially distributed. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the servers are obtained explicitly. Also, numerical illustrations are added to visualize the effect of various parameters.
Vadeby, Anna; Forsman, Åsa
2017-06-01
This study investigated the effect of applying two aggregated models (the Power model and the Exponential model) to individual vehicle speeds instead of mean speeds. This is of particular interest when the measure introduced affects different parts of the speed distribution differently. The aim was to examine how the estimated overall risk was affected when the models are assumed to be valid at the individual vehicle level. Speed data from two applications of speed measurements were used in the study: an evaluation of movable speed cameras and a national evaluation of new speed limits in Sweden. The results showed that, for injury accidents, applying the Power model at the individual vehicle level made essentially no difference compared with the aggregated level. However, for fatalities the difference was greater, especially for roads with new cameras, where those driving fastest reduced their speed the most. For the case with new speed limits, the individual approach estimated a somewhat smaller effect, reflecting that changes in the 15th percentile (P15) were somewhat larger than changes in P85 in this case. For the Exponential model there was also a clear, although small, difference between applying the model to mean speed changes and to individual vehicle speed changes when speed cameras were used. This applied both for injury accidents and fatalities. There were also larger effects for the Exponential model than for the Power model, especially for injury accidents. In conclusion, applying the Power or Exponential model to individual vehicle speeds is an alternative that provides reasonable results in relation to the original Power and Exponential models, but more research is needed to clarify the shape of the individual risk curve. It is not surprising that the impact on severe traffic crashes was larger in situations where those driving fastest reduced their speed the most. Further investigations on the use of the Power and/or the Exponential model at the individual vehicle level would require more data on the individual level from a range of international studies.
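For reference, the Power model scales crash counts by (v_after/v_before)^p while the Exponential model scales them by exp(β·(v_after − v_before)); applied per vehicle, the predicted relative risks are averaged over the speed distribution. A minimal sketch with assumed parameters (p = 4, as Nilsson proposed for fatal accidents, and an illustrative β; the study's calibrated values are not reproduced here):

```python
# Sketch: Power model vs Exponential model, applied to the mean speed and
# to individual vehicle speeds. Assumed parameters: power exponent p = 4
# (fatal accidents) and exponential coefficient beta = 0.08 per km/h.
import numpy as np

rng = np.random.default_rng(6)
v_before = rng.normal(90, 10, 10_000)                     # speeds (km/h)
v_after = v_before - 0.10 * (v_before - 80).clip(min=0)   # fastest slow most

p, beta = 4.0, 0.08

# Aggregated: apply the models to the change in mean speed
agg_power = (v_after.mean() / v_before.mean()) ** p
agg_expon = np.exp(beta * (v_after.mean() - v_before.mean()))

# Individual: apply per vehicle, then average the relative risks
ind_power = np.mean((v_after / v_before) ** p)
ind_expon = np.mean(np.exp(beta * (v_after - v_before)))

print(f"Power:       aggregated {agg_power:.3f} vs individual {ind_power:.3f}")
print(f"Exponential: aggregated {agg_expon:.3f} vs individual {ind_expon:.3f}")
```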
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-10-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
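The basic continuous-time check itself (rescale ISIs by the integrated intensity, then test against the unit exponential with a KS test) is short, and the distortion introduced by time binning is easy to reproduce. A minimal sketch, assuming a known constant-rate model for clarity rather than a fitted GLM:

```python
# Sketch: time-rescaling goodness-of-fit check. Under the (here known)
# model intensity, rescaled ISIs should be unit-exponential; a KS test
# against expon(1) checks this. Discrete binning distorts the test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
rate = 30.0                               # true constant firing rate (Hz)
isis = rng.exponential(1 / rate, 2000)    # simulated interspike intervals

# Continuous time: rescaled ISI = integral of the intensity over the ISI
tau = rate * isis
print("continuous: ", stats.kstest(tau, "expon"))  # should NOT reject

# Discretize spike times into 5 ms bins, as discrete-time models do
binned = np.ceil(isis / 0.005) * 0.005
tau_d = rate * binned
print("discretized:", stats.kstest(tau_d, "expon"))  # often rejects
```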
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barker, Andrew T.; Gelever, Stephan A.; Lee, Chak S.
2017-12-12
smoothG is a collection of parallel C++ classes/functions that algebraically constructs reduced models of different resolutions from a given high-fidelity graph model. In addition, smoothG provides efficient linear solvers for the reduced models. Beyond pure graph problems, the software finds application in subsurface flow and power grid simulations, in which graph Laplacians arise.
Automatic selection of arterial input function using tri-exponential models
NASA Astrophysics Data System (ADS)
Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David
2009-02-01
Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted for every candidate AIF using the Levenberg-Marquardt method, and the best fitted AIF is selected. Our method has been applied in DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rates in artery segmentation for 19 cases are 89.6%+/-15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R2=0.946, P(T<=t)=0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
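A minimal sketch of the model-fitting stage, assuming a generic tri-exponential AIF shape and synthetic data (the paper's actual parametrization and selection criteria are not reproduced here); scipy's curve_fit defaults to Levenberg-Marquardt for unbounded problems, matching the fitting method named above.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, m1, a2, m2, a3, m3):
    """Generic tri-exponential AIF shape; the parameter layout is illustrative."""
    return a1 * np.exp(-m1 * t) + a2 * np.exp(-m2 * t) + a3 * np.exp(-m3 * t)

t = np.linspace(0.0, 5.0, 120)                      # minutes
truth = tri_exp(t, 6.0, 3.0, 1.0, 0.2, 0.3, 0.01)   # hypothetical candidate AIF
y = truth + np.random.default_rng(1).normal(0.0, 0.05, t.size)

# Levenberg-Marquardt fit; rank candidate AIFs by residual sum of squares.
popt, _ = curve_fit(tri_exp, t, y, p0=[5, 2, 1, 0.1, 0.3, 0.01], maxfev=10000)
rss = float(np.sum((y - tri_exp(t, *popt)) ** 2))
print("RSS:", round(rss, 4))
```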
Comparison of kinetic model for biogas production from corn cob
NASA Astrophysics Data System (ADS)
Shitophyta, L. M.; Maryudi
2018-04-01
Energy demand increases every day, while energy sources, especially fossil fuels, are increasingly depleted. One solution to this depletion is to provide renewable energies such as biogas. Biogas can be generated from corn cob and food waste. In this study, biogas production was carried out by solid-state anaerobic digestion. The steps of biogas production were the preparation of feedstock, the solid-state anaerobic digestion, and the measurement of biogas volume. This study was conducted at TS contents of 20%, 22%, and 24%. The aim of this research was to compare kinetic models of biogas production from corn cob with food waste as a co-digestion substrate, using the linear, exponential, and first-order kinetic models. The results showed that the exponential equation had a better correlation than the linear equation on the ascending portion of the biogas production curve. Conversely, the linear equation had a better correlation than the exponential equation on the descending portion. The correlation values for the first-order kinetic model were the smallest of the three models.
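A hedged illustration of the model comparison on synthetic yield data; the paper's exact equations and measured volumes are not given in the abstract, so the forms below are the generic linear and exponential fits, scored by R².

```python
import numpy as np

rng = np.random.default_rng(2)
day = np.arange(1.0, 15.0)
vol = 5.0 * (1.0 - np.exp(-0.25 * day)) + rng.normal(0.0, 0.1, day.size)

def r2(y, yhat):
    """Coefficient of determination for a fitted curve."""
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# Linear model V = a + b*t (ordinary least squares).
b, a = np.polyfit(day, vol, 1)
print("linear      R^2:", round(r2(vol, a + b * day), 4))

# Exponential model V = A*exp(k*t), fitted as a straight line in log space.
k, ln_a = np.polyfit(day, np.log(vol), 1)
print("exponential R^2:", round(r2(vol, np.exp(ln_a + k * day)), 4))
```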
Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Likewise, an accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
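The following sketch shows the basic construction: two double-exponential pulses summed in parallel, with the collected charge recovered by integrating the waveform. All time constants and amplitudes are hypothetical placeholders, not extracted parameters from the paper.

```python
import numpy as np

def double_exp(t, i_peak, tau_rise, tau_fall):
    """Double-exponential current pulse; peak-normalization is omitted here."""
    return i_peak * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

t = np.linspace(0.0, 2e-9, 2001)  # 0-2 ns
# Two sources in parallel: a fast prompt component plus a slower diffusion tail.
i_set = (double_exp(t, 1.0e-3, 5e-12, 150e-12)
         + double_exp(t, 0.25e-3, 50e-12, 600e-12))
print("collected charge (fC):", round(np.trapz(i_set, t) * 1e15, 2))
```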
NASA Astrophysics Data System (ADS)
Ma, Xiao; Zheng, Wei-Fan; Jiang, Bao-Shan; Zhang, Ji-Ye
2016-10-01
With the development of traffic systems, some issues such as traffic jams become more and more serious. Efficient traffic flow theory is needed to guide the overall controlling, organizing and management of traffic systems. On the basis of the cellular automata model and the traffic flow model with look-ahead potential, a new cellular automata traffic flow model with negative exponential weighted look-ahead potential is presented in this paper. By introducing the negative exponential weighting coefficient into the look-ahead potential and endowing the potential of vehicles closer to the driver with a greater coefficient, the modeling process is more suitable for the driver’s random decision-making process which is based on the traffic environment that the driver is facing. The fundamental diagrams for different weighting parameters are obtained by using numerical simulations which show that the negative exponential weighting coefficient has an obvious effect on high density traffic flux. The complex high density non-linear traffic behavior is also reproduced by numerical simulations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11572264, 11172247, 11402214, and 61373009).
Hidden complexity of free energy surfaces for peptide (protein) folding.
Krivov, Sergei V; Karplus, Martin
2004-10-12
An understanding of the thermodynamics and kinetics of protein folding requires a knowledge of the free energy surface governing the motion of the polypeptide chain. Because of the many degrees of freedom involved, surfaces projected on only one or two progress variables are generally used in descriptions of the folding reaction. Such projections result in relatively smooth surfaces, but they could mask the complexity of the unprojected surface. Here we introduce an approach to determine the actual (unprojected) free energy surface and apply it to the second beta-hairpin of protein G, which has been used as a model system for protein folding. The surface is represented by a disconnectivity graph calculated from a long equilibrium folding-unfolding trajectory. The denatured state is found to have multiple low free energy basins. Nevertheless, the peptide shows exponential kinetics in folding to the native basin. Projected surfaces obtained from the present analysis have a simple form in agreement with other studies of the beta-hairpin. The hidden complexity found for the beta-hairpin surface suggests that the standard funnel picture of protein folding should be revisited.
Prediction of a service demand using combined forecasting approach
NASA Astrophysics Data System (ADS)
Zhou, Ling
2017-08-01
Forecasting facilitates cutting down operational and management costs while ensuring the service level for a logistics service provider. Our case study investigates how to forecast short-term logistics demand for an LTL (less-than-truckload) carrier. A combined approach depends on several forecasting methods simultaneously, instead of a single method. It can offset the weakness of one forecasting method with the strength of another, which can improve prediction accuracy. The main issues in combined forecast modeling are how to select the methods to combine and how to determine the weight coefficients among them. The principles of method selection are that each method should be applicable to the forecasting problem itself, and that the methods should differ in their categorical features as much as possible. Based on these principles, exponential smoothing, ARIMA and a neural network are chosen to form the combined approach. A least squares technique is then employed to determine the optimal weight coefficients among the forecasting methods. Simulation results show the advantage of the combined approach over the three single methods. The work done in the paper helps managers select a prediction method in practice.
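A compact sketch of the combination step: given in-sample forecasts from the individual methods (represented here by synthetic stand-ins rather than real exponential smoothing, ARIMA and neural network output), the weights solve an ordinary least squares problem.

```python
import numpy as np

rng = np.random.default_rng(3)
y = 100.0 + np.cumsum(rng.normal(0.0, 2.0, 200))   # observed daily demand

# Stand-ins for the three methods' in-sample forecasts (ES, ARIMA, NN).
F = np.column_stack([y + rng.normal(0.0, s, y.size) for s in (3.0, 2.0, 4.0)])

# Least-squares combination: choose w to minimize ||y - F w||^2.
w, *_ = np.linalg.lstsq(F, y, rcond=None)
combined = F @ w
rmse = float(np.sqrt(np.mean((y - combined) ** 2)))
print("weights:", np.round(w, 3), " RMSE:", round(rmse, 3))
```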
Policy Effects in Hyperbolic vs. Exponential Models of Consumption and Retirement
Gustman, Alan L.; Steinmeier, Thomas L.
2012-01-01
This paper constructs a structural retirement model with hyperbolic preferences and uses it to estimate the effect of several potential Social Security policy changes. Estimated effects of policies are compared using two models, one with hyperbolic preferences and one with standard exponential preferences. Sophisticated hyperbolic discounters may accumulate substantial amounts of wealth for retirement. We find it is frequently difficult to distinguish empirically between models with the two types of preferences on the basis of asset accumulation paths or consumption paths around the period of retirement. Simulations suggest that, despite the much higher initial time preference rate, individuals with hyperbolic preferences may actually value a real annuity more than individuals with exponential preferences who have accumulated roughly equal amounts of assets. This appears to be especially true for individuals with relatively high time preference rates or who have low assets for whatever reason. This affects the tradeoff between current benefits and future benefits on which many of the retirement incentives of the Social Security system rest. Simulations involving increasing the early entitlement age and increasing the delayed retirement credit do not show a great deal of difference whether exponential or hyperbolic preferences are used, but simulations for eliminating the earnings test show a non-trivially greater effect when exponential preferences are used. PMID:22711946
NASA Astrophysics Data System (ADS)
Ernazarov, K. K.
2017-12-01
We consider an (m + 2)-dimensional Einstein-Gauss-Bonnet (EGB) model with a cosmological Λ-term. We restrict the metrics to be diagonal and find, for a certain Λ = Λ(m), a class of cosmological solutions with non-exponential time dependence of the two scale factors of dimensions m > 2 and 1. Any solution from this class describes an accelerated expansion of the m-dimensional subspace and tends asymptotically to an isotropic solution with exponential dependence of the scale factors.
A method for nonlinear exponential regression analysis
NASA Technical Reports Server (NTRS)
Junkin, B. G.
1971-01-01
A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
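The procedure lends itself to a short sketch: obtain nominal estimates from a log-linear fit, then iterate Gauss-Newton corrections derived from the Taylor-series linearization until the correction is negligible. The model y = A·exp(−k·t) and the data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 50)
y = 3.0 * np.exp(-0.4 * t) + rng.normal(0.0, 0.02, t.size)

# Initial nominal estimates from a log-linear fit (valid where y > 0).
k0, ln_a0 = np.polyfit(t, np.log(np.clip(y, 1e-6, None)), 1)
A, k = np.exp(ln_a0), -k0

for _ in range(20):
    f = A * np.exp(-k * t)
    # Jacobian columns: df/dA and df/dk from the Taylor-series linearization.
    J = np.column_stack([np.exp(-k * t), -A * t * np.exp(-k * t)])
    delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)   # correction step
    A, k = A + delta[0], k + delta[1]
    if np.linalg.norm(delta) < 1e-10:    # predetermined convergence criterion
        break
print(f"A = {A:.4f}, k = {k:.4f}")
```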
Effects of turbulent hyporheic mixing on reach-scale solute transport
NASA Astrophysics Data System (ADS)
Roche, K. R.; Li, A.; Packman, A. I.
2017-12-01
Turbulence rapidly mixes solutes and fine particles into coarse-grained streambeds. Both hyporheic exchange rates and spatial variability of hyporheic mixing are known to be controlled by turbulence, but it is unclear how turbulent mixing influences mass transport at the scale of stream reaches. We used a process-based particle-tracking model to simulate local- and reach-scale solute transport for a coarse-bed stream. Two vertical mixing profiles, one with a smooth transition from in-stream to hyporheic transport conditions and a second with enhanced turbulent transport at the sediment-water interface, were fit to steady-state subsurface concentration profiles observed in laboratory experiments. The mixing profile with enhanced interfacial transport better matched the observed concentration profiles and overall mass retention in the streambed. The best-fit mixing profiles were then used to simulate upscaled solute transport in a stream. Enhanced mixing coupled in-stream and hyporheic solute transport, causing solutes exchanged into the shallow subsurface to have travel times similar to the water column. This extended the exponential region of the in-stream solute breakthrough curve, and delayed the onset of the heavy power-law tailing induced by deeper and slower hyporheic porewater velocities. Slopes of observed power-law tails were greater than those predicted from stochastic transport theory, and also changed in time. In addition, rapid hyporheic transport velocities truncated the hyporheic residence time distribution by causing mass to exit the stream reach via subsurface advection, yielding strong exponential tempering in the in-stream breakthrough curves at the timescale of advective hyporheic transport through the reach. These results show that strong turbulent mixing across the sediment-water interface violates the conventional separation of surface and subsurface flows used in current models for solute transport in rivers. Instead, the full distribution of flow and mixing over the surface-subsurface continuum must be explicitly considered to properly interpret solute transport in coarse-bed streams.
Recursive least squares estimation and its application to shallow trench isolation
NASA Astrophysics Data System (ADS)
Wang, Jin; Qin, S. Joe; Bode, Christopher A.; Purdy, Matthew A.
2003-06-01
In recent years, run-to-run (R2R) control technology has received tremendous interest in semiconductor manufacturing. One class of widely used run-to-run controllers is based on the exponentially weighted moving average (EWMA) statistic to estimate process deviations. Using an EWMA filter to smooth the control action on a linear process has been shown to provide good results in a number of applications. However, for a process with severe drifts, the EWMA controller is insufficient even when large weights are used. This problem becomes more severe when there is measurement delay, which is almost inevitable in the semiconductor industry. In order to control drifting processes, a predictor-corrector controller (PCC) and a double-EWMA controller have been developed. Chen and Guo (2001) show that both PCC and the double-EWMA controller are in effect integral-double-integral (I-II) controllers, which are able to control drifting processes. However, since the offset is often within the noise of the process, the second integrator can actually cause jittering. Moreover, tuning the second filter is not as intuitive as tuning a single EWMA filter. In this work, we look at an alternative approach, recursive least squares (RLS), to estimate and control the drifting process. EWMA and double-EWMA are shown to be the least squares estimates for the locally constant mean model and the locally constant linear trend model, respectively. Recursive least squares with an exponential forgetting factor is then applied to a shallow trench isolation etch process to predict the future etch rate. The etch process, which is a critical process in flash memory manufacturing, is known to suffer from significant etch rate drift due to chamber seasoning. In order to handle the metrology delay, we propose a new time-update scheme. RLS with the new time-update method gives very good results: the estimation error variance is smaller than that from EWMA, and the mean square error decreases by more than 10% compared to EWMA.
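A side-by-side sketch of the two estimators on a synthetic drifting series: a single EWMA filter, which lags a persistent drift, and RLS with an exponential forgetting factor on a locally linear trend. The paper's time-update scheme for metrology delay is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
rate = 100.0 - 0.3 * np.arange(200) + rng.normal(0.0, 1.0, 200)  # drifting etch rate

# Single EWMA estimate: lags behind a persistent drift.
lam_ewma, ewma = 0.3, [rate[0]]
for y in rate[1:]:
    ewma.append(lam_ewma * y + (1.0 - lam_ewma) * ewma[-1])

# RLS with exponential forgetting on a locally linear trend y_t = a + b*t.
theta, P, lam = np.zeros(2), np.eye(2) * 1e3, 0.95
rls = []
for t, y in enumerate(rate):
    phi = np.array([1.0, float(t)])
    K = P @ phi / (lam + phi @ P @ phi)       # gain
    theta = theta + K * (y - phi @ theta)     # parameter update
    P = (P - np.outer(K, phi @ P)) / lam      # covariance update with forgetting
    rls.append(float(phi @ theta))

print("EWMA lag (last step):", round(rate[-1] - ewma[-1], 2))
print("RLS  lag (last step):", round(rate[-1] - rls[-1], 2))
```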
Abusam, A; Keesman, K J
2009-01-01
The double exponential settling model is the widely accepted model for wastewater secondary settling tanks. However, this model does not accurately estimate solids concentrations in the settler underflow stream, mainly because sludge compression and consolidation processes are not considered. In activated sludge systems, accurate estimation of the solids in the underflow stream will facilitate the calibration process and can lead to correct estimates of kinetic parameters, particularly those related to biomass growth. Using principles of compaction and consolidation, as in soil mechanics, a dynamic model of the sludge consolidation processes taking place in the secondary settling tanks is developed and incorporated into the commonly used double exponential settling model. The modified double exponential model is calibrated and validated using data obtained from a full-scale wastewater treatment plant. Good agreement between predicted and measured data confirmed the validity of the modified model.
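For reference, the unmodified double-exponential (Takács-type) settling velocity function can be coded directly; the parameter values below are frequently quoted literature defaults, not the calibration from this paper, and the compression/consolidation extension is not included.

```python
import numpy as np

def settling_velocity(X, v0=474.0, v0_max=250.0, rh=5.76e-4, rp=2.86e-3, X_min=12.0):
    """Double-exponential settling velocity (m/d) vs. solids concentration (g/m^3).

    Parameter values are commonly quoted defaults, not this paper's calibration;
    X_min (non-settleable fraction) is treated as fixed here for simplicity.
    """
    v = v0 * (np.exp(-rh * (X - X_min)) - np.exp(-rp * (X - X_min)))
    return np.clip(v, 0.0, v0_max)

X = np.linspace(0.0, 12000.0, 7)
print(np.round(settling_velocity(X), 1))
```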
Evaluation of induced seismicity forecast models in the Induced Seismicity Test Bench
NASA Astrophysics Data System (ADS)
Király, Eszter; Gischig, Valentin; Zechar, Jeremy; Doetsch, Joseph; Karvounis, Dimitrios; Wiemer, Stefan
2016-04-01
Induced earthquakes often accompany fluid injection, and the seismic hazard they pose threatens various underground engineering projects. Models to monitor and control induced seismic hazard with traffic light systems should be probabilistic, forward-looking, and updated as new data arrive. Here, we propose an Induced Seismicity Test Bench to test and rank such models. We apply the test bench to data from the Basel 2006 and Soultz-sous-Forêts 2004 geothermal stimulation projects, and we assess forecasts from two models that incorporate a different mix of physical understanding and stochastic representation of the induced sequences: Shapiro in Space (SiS) and Hydraulics and Seismics (HySei). SiS is based on three pillars: the seismicity rate is computed with the help of the seismogenic index and a simple exponential decay of the seismicity; the magnitude distribution follows the Gutenberg-Richter relation; and seismicity is distributed in space by smoothing the seismicity observed during the learning period with 3D Gaussian kernels. The HySei model describes seismicity triggered by pressure diffusion with irreversible permeability enhancement. Our results show that neither model is fully superior to the other. HySei forecasts the seismicity rate well but is only mediocre at forecasting the spatial distribution. On the other hand, SiS forecasts the spatial distribution well but not the seismicity rate. The shut-in phase is a difficult moment for both models in both reservoirs: the models tend to underpredict the seismicity rate around, and shortly after, shut-in. Ensemble models that combine HySei's rate forecast with SiS's spatial forecast outperform each individual model.
Exponential quantum spreading in a class of kicked rotor systems near high-order resonances
NASA Astrophysics Data System (ADS)
Wang, Hailong; Wang, Jiao; Guarneri, Italo; Casati, Giulio; Gong, Jiangbin
2013-11-01
Long-lasting exponential quantum spreading was recently found in a simple but very rich dynamical model, namely, an on-resonance double-kicked rotor model [J. Wang, I. Guarneri, G. Casati, and J. B. Gong, Phys. Rev. Lett. 107, 234104 (2011)]. The underlying mechanism, unrelated to the chaotic motion in the classical limit but resting on quasi-integrable motion in a pseudoclassical limit, is identified for one special case. By presenting a detailed study of the same model, this work offers a framework to explain long-lasting exponential quantum spreading under much more general conditions. In particular, we adopt the so-called “spinor” representation to treat the kicked-rotor dynamics under high-order resonance conditions and then exploit the Born-Oppenheimer approximation to understand the dynamical evolution. It is found that the existence of a flat band (or an effectively flat band) is one important feature behind why and how the exponential dynamics emerges. It is also found that a quantitative prediction of the exponential spreading rate based on an interesting and simple pseudoclassical map may be inaccurate. In addition to general interest in the question of how exponential behavior in quantum systems may persist for a long time scale, our results should motivate further studies toward a better understanding of high-order resonance behavior in δ-kicked quantum systems.
A decades-long fast-rise-exponential-decay flare in low-luminosity AGN NGC 7213
NASA Astrophysics Data System (ADS)
Yan, Zhen; Xie, Fu-Guo
2018-03-01
We analysed the four-decades-long X-ray light curve of the low-luminosity active galactic nucleus (LLAGN) NGC 7213 and discovered a fast-rise-exponential-decay (FRED) pattern, i.e. the X-ray luminosity increased by a factor of ≈4 within 200 d and then decreased exponentially with an e-folding time of ≈8116 d (≈22.2 yr). For a theoretical understanding of the observations, we examined three variability models proposed in the literature: the thermal-viscous disc instability model, the radiation pressure instability model, and the tidal disruption event (TDE) model. We find that a delayed tidal disruption of a main-sequence star is the most favourable explanation; both the thermal-viscous disc instability model and the radiation pressure instability model fail to explain some key properties observed, and thus we argue that they are unlikely.
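A FRED profile is easy to parametrize for fitting; the shape below (exponential rise followed by exponential decay) is one common choice, with the decay e-folding time taken from the abstract and the remaining numbers illustrative.

```python
import numpy as np

def fred(t, t_peak, f_peak, tau_rise, tau_decay):
    """Illustrative fast-rise-exponential-decay light-curve shape."""
    return np.where(t < t_peak,
                    f_peak * np.exp((t - t_peak) / tau_rise),
                    f_peak * np.exp(-(t - t_peak) / tau_decay))

t = np.linspace(0.0, 12000.0, 500)  # days
# tau_decay = 8116 d from the abstract; peak time, flux and rise time invented.
flux = fred(t, t_peak=200.0, f_peak=4.0, tau_rise=80.0, tau_decay=8116.0)
print("flux ratio peak/start:", round(float(flux.max() / flux[0]), 1))
```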
Bennett, Kevin M; Schmainda, Kathleen M; Bennett, Raoqiong Tong; Rowe, Daniel B; Lu, Hanbing; Hyde, James S
2003-10-01
Experience with diffusion-weighted imaging (DWI) shows that signal attenuation is consistent with a multicompartmental theory of water diffusion in the brain. The source of this so-called nonexponential behavior is a topic of debate, because the cerebral cortex contains considerable microscopic heterogeneity and is therefore difficult to model. To account for this heterogeneity and understand its implications for current models of diffusion, a stretched-exponential function was developed to describe diffusion-related signal decay as a continuous distribution of sources decaying at different rates, with no assumptions made about the number of participating sources. DWI experiments were performed using a spin-echo diffusion-weighted pulse sequence with b-values of 500-6500 s/mm(2) in six rats. Signal attenuation curves were fit to a stretched-exponential function, and 20% of the voxels were better fit to the stretched-exponential model than to a biexponential model, even though the latter model had one more adjustable parameter. Based on the calculated intravoxel heterogeneity measure, the cerebral cortex contains considerable heterogeneity in diffusion. The use of a distributed diffusion coefficient (DDC) is suggested to measure mean intravoxel diffusion rates in the presence of such heterogeneity. Copyright 2003 Wiley-Liss, Inc.
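A sketch of fitting the stretched-exponential model S(b) = S0·exp(−(b·DDC)^α) to synthetic signal decay over a comparable b-value range; α < 1 indicates intravoxel heterogeneity, and α = 1 recovers the mono-exponential model.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(b, s0, ddc, alpha):
    """S(b) = S0 * exp(-(b*DDC)^alpha); alpha -> 1 is monoexponential decay."""
    return s0 * np.exp(-((b * ddc) ** alpha))

b = np.array([0., 500., 1500., 2500., 3500., 4500., 5500., 6500.])  # s/mm^2
sig = stretched_exp(b, 1.0, 0.8e-3, 0.7)
sig += np.random.default_rng(6).normal(0.0, 0.005, b.size)

p, _ = curve_fit(stretched_exp, b, sig, p0=[1.0, 1e-3, 0.9],
                 bounds=([0.0, 1e-5, 0.1], [2.0, 5e-3, 1.0]))
s0, ddc, alpha = p  # alpha < 1 signals intravoxel heterogeneity
print(f"DDC = {ddc:.2e} mm^2/s, alpha = {alpha:.2f}")
```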
Student Support for Research in Hierarchical Control and Trajectory Planning
NASA Technical Reports Server (NTRS)
Martin, Clyde F.
1999-01-01
Generally, classical polynomial splines tend to exhibit unwanted undulations. In this work, we discuss a technique, based on control principles, for eliminating these undulations and increasing the smoothness properties of the spline interpolants. We give a generalization of the classical polynomial splines and show that this generalization is, in fact, a family of splines that covers the broad spectrum of polynomial, trigonometric and exponential splines. A particular element in this family is determined by the appropriate control data. It is shown that this technique is easy to implement. Several numerical and curve-fitting examples are given to illustrate the advantages of this technique over the classical approach. Finally, we discuss the convergence properties of the interpolant.
NASA Astrophysics Data System (ADS)
Wan, Ling; Wang, Tao
2017-06-01
We consider the Navier-Stokes equations for compressible heat-conducting ideal polytropic gases in a bounded annular domain when the viscosity and thermal conductivity coefficients are general smooth functions of temperature. A global-in-time, spherically or cylindrically symmetric, classical solution to the initial boundary value problem is shown to exist uniquely and converge exponentially to the constant state as the time tends to infinity under certain assumptions on the initial data and the adiabatic exponent γ. The initial data can be large if γ is sufficiently close to 1. These results are of Nishida-Smoller type and extend the work (Liu et al. (2014) [16]) restricted to the one-dimensional flows.
NASA Astrophysics Data System (ADS)
Doha, E.; Bhrawy, A.
2006-06-01
It is well known that spectral methods (tau, Galerkin, collocation) have a condition number of O(N^4), where N is the number of retained modes of the polynomial approximations. This paper presents some efficient spectral algorithms, which have a condition number of O(N^2), based on the Jacobi-Galerkin methods for second-order elliptic equations in one and two space variables. The key to the efficiency of these algorithms is to construct appropriate base functions, which lead to systems with specially structured matrices that can be efficiently inverted. The complexities of the algorithms are a small multiple of N^(d+1) operations for a d-dimensional domain with (N-1)^d unknowns, while the convergence rates of the algorithms are exponential for smooth solutions.
The generalized truncated exponential distribution as a model for earthquake magnitudes
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2015-04-01
The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for modelling magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except for the upper bound magnitude, are mixed, the resulting distribution is not a TED. Conversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except for the upper bound magnitude. This weakness is a fundamental problem, as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The generalized truncated exponential distribution (GTED) presented here overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed, and an example with empirical data is presented in the current contribution.
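The TED itself is simple to work with; a sketch of its CDF and inverse-transform sampler is below, with β = b·ln(10) linking it to a Gutenberg-Richter b-value. The GTED proposed in the paper generalizes this form and is not reproduced here.

```python
import numpy as np

def ted_cdf(m, beta, m0, mmax):
    """CDF of the truncated exponential magnitude distribution on [m0, mmax]."""
    return ((1.0 - np.exp(-beta * (m - m0)))
            / (1.0 - np.exp(-beta * (mmax - m0))))

def ted_sample(n, beta, m0, mmax, seed=0):
    """Inverse-transform sampling of magnitudes from the TED."""
    u = np.random.default_rng(seed).random(n)
    den = 1.0 - np.exp(-beta * (mmax - m0))
    return m0 - np.log(1.0 - u * den) / beta

# beta = b * ln(10) for a Gutenberg-Richter b-value of 1.
mags = ted_sample(10000, beta=np.log(10.0), m0=4.0, mmax=8.0)
print("max sampled magnitude:", round(float(mags.max()), 2))
print("P(M <= 6):", round(float(ted_cdf(6.0, np.log(10.0), 4.0, 8.0)), 3))
```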
CMB constraints on β-exponential inflationary models
NASA Astrophysics Data System (ADS)
Santos, M. A.; Benetti, M.; Alcaniz, J. S.; Brito, F. A.; Silva, R.
2018-03-01
We analyze a class of generalized inflationary models proposed in ref. [1], known as β-exponential inflation. We show that this kind of potential can arise in the context of brane cosmology, where the field describing the size of the extra dimension is interpreted as the inflaton. We discuss the observational viability of this class of models in light of the latest Cosmic Microwave Background (CMB) data from the Planck Collaboration through a Bayesian analysis, and impose tight constraints on the model parameters. We find that the CMB data alone weakly prefer the minimal standard model (ΛCDM) over β-exponential inflation. However, when current local measurements of the Hubble parameter, H0, are considered, the β-inflation model is moderately preferred over the ΛCDM cosmology, making the study of this class of inflationary models interesting in the context of the current H0 tension.
Firing patterns in the adaptive exponential integrate-and-fire model.
Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram
2008-11-01
For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. We also report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to recordings of real cortical neurons under step-current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
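A minimal forward-Euler sketch of the two-equation model under step-current stimulation; the parameter defaults are the often-quoted Brette-Gerstner values, and different choices move the model between the firing regimes described above.

```python
import numpy as np

def adex(I=0.5e-9, T=0.4, dt=1e-5,
         C=281e-12, gL=30e-9, EL=-70.6e-3, VT=-50.4e-3, DT=2e-3,
         tauw=144e-3, a=4e-9, b=80.5e-12, Vr=-70.6e-3, Vpeak=0.0):
    """Forward-Euler AdEx neuron; defaults are the often-quoted reference set."""
    V, w, spikes = EL, 0.0, []
    for i in range(int(T / dt)):
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tauw
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:      # spike: reset membrane, increment adaptation current
            V = Vr
            w += b
            spikes.append(i * dt)
    return spikes

print("spikes in 0.4 s:", len(adex()))   # constant step current of 0.5 nA
```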
Rill, Randolph L; Beheshti, Afshin; Van Winkle, David H
2002-08-01
Electrophoretic mobilities of DNA molecules ranging in length from 200 to 48 502 base pairs (bp) were measured in agarose gels with concentrations T = 0.5% to 1.3% at electric fields from E = 0.71 to 5.0 V/cm. This broad data set determines a range of conditions over which the new interpolation equation ν(L) = [β + α(1 + exp(−L/γ))⁻¹]⁻¹ can be used to relate mobility to length with high accuracy. Mobility data were fit with χ² > 0.999 for all gel concentrations and fields ranging from 2.5 to 5 V/cm, and for lower fields at low gel concentrations. Analyses using so-called reptation plots (Rousseau, J., Drouin, G., Slater, G. W., Phys. Rev. Lett. 1997, 79, 1945-1948) indicate that this simple exponential relation is obeyed well when there is a smooth transition from the Ogston sieving regime to the reptation regime with increasing DNA length. Deviations from this equation occur when DNA migration is hindered, apparently by entropic trapping, which is favored at low fields and high gel concentrations in the ranges examined.
Stock, Philipp; Utzig, Thomas; Valtiner, Markus
2015-05-15
By virtue of its importance for the self-organization of biological matter, the hydrophobic force law and the range of hydrophobic interactions (HI) have been debated extensively over the last 40 years. Here, we directly measure and quantify the hydrophobic force-distance law over large temperature and concentration ranges. In particular, we study the HI between molecularly smooth hydrophobic self-assembled monolayers and similarly modified gold-coated AFM tips (radii ~8-50 nm). We present quantitative and direct evidence that the hydrophobic force is both long-ranged and exponential down to distances of about 1-2 nm. Therefore, we introduce a self-consistent radius normalization for atomic force microscopy data. This approach allows quantitative fitting of AFM-based experimental data to the recently proposed Hydra model. With a statistical significance of r² ≥ 0.96, our fitting and data directly reveal an exponential HI decay length of 7.2 ± 1.2 Å that is independent of the salt concentration up to 750 mM. As such, electrostatic screening does not have a significant influence on the HI in electrolyte concentrations ranging from 1 mM to 750 mM. In 1 M solutions the observed instability during approach shifts to longer distances, indicating ion correlation/adsorption effects at high salt concentrations. With increasing temperature the magnitude of the HI decreases monotonically, while the range increases slightly. We compare our results to the large body of available literature, and shed new light on the range and magnitude of hydrophobic interactions at very close distances and over wide temperature and concentration regimes. Copyright © 2015 Elsevier Inc. All rights reserved.
Income Smoothing: Methodology and Models.
1986-05-01
studies have all followed a similar research process (Figure 1). All were ex post studies and included the following steps: 1. A smoothing technique(s) or...researcher methodological decisions used in past empirical studies of income smoothing (design type, smoothing device norm, and income target) are discussed...behavior. The identification of smoothing, and consequently the conclusions to be drawn from smoothing studies, is found to be sensitive to the three
The area of isodensity contours in cosmological models and galaxy surveys
NASA Technical Reports Server (NTRS)
Ryden, Barbara S.; Melott, Adrian L.; Craig, David A.; Gott, J. Richard, III; Weinberg, David H.
1989-01-01
The contour crossing statistic, defined as the mean number of times per unit length that a straight line drawn through the field crosses a given contour, is applied to model density fields and to smoothed samples of galaxies. Models in which the matter is in a bubble structure, in a filamentary net, or in clusters can be distinguished from Gaussian density distributions. The shape of the contour crossing curve in the initially Gaussian fields considered remains Gaussian after gravitational evolution and biasing, as long as the smoothing length is longer than the mass correlation length. With a smoothing length of 5/h Mpc, models containing cosmic strings are indistinguishable from Gaussian distributions. Cosmic explosion models are significantly non-Gaussian, having a bubbly structure. Samples from the CfA survey and the Haynes and Giovanelli (1986) survey are more strongly non-Gaussian at a smoothing length of 6/h Mpc than any of the models examined. At a smoothing length of 12/h Mpc, the Haynes and Giovanelli sample appears Gaussian.
Exact simulation of integrate-and-fire models with exponential currents.
Brette, Romain
2007-10-01
Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. It applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S P; Bhatia, Kunwar S; Wang, Yi-Xiang J; Ahuja, Anil T; King, Ann D
2014-01-01
To investigate non-Gaussian diffusion in head and neck diffusion-weighted imaging (DWI) at 3 Tesla and to compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and the statistical model, in patients with nasopharyngeal carcinoma (NPC). After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm(2). DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models in the primary tumor, metastatic nodes, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Diffusion in NPC exhibited non-Gaussian behavior over the extended b-value range. The non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC, both in magnitude and in histogram distribution. Non-Gaussian diffusivity in head and neck tissues and NPC lesions can be assessed using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond the ADC and holds potential as a complementary tool for NPC characterization.
Wave attenuation in the shallows of San Francisco Bay
Lacy, Jessica R.; MacVean, Lissa J.
2016-01-01
Waves propagating over broad, gently-sloped shallows decrease in height due to frictional dissipation at the bed. We quantified wave-height evolution across 7 km of mudflat in San Pablo Bay (northern San Francisco Bay), an environment where tidal mixing prevents the formation of fluid mud. Wave height was measured along a cross-shore transect (elevation range −2 m to +0.45 m MLLW) in winter 2011 and summer 2012. Wave height decreased more than 50% across the transect. The exponential decay coefficient λ was inversely related to depth squared (λ = 6×10−4 h−2). The physical roughness length scale k_b, estimated from near-bed turbulence measurements, was 3.5×10−3 m in winter and 1.1×10−2 m in summer. The wave friction factor f_w estimated from wave-height data suggests that bottom friction dominates dissipation at high wave Reynolds number Re_w but not at low Re_w. Predictions of near-shore wave height based on offshore wave height and a rough formulation for f_w were quite accurate, with errors about half as great as those based on the smooth formulation for f_w. Researchers often assume that the wave boundary layer is smooth for settings with fine-grained sediments. At this site, use of a smooth f_w results in an underestimate of wave shear stress by a factor of 2 for typical waves and as much as 5 for more energetic waves. It also inadequately captures the effectiveness of the mudflats in protecting the shoreline through wave attenuation.
Infantile Nystagmus and Abnormalities of Conjugate Eye Movements in Down Syndrome.
Weiss, Avery H; Kelly, John P; Phillips, James O
2016-03-01
Subjects with Down syndrome (DS) have an anatomical defect within the cerebellum that may impact downstream oculomotor areas. This study characterized gaze holding and gains for smooth pursuit, saccades, and optokinetic nystagmus (OKN) in DS children with infantile nystagmus (IN). Clinical data of 18 DS children with IN were reviewed retrospectively. Subjects with constant strabismus were excluded to remove any contribution of latent nystagmus. Gaze-holding, horizontal and vertical saccades to target steps, horizontal smooth pursuit of drifting targets, OKN in response to vertically or horizontally-oriented square wave gratings drifted at 15°/s, 30°/s, and 45°/s were recorded using binocular video-oculography. Seven subjects had additional optical coherence tomography imaging. Infantile nystagmus was associated with one or more gaze-holding instabilities (GHI) in each subject. The majority of subjects had a combination of conjugate horizontal jerk with constant or exponential slow-phase velocity, asymmetric or symmetric, and either monocular or binocular pendular nystagmus. Six of seven subjects had mild (Grade 0-1) persistence of retinal layers overlying the fovea, similar to that reported in DS children without nystagmus. All subjects had abnormal gains across one or more stimulus conditions (horizontal smooth pursuit, saccades, or OKN). Saccade velocities followed the main sequence. Down syndrome subjects with IN show a wide range of GHI and abnormalities of conjugate eye movements. We propose that these ocular motor abnormalities result from functional abnormalities of the cerebellum and/or downstream oculomotor circuits, perhaps due to extensive miswiring.
Is a matrix exponential specification suitable for the modeling of spatial correlation structures?
Strauß, Magdalena E.; Mezzetti, Maura; Leorato, Samantha
2018-01-01
This paper investigates the adequacy of the matrix exponential spatial specifications (MESS) as an alternative to the widely used spatial autoregressive models (SAR). To provide as complete a picture as possible, we extend the analysis to all the main spatial models governed by matrix exponentials comparing them with their spatial autoregressive counterparts. We propose a new implementation of Bayesian parameter estimation for the MESS model with vague prior distributions, which is shown to be precise and computationally efficient. Our implementations also account for spatially lagged regressors. We further allow for location-specific heterogeneity, which we model by including spatial splines. We conclude by comparing the performances of the different model specifications in applications to a real data set and by running simulations. Both the applications and the simulations suggest that the spatial splines are a flexible and efficient way to account for spatial heterogeneities governed by unknown mechanisms. PMID:29492375
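The defining feature of MESS is the matrix exponential in the data-generating process. The toy sketch below builds a ring-lattice weight matrix and simulates exp(αW)y = Xβ + ε; one computational attraction is that det(exp(αW)) = exp(α·tr(W)) = 1 for a zero-trace W, so the Gaussian log-likelihood needs no Jacobian log-determinant term, unlike SAR. All numbers are illustrative.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(9)
n = 50

# Row-standardized weight matrix W for a ring of n locations (two neighbours each).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

# MESS data-generating process: expm(alpha*W) y = X beta + eps.
alpha, beta = -0.8, np.array([1.0, 2.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = expm(-alpha * W) @ (X @ beta + rng.normal(0.0, 0.1, size=n))

# Transforming back recovers the regression residuals directly.
residuals = expm(alpha * W) @ y - X @ beta
print("residual std:", round(float(residuals.std()), 3))
```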
Bayesian exponential random graph modelling of interhospital patient referral networks.
Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro
2017-08-15
Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.
Red blood cell use in Switzerland: trends and demographic challenges
Volken, Thomas; Buser, Andreas; Castelli, Damiano; Fontana, Stefano; Frey, Beat M.; Rüsges-Wolter, Ilka; Sarraj, Amira; Sigle, Jörg; Thierbach, Jutta; Weingand, Tina; Taleghani, Behrouz Mansouri
2018-01-01
Background Several studies have raised concerns that future demand for blood products may not be met. The ageing of the general population and the fact that a large proportion of blood products is transfused to elderly patients has been identified as an important driver of blood shortages. The aim of this study was to collect, for the first time, nationally representative data regarding blood donors and transfusion recipients in order to predict the future evolution of blood donations and red blood cell (RBC) use in Switzerland between 2014 and 2035. Materials and methods Blood donor and transfusion recipient data, subdivided by the subjects’ age and gender were obtained from Regional Blood Services and nine large, acute-care hospitals in various regions of Switzerland. Generalised additive regression models and time-series models with exponential smoothing were employed to estimate trends of whole blood donations and RBC transfusions. Results The trend models employed suggested that RBC demand could equal supply by 2018 and could eventually cause an increasing shortfall of up to 77,000 RBC units by 2035. Discussion Our study highlights the need for continuous monitoring of trends of blood donations and blood transfusions in order to take proactive measures aimed at preventing blood shortages in Switzerland. Measures should be taken to improve donor retention in order to prevent a further erosion of the blood donor base. PMID:27723455
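A sketch of the exponential smoothing component on hypothetical monthly counts (the Swiss registry data are not public here), assuming statsmodels and pandas are available; the paper combines such models with generalised additive regression, which is omitted.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(7)
months = pd.date_range("2014-01-01", periods=48, freq="MS")
# Declining trend plus annual seasonality, standing in for monthly RBC issues.
rbc = pd.Series(3000.0 - 5.0 * np.arange(48)
                + 100.0 * np.sin(np.arange(48) * 2.0 * np.pi / 12.0)
                + rng.normal(0.0, 40.0, 48), index=months)

# Holt-Winters: additive trend and additive annual seasonality.
fit = ExponentialSmoothing(rbc, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
print(fit.forecast(24).round(0))  # extrapolate the declining trend 2 years ahead
```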
NASA Astrophysics Data System (ADS)
Feng-Hua, Zhang; Gui-De, Zhou; Kun, Ma; Wen-Juan, Ma; Wen-Yuan, Cui; Bo, Zhang
2016-07-01
Previous studies have shown that, for the three main stages of the development and evolution of asymptotic giant branch (AGB) star s-process models, the neutron exposure distribution (DNE) in the nucleosynthesis region can always be considered an exponential function, i.e., ρ_AGB(τ) = (C/τ0) exp(−τ/τ0), over an effective range of neutron exposure values. However, the specific expressions for the proportionality factor C and the mean neutron exposure τ0 in the exponential distribution function for different models are not completely determined in the related literature. By dissecting the basic method used to obtain the exponential DNE, and systematically analyzing the solution procedures for the neutron exposure distribution functions in different stellar models, the general formulae, as well as their auxiliary equations, for calculating C and τ0 are derived. Given the discrete neutron exposure distribution Pk, the relationships of C and τ0 with the model parameters can be determined. The result of this study effectively solves the problem of analytically calculating the DNE in the current low-mass AGB star s-process nucleosynthesis model of 13C-pocket radiative burning.
NASA Astrophysics Data System (ADS)
Ijjas, Anna; Steinhardt, Paul J.
2015-10-01
We introduce "anamorphic" cosmology, an approach for explaining the smoothness and flatness of the universe on large scales and the generation of a nearly scale-invariant spectrum of adiabatic density perturbations. The defining feature is a smoothing phase that acts like a contracting universe based on some Weyl frame-invariant criteria and an expanding universe based on other frame-invariant criteria. An advantage of the contracting aspects is that it is possible to avoid the multiverse and measure problems that arise in inflationary models. Unlike ekpyrotic models, anamorphic models can be constructed using only a single field and can generate a nearly scale-invariant spectrum of tensor perturbations. Anamorphic models also differ from pre-big bang and matter bounce models that do not explain the smoothness. We present some examples of cosmological models that incorporate an anamorphic smoothing phase.
Luo, Li; Luo, Le; Zhang, Xinli; He, Xiaoli
2017-07-10
Accurate forecasting of hospital outpatient visits is beneficial for the reasonable planning and allocation of healthcare resources to meet medical demand. Given the multiple attributes of daily outpatient visits, such as randomness, cyclicity and trend, time-series methods such as ARIMA can be a good choice for outpatient visit forecasting. On the other hand, hospital outpatient visits are also affected by the doctors' scheduling, and these effects are not purely random. In view of this non-random component, this paper presents a new forecasting model that takes cyclicity and the day-of-the-week effect into consideration. We formulate a seasonal ARIMA (SARIMA) model on a daily time series and a single exponential smoothing (SES) model on the day-of-the-week time series, and finally establish a combinatorial model by modifying them. The models are applied to one year of daily visit data for urban outpatients in two internal medicine departments of a large hospital in Chengdu, to forecast daily outpatient visits about one week ahead. The proposed model is used to forecast the cross-sectional data for 7 consecutive days of daily outpatient visits over an 8-week period, based on 43 weeks of observation data collected during one year. The results show that the two traditional single models and the combinatorial model are simple to implement and computationally light, while being appropriate for short-term forecast horizons. Furthermore, the combinatorial model captures the comprehensive features of the time-series data better. The combinatorial model achieves better prediction performance than either single model, with lower residual variance and a small mean residual error, though it remains to be optimized in future research.
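A condensed sketch of the two building blocks on synthetic daily counts, assuming statsmodels and pandas are available: a SARIMA model with a weekly season on the daily series, and an SES model fitted separately to each day-of-the-week series. The final averaging step is a stand-in for the paper's more elaborate modification rule.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(8)
days = pd.date_range("2016-01-04", periods=301, freq="D")
dow = days.dayofweek.to_numpy()
# Synthetic visits with a day-of-the-week pattern (weekdays busier than Sunday).
visits = pd.Series(400.0 + 60.0 * (dow < 5) - 30.0 * (dow == 6)
                   + rng.normal(0.0, 15.0, days.size), index=days)

# SARIMA on the daily series with a weekly season.
sarima = SARIMAX(visits, order=(1, 0, 1),
                 seasonal_order=(1, 0, 1, 7)).fit(disp=False)
f_sarima = sarima.forecast(7)

# SES per day-of-the-week series, one step ahead for each of the next 7 days.
f_ses = pd.Series([SimpleExpSmoothing(visits[dow == d].to_numpy()).fit()
                   .forecast(1)[0] for d in range(7)])

# Stand-in combination step: average the two forecasts day by day.
combined = (f_sarima.to_numpy()
            + f_ses.reindex(f_sarima.index.dayofweek).to_numpy()) / 2.0
print(np.round(combined, 1))
```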
NASA Astrophysics Data System (ADS)
Wilde, M. V.; Sergeeva, N. V.
2018-05-01
An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Rabotnov's fractional exponential integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying the Laplace and Fourier transforms. The simplified equations for the originals are written using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation allows us to formulate the explicit model using a Rabotnov fractional exponential integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all the possible time domains.
Shift-Invariant Image Reconstruction of Speckle-Degraded Images Using Bispectrum Estimation
1990-05-01
process with the requisite negative exponential pdf. I call this model the Negative Exponential Model (NENI). The NENI flowchart is seen in Figure 6... [Figure caption residue: statistical histograms and phase panels; truth object speckled via the NENI; histogram of speckle.]
Hu, Jin; Wang, Jun
2015-06-01
In recent years, complex-valued recurrent neural networks have been developed and analysed in depth, given their good modelling performance for applications involving complex-valued elements. In implementing continuous-time dynamical systems for simulation or computational purposes, it is often necessary to utilize a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results for several numerical examples are presented to illustrate the theoretical results, and an application to associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cao, Boqiang; Zhang, Qimin; Ye, Ming
2016-11-29
We present a mean-square exponential stability analysis for impulsive stochastic genetic regulatory networks (GRNs) with time-varying delays and reaction-diffusion driven by fractional Brownian motion (fBm). By constructing a Lyapunov functional and using linear matrix inequality for stochastic analysis we derive sufficient conditions to guarantee the exponential stability of the stochastic model of impulsive GRNs in the mean-square sense. Meanwhile, the corresponding results are obtained for the GRNs with constant time delays and standard Brownian motion. Finally, an example is presented to illustrate our results of the mean-square exponential stability analysis.
A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.
1981-06-01
Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on...these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when...assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that
Confronting quasi-exponential inflation with WMAP seven
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pal, Barun Kumar; Pal, Supratik; Basu, B., E-mail: barunp1985@rediffmail.com, E-mail: pal@th.physik.uni-bonn.de, E-mail: banasri@isical.ac.in
2012-04-01
We confront quasi-exponential models of inflation with the WMAP seven-year dataset using the Hamilton-Jacobi formalism. With a phenomenological Hubble parameter representing quasi-exponential inflation, we develop the formalism and subject the analysis to confrontation with WMAP seven using the publicly available code CAMB. The observable parameters are found to fare extremely well against WMAP seven. We also obtain a ratio of tensor to scalar amplitudes that may be detectable by PLANCK.
NASA Astrophysics Data System (ADS)
Hayat, Tanzila; Nadeem, S.
2018-03-01
This paper examines three-dimensional Eyring-Powell fluid flow over an exponentially stretching surface with heterogeneous-homogeneous chemical reactions. A new model of heat flux suggested by Cattaneo and Christov is employed to study the properties of the relaxation time. From the present analysis we observe an inverse relationship between temperature and thermal relaxation time: the temperature in the Cattaneo-Christov heat flux model is lower than in the classical Fourier model. In this paper the three-dimensional Cattaneo-Christov heat flux model over an exponentially stretching surface is calculated for the first time in the literature. For negative values of the temperature exponent, the temperature profile first rises to its maximum value and then gradually declines to zero, indicating the occurrence of the Sparrow-Gregg hill (SGH) phenomenon. Also, for higher values of the strength-of-reaction parameters, the concentration profile decreases.
Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS
NASA Astrophysics Data System (ADS)
Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin
2015-08-01
Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors. Quantifying the smoothing effect allows improvements in efficiency for finishing precision optics. A series of experiments in spin motion was performed to study the smoothing effect in correcting mid-spatial-frequency errors. Some experiments used the same pitch tool at different spinning speeds, and others used different tools at the same spinning speed. Shu's model was introduced and improved to describe and compare the smoothing efficiency for different spinning speeds and different tools. The experimental results show that the mid-spatial-frequency errors on the initial surface were nearly smoothed out after the process in spin motion, and the number of smoothing iterations can be estimated by the model before the process. This method was also applied to smooth an aspherical component with an obvious mid-spatial-frequency error after magnetorheological finishing. As a result, a high-precision aspheric optical component was obtained with PV = 0.1λ and RMS = 0.01λ.
Periodic orbit spectrum in terms of Ruelle-Pollicott resonances
NASA Astrophysics Data System (ADS)
Leboeuf, P.
2004-02-01
Fully chaotic Hamiltonian systems possess an infinite number of classical solutions which are periodic, e.g., a trajectory p returns to its initial conditions after some fixed time τ_p. Our aim is to investigate the spectrum {τ_1, τ_2, …} of periods of the periodic orbits. An explicit formula for the density ρ(τ) = Σ_p δ(τ − τ_p) is derived in terms of the eigenvalues of the classical evolution operator. The density is naturally decomposed into a smooth part plus an interferent sum over oscillatory terms. The frequencies of the oscillatory terms are given by the imaginary part of the complex eigenvalues (Ruelle-Pollicott resonances). For large periods, corrections to the well-known exponential growth of the smooth part of the density are obtained. An alternative formula for ρ(τ) in terms of the zeros and poles of the Ruelle ζ function is also discussed. The results are illustrated with the geodesic motion in billiards of constant negative curvature. Connections with the statistical properties of the corresponding quantum eigenvalues, random-matrix theory, and discrete maps are also considered. In particular, a random-matrix conjecture is proposed for the eigenvalues of the classical evolution operator of chaotic billiards.
SMOOTHING ROTATION CURVES AND MASS PROFILES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berrier, Joel C.; Sellwood, J. A.
2015-02-01
We show that spiral activity can erase pronounced features in disk galaxy rotation curves. We present simulations of growing disks, in which the added material has a physically motivated distribution, as well as other examples of physically less realistic accretion. In all cases, attempts to create unrealistic rotation curves were unsuccessful because spiral activity rapidly smoothed away features in the disk mass profile. The added material was redistributed radially by the spiral activity, which was itself provoked by the density feature. In the case of a ridge-like feature in the surface density profile, we show that two unstable spiral modes develop, and the associated angular momentum changes in horseshoe orbits remove particles from the ridge and spread them both inward and outward. This process rapidly erases the density feature from the disk. We also find that the lack of a feature when transitioning from disk to halo dominance in the rotation curves of disk galaxies, the so-called "disk-halo conspiracy", could also be accounted for by this mechanism. We do not create perfectly exponential mass profiles in the disk, but suggest that this mechanism contributes to their creation.
Cloud Forecast Simulation Model.
1981-10-01
...creasing the kurtosis of the distribution, i.e., making it more negative (more platykurtic). Case (a) might be the distribution of forecast cloud cover before smoothing, and (b) might be the distribution after smoothing. Characteristically, smoothing makes cloud cover distributions less platykurtic... 19, this effect of smoothing can be described in terms of making the smoothed distribution less platykurtic than the unsmoothed distribution
Investigation of advanced UQ for CRUD prediction with VIPRE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldred, Michael Scott
2011-09-01
This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
Cospectral budget of turbulence explains the bulk properties of smooth pipe flow.
Katul, Gabriel G; Manes, Costantino
2014-12-01
Connections between the wall-normal turbulent velocity spectrum E_ww(k) at wave number k and the mean velocity profile (MVP) are explored in pressure-driven flows confined within smooth walls at moderate to high bulk Reynolds numbers (Re). These connections are derived via a cospectral budget for the longitudinal (u') and wall-normal (w') velocity fluctuations, which includes a production term due to mean shear interacting with E_ww(k), viscous effects, and a decorrelation between u' and w' by pressure-strain effects [= π(k)]. π(k) is modeled using a conventional Rotta-like return-to-isotropy closure, adjusted to include the effects of isotropization of the production term. The resulting cospectral budget yields a generalization of a previously proposed "spectral link" between the MVP and the spectrum of turbulence. The proposed cospectral budget is also shown to reproduce the measured MVP across the pipe with changing Re, including the MVP shapes in the buffer and wake regions. Because of the links between E_ww(k) and the MVP, the effects on the MVP shapes of intermittency corrections to inertial-subrange scales and of the so-called spectral bottleneck, reported as k approaches viscous dissipation eddy sizes (η), are investigated and shown to be of minor importance. Inclusion of a local Reynolds number correction to a parameter associated with the spectral exponential cutoff as kη → 1 appears to be more significant to the MVP shape in the buffer region. While the bulk shape of the MVP is reasonably reproduced in all regions of the pipe, the solution to the cospectral budget systematically underestimates the negative curvature of the MVP within the buffer layer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ijjas, Anna; Steinhardt, Paul J., E-mail: aijjas@princeton.edu, E-mail: steinh@princeton.edu
We introduce "anamorphic" cosmology, an approach for explaining the smoothness and flatness of the universe on large scales and the generation of a nearly scale-invariant spectrum of adiabatic density perturbations. The defining feature is a smoothing phase that acts like a contracting universe based on some Weyl frame-invariant criteria and an expanding universe based on other frame-invariant criteria. An advantage of the contracting aspects is that it is possible to avoid the multiverse and measure problems that arise in inflationary models. Unlike ekpyrotic models, anamorphic models can be constructed using only a single field and can generate a nearly scale-invariant spectrum of tensor perturbations. Anamorphic models also differ from pre-big bang and matter bounce models that do not explain the smoothness. We present some examples of cosmological models that incorporate an anamorphic smoothing phase.
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
A stochastic evolutionary model generating a mixture of exponential distributions
NASA Astrophysics Data System (ADS)
Fenner, Trevor; Levene, Mark; Loizou, George
2016-02-01
Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
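The mixture-of-exponentials survival function at the heart of this model has the closed form S(t) = Σ_i w_i e^(−λ_i t). As a rough illustration of how such a mixture can be recovered from data (a minimal sketch with invented rates and weights, not the authors' code), a two-component version can be fitted to an empirical survival curve with SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit

def mix_survival(t, w, lam1, lam2):
    """Two-component exponential mixture survival:
    S(t) = w*exp(-lam1*t) + (1-w)*exp(-lam2*t)."""
    return w * np.exp(-lam1 * t) + (1 - w) * np.exp(-lam2 * t)

# Synthetic lifetimes from a hypothetical two-component mixture
rng = np.random.default_rng(0)
fast = rng.exponential(1 / 0.5, 5000)    # short-lived component
slow = rng.exponential(1 / 0.05, 5000)   # long-lived component
lifetimes = np.where(rng.random(5000) < 0.3, fast, slow)

t_grid = np.linspace(0.0, 60.0, 61)
S_emp = np.array([(lifetimes > t).mean() for t in t_grid])  # empirical survival

popt, _ = curve_fit(mix_survival, t_grid, S_emp,
                    p0=[0.5, 0.2, 0.02], bounds=([0, 0, 0], [1, 10, 10]))
print("w=%.2f  lam1=%.3f  lam2=%.3f" % tuple(popt))
```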
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1981-01-01
A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.
Gao, Bo-Cai; Liu, Ming
2013-01-01
Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which contains the minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
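A minimal sketch of the gain-curve idea, assuming synthetic spectra and an invented smoothing parameter (this is not the authors' implementation): each model-derived spectrum is fitted with a cubic smoothing spline, the per-band ratio of smoothed to raw reflectance is averaged over pixels into a common gain curve, and the gain is then applied scene-wide.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def common_gain(wavelengths, spectra, s=1e-3):
    """Average the per-band ratio of spline-smoothed to raw reflectance
    over all pixels, giving a single multiplicative gain curve."""
    gains = []
    for spec in spectra:
        smooth = UnivariateSpline(wavelengths, spec, k=3, s=s)(wavelengths)
        gains.append(smooth / spec)
    return np.mean(gains, axis=0)

# Hypothetical scene: 50 spectra sharing a spiky multiplicative artifact
wl = np.linspace(0.4, 2.5, 200)                 # wavelength, micrometers
true = 0.3 + 0.1 * np.sin(3 * wl)               # smooth "true" reflectance
artifact = 1 + 0.02 * np.sin(80 * wl)           # shared residual artifact
spectra = np.array([true * artifact * (1 + 0.05 * i / 50) for i in range(50)])

gain = common_gain(wl, spectra)
smoothed = spectra * gain                        # apply the gain scene-wide
```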
Verification of the exponential model of body temperature decrease after death in pigs.
Kaliszan, Michal; Hauser, Roman; Kaliszan, Roman; Wiczling, Paweł; Buczyñski, Janusz; Penkowski, Michal
2005-09-01
The authors have conducted a systematic study in pigs to verify the models of post-mortem body temperature decrease currently employed in forensic medicine. Twenty-four-hour automatic temperature recordings were performed at four body sites starting 1.25 h after pig killing in an industrial slaughterhouse under typical environmental conditions (19.5-22.5 degrees C). The animals had been randomly selected during a regular manufacturing process. The temperature decrease time plots, drawn starting 75 min after death for the eyeball, the orbit soft tissues, the rectum and muscle tissue, were found to fit the single-exponential thermodynamic model originally proposed by H. Rainy in 1868. In view of the actual intersubject variability, the addition of a second exponential term to the model was demonstrated to be statistically insignificant. Therefore, the two-exponential model for death time estimation frequently recommended in the forensic medicine literature, even if theoretically substantiated for individual test cases, provides no advantage as regards the reliability of estimation in an actual case. The claimed improvement in the precision of time of death estimation by reconstructing an individual curve on the basis of two dead-body temperature measurements taken 1 h apart, or taken continuously for a longer time (about 4 h), has also been proved incorrect. It was demonstrated that the reported increase in precision of time of death estimation due to use of a multiexponential model, with individual exponential terms to account for the cooling rate of specific body sites separately, is artifactual. The results of this study support the use of the eyeball and/or the orbit soft tissues as temperature measuring sites shortly after death. A single-exponential model applied to the eyeball cooling has been shown to provide a very precise estimation of the time of death up to approximately 13 h after death. For the period thereafter, a better estimation of the time of death is obtained from temperature data collected from the muscles or the rectum.
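The single-exponential model referred to above is Newtonian cooling, T(t) = T_env + (T_0 − T_env)·e^(−kt), which can be inverted for the time since death once k is known. A hedged sketch (all temperatures, times and the rate constant are hypothetical, not values from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

T_ENV, T_BODY = 21.0, 37.0   # ambient and initial body temperature (deg C), assumed

def cooling(t, k):
    """Single-exponential (Newtonian) cooling: T(t) = T_env + (T0 - T_env)*exp(-k*t)."""
    return T_ENV + (T_BODY - T_ENV) * np.exp(-k * t)

# Hypothetical eyeball-temperature recordings (hours since death, deg C)
t_obs = np.array([1.25, 2.0, 3.0, 4.0, 6.0, 8.0])
T_obs = cooling(t_obs, k=0.35) + np.random.default_rng(1).normal(0, 0.05, t_obs.size)

(k_hat,), _ = curve_fit(cooling, t_obs, T_obs, p0=[0.3])

def time_since_death(T_now, k=k_hat):
    """Invert the model: t = -ln((T - T_env)/(T0 - T_env)) / k."""
    return -np.log((T_now - T_ENV) / (T_BODY - T_ENV)) / k

print("k = %.3f 1/h, estimated interval at 28 deg C: %.1f h"
      % (k_hat, time_since_death(28.0)))
```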
NASA Astrophysics Data System (ADS)
Kuai, Zi-Xiang; Liu, Wan-Yu; Zhu, Yue-Min
2017-11-01
The aim of this work was to investigate the effect of multiple perfusion components on the pseudo-diffusion coefficient D* in the bi-exponential intravoxel incoherent motion (IVIM) model. Simulations were first performed to examine how the presence of multiple perfusion components influences D*. Real data from the livers (n = 31), spleens (n = 31) and kidneys (n = 31) of 31 volunteers were then acquired using DWI for the in vivo study, and the number of perfusion components in these tissues was determined, together with their perfusion fraction and D*, using an adaptive multi-exponential IVIM model. Finally, the bi-exponential model was applied to the real data, and the mean, standard variance and coefficient of variation of D* as well as the fitting residual were calculated over the 31 volunteers for each of the three tissues and compared between them. The results of both the simulations and the in vivo study showed that, for the bi-exponential IVIM model, both the variance of D* and the fitting residual tended to increase when the number of perfusion components was increased or when the difference between perfusion components became large. In addition, it was found that the kidney presented the fewest perfusion components among the three tissues. The present study demonstrated that multi-component perfusion is a main factor causing high variance of D*, and that the bi-exponential model should be used only when the tissues under investigation have few perfusion components, for example the kidney.
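The bi-exponential IVIM signal model under discussion is S(b)/S0 = f·e^(−b·D*) + (1 − f)·e^(−b·D), with perfusion fraction f. A minimal fitting sketch with synthetic b-values and signal (parameter values invented; not the authors' pipeline):

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, D_star, D):
    """Bi-exponential IVIM: perfusion fraction f, pseudo-diffusion D*, tissue diffusion D."""
    return f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D)

b_vals = np.array([0, 10, 20, 40, 80, 150, 300, 500, 800])   # s/mm^2
signal = ivim(b_vals, f=0.25, D_star=0.030, D=0.0012)        # hypothetical ground truth
signal += np.random.default_rng(2).normal(0, 0.005, b_vals.size)

popt, _ = curve_fit(ivim, b_vals, signal,
                    p0=[0.2, 0.02, 0.001],
                    bounds=([0, 0.003, 0], [1, 0.5, 0.003]))
print("f=%.2f  D*=%.4f  D=%.5f (mm^2/s)" % tuple(popt))
```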
Yuan, Jing; Yeung, David Ka Wai; Mok, Greta S. P.; Bhatia, Kunwar S.; Wang, Yi-Xiang J.; Ahuja, Anil T.; King, Ann D.
2014-01-01
Purpose: To technically investigate the non-Gaussian diffusion of head and neck diffusion-weighted imaging (DWI) at 3 Tesla and compare advanced non-Gaussian diffusion models, including diffusion kurtosis imaging (DKI), the stretched-exponential model (SEM), intravoxel incoherent motion (IVIM) and the statistical model, in patients with nasopharyngeal carcinoma (NPC). Materials and Methods: After ethics approval was granted, 16 patients with NPC were examined using DWI performed at 3T employing an extended b-value range from 0 to 1500 s/mm2. DWI signals were fitted to the mono-exponential and non-Gaussian diffusion models on primary tumor, metastatic node, spinal cord and muscle. Non-Gaussian parameter maps were generated and compared to apparent diffusion coefficient (ADC) maps in NPC. Results: Diffusion in NPC exhibited non-Gaussian behavior over the extended b-value range. Non-Gaussian models achieved significantly better fitting of the DWI signal than the mono-exponential model. Non-Gaussian diffusion coefficients were substantially different from the mono-exponential ADC both in magnitude and histogram distribution. Conclusion: Non-Gaussian diffusivity in head and neck tissues and NPC lesions could be assessed using non-Gaussian diffusion models. Non-Gaussian DWI analysis may reveal additional tissue properties beyond ADC and holds potential as a complementary tool for NPC characterization. PMID:24466318
Creation of current filaments in the solar corona
NASA Technical Reports Server (NTRS)
Mikic, Z.; Schnack, D. D.; Van Hoven, G.
1989-01-01
It has been suggested that the solar corona is heated by the dissipation of electric currents. The low value of the resistivity requires the magnetic field to have structure at very small length scales if this mechanism is to work. In this paper it is demonstrated that the coronal magnetic field acquires small-scale structure through the braiding produced by smooth, randomly phased, photospheric flows. The current density develops a filamentary structure and grows exponentially in time. Nonlinear processes in the ideal magnetohydrodynamic equations produce a cascade effect, in which the structure introduced by the flow at large length scales is transferred to smaller scales. If this process continues down to the resistive dissipation length scale, it would provide an effective mechanism for coronal heating.
A Numerical, Literal, and Converged Perturbation Algorithm
NASA Astrophysics Data System (ADS)
Wiesel, William E.
2017-09-01
The KAM theorem and von Zeipel's method are applied to a perturbed harmonic oscillator, and it is noted that the KAM methodology does not allow for necessary frequency or angle corrections, while von Zeipel's does. The KAM methodology can be carried out with purely numerical methods, since its generating function does not contain momentum dependence. The KAM iteration is extended to allow for frequency and angle changes, and in the process apparently can be successfully applied to degenerate systems normally ruled out by the classical KAM theorem. Convergence is observed to be geometric, not exponential, but it proceeds smoothly to machine precision. The algorithm produces a converged perturbation solution by numerical methods, while still retaining literal variable dependence, at least in the vicinity of a given trajectory.
Finite-temperature effects in helical quantum turbulence
NASA Astrophysics Data System (ADS)
Clark Di Leoni, Patricio; Mininni, Pablo D.; Brachet, Marc E.
2018-04-01
We perform a study of the evolution of helical quantum turbulence at different temperatures by solving numerically the Gross-Pitaevskii and the stochastic Ginzburg-Landau equations, using up to 4096^3 grid points with a pseudospectral method. We show that for temperatures close to the critical one, the fluid described by these equations can act as a classical viscous flow, with the decay of the incompressible kinetic energy and the helicity becoming exponential. The transition from this behavior to the one observed at zero temperature is smooth as a function of temperature. Moreover, the presence of strong thermal effects can inhibit the development of a proper turbulent cascade. We provide Ansätze for the effective viscosity and friction as a function of the temperature.
GEE-Smoothing Spline in Semiparametric Model with Correlated Nominal Data
NASA Astrophysics Data System (ADS)
Ibrahim, Noor Akma; Suliadi
2010-11-01
In this paper we propose GEE-Smoothing spline for the estimation of semiparametric models with correlated nominal data. The method can be seen as an extension of the parametric generalized estimating equation to semiparametric models. The nonparametric component is estimated using a smoothing spline, specifically the natural cubic spline. We use a profile algorithm in the estimation of both parametric and nonparametric components. The properties of the estimators are evaluated using simulation studies.
Cocho, Germinal; Miramontes, Pedro; Mansilla, Ricardo; Li, Wentian
2014-12-01
We examine in detail the relationship between exponential correlation functions and Markov models in a bacterial genome. Despite the well known fact that Markov models generate sequences with correlation functions that decay exponentially, simply constructed Markov models based on nearest-neighbor dimers (first-order), trimers (second-order), up to hexamers (fifth-order), treating the DNA sequence as homogeneous, all fail to predict the value of the exponential decay rate. Even reading-frame-specific Markov models (both first- and fifth-order) could not explain the fact that the exponential decay is very slow. Starting with the in-phase coding DNA sequence (CDS), we investigated correlation within fixed-codon-position subsequences, and in artificially constructed sequences obtained by packing CDSs with out-of-phase spacers, as well as by altering the CDS length distribution through an imposed upper limit. From these targeted analyses, we conclude that the correlation in the bacterial genomic sequence is mainly due to a mixing of heterogeneous statistics at different codon positions, and that the decay of correlation is due to the possible phase offset between neighboring CDSs. There are also small contributions to the correlation from bases at the same codon position, as well as from non-coding sequences. These results show that the seemingly simple exponential correlation functions in the bacterial genome hide a complexity in correlation structure that is not suited to modeling by a Markov chain on a homogeneous sequence. Other results include the use of the second-largest eigenvalue (in absolute value) to represent the 16 correlation functions, and the prediction of a 10-11 base periodicity from the hexamer frequencies.
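The Markov-chain benchmark used in this comparison can be reproduced in a few lines: for a homogeneous first-order chain with transition matrix P, correlations decay like |λ2|^d, where λ2 is the second-largest-magnitude eigenvalue of P. A hedged sketch on a simulated sequence (the transition matrix is invented, not estimated from a genome):

```python
import numpy as np

def transition_matrix(seq, alphabet="ACGT"):
    """Estimate a first-order Markov transition matrix from a symbol sequence."""
    idx = {c: i for i, c in enumerate(alphabet)}
    counts = np.zeros((len(alphabet), len(alphabet)))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[idx[a], idx[b]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Simulate a sequence from an invented transition matrix
P_true = np.array([[0.5, 0.2, 0.2, 0.1],
                   [0.1, 0.5, 0.2, 0.2],
                   [0.2, 0.1, 0.5, 0.2],
                   [0.2, 0.2, 0.1, 0.5]])
rng = np.random.default_rng(3)
states = [0]
for _ in range(50_000):
    states.append(rng.choice(4, p=P_true[states[-1]]))
seq = "".join("ACGT"[s] for s in states)

P_hat = transition_matrix(seq)
lam2 = np.sort(np.abs(np.linalg.eigvals(P_hat)))[::-1][1]
# Correlations of a homogeneous chain decay as C(d) ~ |lambda_2|**d
print("estimated |lambda_2| = %.3f" % lam2)
```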
Ihlen, Espen A. F.; van Schooten, Kimberley S.; Bruijn, Sjoerd M.; Pijnappels, Mirjam; van Dieën, Jaap H.
2017-01-01
Over the last decades, various measures have been introduced to assess stability during walking. All of these measures assume that gait stability may be equated with exponential stability, where dynamic stability is quantified by a Floquet multiplier or Lyapunov exponent. These specific constructs of dynamic stability assume that the gait dynamics are time independent and without phase transitions. In this case the temporal change in distance, d(t), between neighboring trajectories in state space is assumed to be an exponential function of time. However, results from walking models and empirical studies show that the assumptions of exponential stability break down in the vicinity of phase transitions that are present in each step cycle. Here we apply a general non-exponential construct of gait stability, called fractional stability, which can define dynamic stability in the presence of phase transitions. Fractional stability employs the fractional indices, α and β, of the differential operator, which allow modeling of singularities in d(t) that cannot be captured by exponential stability. Fractional stability provided an improved fit of d(t) compared to exponential stability when applied to trunk accelerations during daily-life walking in community-dwelling older adults. Moreover, using multivariate empirical mode decomposition surrogates, we found that the singularities in d(t), which were well modeled by fractional stability, are created by phase-dependent modulation of gait. The new construct of fractional stability may represent a physiologically more valid concept of stability in the vicinity of phase transitions and may thus pave the way for a more unified concept of gait stability. PMID:28900400
Stavn, R H
1988-01-15
The role of the Lambert-Beer law in ocean optics is critically examined. The Lambert-Beer law and the three-parameter model of the submarine light field are used to construct an optical energy budget for any hydrosol. It is further applied to the analytical exponential decay coefficient of the light field and used to estimate the optical properties and effects of the dissolved/suspended component in upper ocean layers. The concepts of the empirical exponential decay coefficient (diffuse attenuation coefficient) of the light field and a constant exponential decay coefficient for molecular water are analyzed quantitatively. A constant exponential decay coefficient for water is rejected. The analytical exponential decay coefficient is used to analyze optical gradients in ocean waters.
A review of the matrix-exponential formalism in radiative transfer
NASA Astrophysics Data System (ADS)
Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian
2017-07-01
This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
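Numerically, the formalism reduces to evaluating exp(Aτ) for a layer matrix A and optical thickness τ; SciPy's expm implements a Padé-based algorithm, and for optically thin layers a truncated Taylor series is often adequate, mirroring the approximations discussed above. A toy comparison on a random small-norm matrix (not an actual radiative-transfer operator):

```python
import numpy as np
from scipy.linalg import expm

def expm_taylor(A, order=6):
    """Truncated Taylor series exp(A) ~ sum_k A^k / k!, adequate for small ||A||."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, order + 1):
        term = term @ A / k
        result += term
    return result

rng = np.random.default_rng(4)
A = rng.normal(size=(8, 8)) * 0.05       # "optically thin": small norm
ref = expm(A)                            # Pade-based reference
err = np.linalg.norm(expm_taylor(A) - ref) / np.linalg.norm(ref)
print("relative error of 6th-order Taylor vs expm: %.2e" % err)
```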
A SAS IML Macro for Loglinear Smoothing
ERIC Educational Resources Information Center
Moses, Tim; von Davier, Alina
2011-01-01
Polynomial loglinear models for one-, two-, and higher-way contingency tables have important applications to measurement and assessment. They are essentially regarded as a smoothing technique, which is commonly referred to as loglinear smoothing. A SAS IML (SAS Institute, 2002a) macro was created to implement loglinear smoothing according to…
NASA Astrophysics Data System (ADS)
Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego
2017-04-01
In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.
Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler.
Jin, Ick Hoon; Yuan, Ying; Liang, Faming
2013-10-01
Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as a MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency.
Fracture analysis of a central crack in a long cylindrical superconductor with exponential model
NASA Astrophysics Data System (ADS)
Zhao, Yu Feng; Xu, Chi
2018-05-01
The fracture behavior of a long cylindrical superconductor is investigated by modeling a central crack that is induced by electromagnetic force. Based on the exponential model, the stress intensity factors (SIFs) with the dimensionless parameter p and the length of the crack a/R for the zero-field cooling (ZFC) and field-cooling (FC) processes are numerically simulated using the finite element method (FEM) and assuming a persistent current flow. As the applied field Ba decreases, the dependence of p and a/R on the SIFs in the ZFC process is exactly opposite to that observed in the FC process. Numerical results indicate that the exponential model exhibits different characteristics for the trend of the SIFs from the results obtained using the Bean and Kim models. This implies that the crack length and the trapped field have significant effects on the fracture behavior of bulk superconductors. The obtained results are useful for understanding the critical-state model of high-temperature superconductors in crack problem.
Role of mast cells in bronchial contraction in nonallergic obstructive lung pathology.
Kuzubova, Nataliya A; Lebedeva, Elena S; Titova, Olga N; Fedin, Anatoliy N; Dvorakovskaya, Ivetta V
2017-01-01
The role of mast cells in contractile bronchial smooth muscle activity has been evaluated in a model of chronic obstructive pulmonary disease induced in rats that were intermittently exposed to nitrogen dioxide (NO2) for 60 days. Starting from the 31st day, one group of rats inhaled sodium cromoglycate before exposure to NO2 to stabilize mast cell membranes. The second group (control) was not treated. Isometric smooth muscle contraction was analysed in isolated bronchial samples in response to nerve and smooth muscle stimulation. Histological analysis revealed large numbers of mast cells in lung tissue of COPD model rats. The inhibition of mast cell degranulation by sodium cromoglycate prevented the development of nerve-stimulated bronchial smooth muscle hyperactivity in COPD model rats. Histamine or adenosine-induced hyperactivity on nerve stimulation was also inhibited by sodium cromoglycate in bronchial smooth muscle in both control and COPD model rats. This suggests that the mechanism of contractile activity enhancement of bronchial wall smooth muscle cells may be mediated through the activation of resident mast cells' transmembrane adenosine receptors resulting in their partial degranulation, with the released histamine acting upon histamine H1-receptors which trigger reflex pathways via intramural ganglion neurons.
Switching control of an R/C hovercraft: stabilization and smooth switching.
Tanaka, K; Iwasaki, M; Wang, H O
2001-01-01
This paper presents stable switching control of a radio-controlled (R/C) hovercraft, which is a nonholonomic (nonlinear) system. To exactly represent its nonlinear dynamics and, more importantly, to maintain controllability of the system, we propose a new switching fuzzy model that has local Takagi-Sugeno (T-S) fuzzy models and switches between them according to states, external variables, and/or time. A switching fuzzy controller is constructed by mirroring the rule structure of the switching fuzzy model of the R/C hovercraft. We derive linear matrix inequality (LMI) conditions for ensuring the stability of the closed-loop system consisting of a switching fuzzy model and controller. Furthermore, to guarantee smooth switching of the control input at switching boundaries, we also derive a smooth switching condition represented in terms of LMIs. A stable switching fuzzy controller satisfying the smooth switching condition is designed by simultaneously solving both sets of LMIs. The simulation and experimental results for the trajectory control of an R/C hovercraft show the validity of the switching fuzzy model and controller design, particularly the smooth switching condition.
NASA Astrophysics Data System (ADS)
Krugon, Seelam; Nagaraju, Dega
2017-05-01
This work proposes a two-echelon supply chain inventory system in which the manufacturer offers a credit period to the retailer under exponential price-dependent demand. Demand is expressed as an exponential function of the retailer's unit selling price. A mathematical model is framed to derive the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. The main objective of the paper is to incorporate trade credit from the manufacturer to the retailer with exponential price-dependent demand; the retailer would like to delay payments to the manufacturer. In the first stage, the retailer's and manufacturer's cost expressions are written as functions of ordering cost, carrying cost, and transportation cost; in the second stage, these expressions are combined. A MATLAB program is written to derive the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. Managerial insights can be drawn from the derived optimality criteria. The research findings show that the total cost of the supply chain decreases as the credit period increases under exponential price-dependent demand. To analyse the influence of the model parameters, a parametric analysis is also carried out with the help of a numerical example.
2010-06-01
GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML)... accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a... estimators for clock offset estimation in non-Gaussian or non
NASA Astrophysics Data System (ADS)
Ivashchuk, V. D.; Ernazarov, K. K.
2017-01-01
A (n + 1)-dimensional gravitational model with cosmological constant and Gauss-Bonnet term is studied. The ansatz with diagonal cosmological metrics is adopted, and solutions with exponential dependence of the scale factors, a_i ~ exp(v_i t), i = 1, …, n, are considered. The stability analysis of the solutions with non-static volume factor is presented. We show that the solutions with v_1 = v_2 = v_3 = H > 0 and small enough variation of the effective gravitational constant G are stable if a certain restriction on (v_i) is obeyed. New examples of stable exponential solutions with zero variation of G in dimensions D = 1 + m + 2 with m > 2 are presented.
NASA Astrophysics Data System (ADS)
Elmegreen, Bruce G.
2016-10-01
Exponential radial profiles are ubiquitous in spiral and dwarf irregular galaxies, but the origin of this structural form is not understood. This talk will review the observations of exponential and double-exponential disks, considering both the light and the mass profiles, and the contributions from stars and gas. Several theories for this structure will also be reviewed, including primordial collapse, bar and spiral torques, clump torques, galaxy interactions, disk viscosity and other internal processes of angular momentum exchange, and stellar scattering off of clumpy structure. The only process currently known that can account for this structure in the most theoretically difficult case is stellar scattering off disk clumps. Stellar orbit models suggest that such scattering can produce exponentials even in isolated dwarf irregulars that have no bars or spirals, little shear or viscosity, and profiles that go out too far for the classical Mestel case of primordial collapse with specific angular momentum conservation.
An exactly solvable, spatial model of mutation accumulation in cancer
NASA Astrophysics Data System (ADS)
Paterson, Chay; Nowak, Martin A.; Waclaw, Bartlomiej
2016-12-01
One of the hallmarks of cancer is the accumulation of driver mutations which increase the net reproductive rate of cancer cells and allow them to spread. This process has been studied in mathematical models of well mixed populations, and in computer simulations of three-dimensional spatial models. But the computational complexity of these more realistic, spatial models makes it difficult to simulate realistically large and clinically detectable solid tumours. Here we describe an exactly solvable mathematical model of a tumour featuring replication, mutation and local migration of cancer cells. The model predicts a quasi-exponential growth of large tumours, even if different fragments of the tumour grow sub-exponentially due to nutrient and space limitations. The model reproduces clinically observed tumour growth times using biologically plausible rates for cell birth, death, and migration rates. We also show that the expected number of accumulated driver mutations increases exponentially in time if the average fitness gain per driver is constant, and that it reaches a plateau if the gains decrease over time. We discuss the realism of the underlying assumptions and possible extensions of the model.
Dórea, Fernanda C; McEwen, Beverly J; McNab, W Bruce; Revie, Crawford W; Sanchez, Javier
2013-06-06
Diagnostic test orders to an animal laboratory were explored as a data source for monitoring trends in the incidence of clinical syndromes in cattle. Four years of real data and over 200 simulated outbreak signals were used to compare pre-processing methods that could remove temporal effects in the data, as well as temporal aberration detection algorithms that provided high sensitivity and specificity. Weekly differencing demonstrated solid performance in removing day-of-week effects, even in series with low daily counts. For aberration detection, the results indicated that no single algorithm showed performance superior to all others across the range of outbreak scenarios simulated. Exponentially weighted moving average charts and Holt-Winters exponential smoothing demonstrated complementary performance, with the latter offering an automated method to adjust to changes in the time series that will likely occur in the future. Shewhart charts provided lower sensitivity but earlier detection in some scenarios. Cumulative sum charts did not appear to add value to the system; however, the poor performance of this algorithm was attributed to characteristics of the data monitored. These findings indicate that automated monitoring aimed at early detection of temporal aberrations will likely be most effective when a range of algorithms are implemented in parallel.
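Among the algorithms compared, the exponentially weighted moving average chart is the simplest to state: z_t = λ·x_t + (1 − λ)·z_{t−1}, with an alarm raised when z_t leaves a band of ±L standard errors around the baseline. A minimal sketch on synthetic daily counts (λ and L here are conventional textbook choices, not the study's tuned values):

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """EWMA control chart: returns smoothed values and a boolean alarm vector."""
    mu, sigma = x.mean(), x.std(ddof=1)      # baseline estimated from the series
    z = np.empty(x.size)
    alarms = np.zeros(x.size, dtype=bool)
    z_prev = mu
    for t, xt in enumerate(x):
        z_prev = lam * xt + (1 - lam) * z_prev
        z[t] = z_prev
        # Time-dependent control limit for the EWMA statistic
        se = sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (t + 1))))
        alarms[t] = abs(z_prev - mu) > L * se
    return z, alarms

rng = np.random.default_rng(5)
counts = rng.poisson(20, 200).astype(float)
counts[150:160] += 12                         # injected outbreak signal
z, alarms = ewma_chart(counts)
print("first alarm at day:", np.argmax(alarms) if alarms.any() else None)
```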
Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick
2018-01-01
When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode) or viewed retrospectively and simultaneously (static mode), rather than experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment, but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model, the adaptive anchoring model (ADAM), to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's predictions for the forecasting and judgment tasks fit the response data better than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task.
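For reference, the simple-exponential-smoothing baseline that ADAM is benchmarked against can be written in a few lines; its forecast for every future step is the last smoothed level (a generic textbook sketch, not the paper's implementation):

```python
def simple_exp_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: level_t = alpha*x_t + (1-alpha)*level_{t-1}.

    Returns the final level, which is the flat forecast for all future steps.
    """
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

history = [12.0, 13.5, 12.8, 14.1, 15.0, 14.6, 15.8]
print("forecast for next event:", simple_exp_smoothing(history))
```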
Topology and Edge Modes in Quantum Critical Chains
NASA Astrophysics Data System (ADS)
Verresen, Ruben; Jones, Nick G.; Pollmann, Frank
2018-02-01
We show that topology can protect exponentially localized, zero energy edge modes at critical points between one-dimensional symmetry-protected topological phases. This is possible even without gapped degrees of freedom in the bulk, in contrast to recent work on edge modes in gapless chains. We present an intuitive picture for the existence of these edge modes in the case of noninteracting spinless fermions with time-reversal symmetry (BDI class of the tenfold way). The stability of this phenomenon relies on a topological invariant defined in terms of a complex function, counting its zeros and poles inside the unit circle. This invariant can prevent two models described by the same conformal field theory (CFT) from being smoothly connected. A full classification of critical phases in the noninteracting BDI class is obtained: each phase is labeled by the central charge of the CFT, c ∈ (1/2)N, and the topological invariant, ω ∈ Z. Moreover, c is determined by the difference in the number of edge modes between the phases neighboring the transition. Numerical simulations show that the topological edge modes of critical chains can be stable in the presence of interactions and disorder.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degroote, M.; Henderson, T. M.; Zhao, J.
We present a similarity transformation theory based on a polynomial form of a particle-hole pair excitation operator. In the weakly correlated limit, this polynomial becomes an exponential, leading to coupled cluster doubles. In the opposite, strongly correlated limit, the polynomial becomes an extended Bessel expansion and yields the projected BCS wavefunction. In between, we interpolate using a single parameter. The effective Hamiltonian is non-hermitian, and this Polynomial Similarity Transformation Theory follows the philosophy of traditional coupled cluster, left-projecting the transformed Hamiltonian onto subspaces of the Hilbert space in which the wave function variance is forced to be zero. Similarly, the interpolation parameter is obtained by minimizing the next residual in the projective hierarchy. We rationalize and demonstrate how and why coupled cluster doubles is ill suited to the strongly correlated limit, whereas the Bessel expansion remains well behaved. The model provides accurate wave functions with energy errors that in its best variant are smaller than 1% across all interaction strengths. The numerical cost is polynomial in system size, and the theory can be straightforwardly applied to any realistic Hamiltonian.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
Palombo, Marco; Gabrielli, Andrea; De Santis, Silvia; Capuani, Silvia
2012-03-01
In this paper, we investigate the image contrast that characterizes anomalous and non-Gaussian diffusion images obtained using the stretched exponential model. This model is based on the introduction of the stretching parameter γ, which quantifies the deviation from mono-exponential decay of the diffusion signal as a function of the b-value. To date, the biophysical substrate underpinning the contrast observed in γ maps, in other words the biophysical interpretation of the γ parameter (or the fractional order derivative in space, the β parameter), is still not fully understood, although it has already been applied to investigate both animal models and the human brain. Due to the ability of γ maps to reflect additional microstructural information which cannot be obtained using diffusion procedures based on Gaussian diffusion, some authors propose this parameter as a measure of diffusion heterogeneity or water compartmentalization in biological tissues. Based on our recent work, we suggest here that the coupling between internal and diffusion gradients produces pseudo-superdiffusion effects which are quantified by the stretched-exponential parameter γ. This means that the image contrast of Mγ maps reflects local magnetic susceptibility differences (Δχ_m), thus highlighting, better than T2* contrast, the interface between compartments characterized by Δχ_m. Thanks to this characteristic, Mγ imaging may represent an interesting tool for developing contrast-enhanced MRI for molecular imaging. The spectroscopic and imaging experiments reported here (performed in controlled micro-bead dispersions) strongly suggest that internal gradients, and as a consequence Δχ_m, are an important factor in fully understanding the source of contrast in anomalous diffusion methods based on a stretched exponential model analysis of diffusion data obtained at varying gradient strengths g.
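The stretched exponential model in question writes the diffusion decay as S(b) = S0·exp(−(b·DDC)^γ), where γ = 1 recovers mono-exponential behavior. A hedged fitting sketch with synthetic signal values (DDC and γ invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(b, S0, DDC, gamma):
    """Stretched-exponential diffusion decay: S(b) = S0 * exp(-(b*DDC)**gamma)."""
    return S0 * np.exp(-(b * DDC) ** gamma)

b_vals = np.linspace(0, 3000, 16)                       # s/mm^2
signal = stretched_exp(b_vals, S0=1.0, DDC=0.0009, gamma=0.75)
signal += np.random.default_rng(6).normal(0, 0.004, b_vals.size)

popt, _ = curve_fit(stretched_exp, b_vals, signal,
                    p0=[1.0, 0.001, 0.9], bounds=([0, 0, 0.1], [2, 0.01, 1.0]))
print("S0=%.2f  DDC=%.5f  gamma=%.2f" % tuple(popt))
```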
An approach for spherical harmonic analysis of non-smooth data
NASA Astrophysics Data System (ADS)
Wang, Hansheng; Wu, Patrick; Wang, Zhiyong
2006-12-01
A method is proposed to evaluate the spherical harmonic coefficients of a global or regional, non-smooth, observable dataset sampled on an equiangular grid. The method is based on an integration strategy using new recursion relations. Because a bilinear function is used to interpolate points within the grid cells, this method is suitable for non-smooth data; the slope of the data may be piecewise continuous, with extreme changes at the boundaries. In order to validate the method, the coefficients of an axisymmetric model are computed, and compared with the derived analytical expressions. Numerical results show that this method is indeed reasonable for non-smooth models, and that the maximum degree for spherical harmonic analysis should be empirically determined by several factors including the model resolution and the degree of non-smoothness in the dataset, and it can be several times larger than the total number of latitudinal grid points. It is also shown that this method is appropriate for the approximate analysis of a smooth dataset. Moreover, this paper provides the program flowchart and an internet address where the FORTRAN code with program specifications are made available.
Growth and mortality of larval Myctophum affine (Myctophidae, Teleostei).
Namiki, C; Katsuragawa, M; Zani-Teixeira, M L
2015-04-01
The growth and mortality rates of Myctophum affine larvae were analysed based on samples collected during the austral summer and winter of 2002 from south-eastern Brazilian waters. The larvae ranged in size from 2.75 to 14.00 mm standard length (L_S). Daily increment counts from 82 sagittal otoliths showed that the age of M. affine ranged from 2 to 28 days. Three models were applied to estimate the growth rate: linear regression, an exponential model and the Laird-Gompertz model. The exponential model best fitted the data, and the L_0 values from the exponential and Laird-Gompertz models were close to the smallest larva reported in the literature (c. 2.5 mm L_S). The average growth rate (0.33 mm per day) was intermediate among lanternfishes. The mortality rate (12%) during the larval period was below average compared with other marine fish species but similar to some epipelagic fishes that occur in the area.
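The exponential growth model that best fitted these data has the form L(t) = L0·e^(g·t). A sketch of recovering L0 and g from age-length pairs (synthetic values chosen to lie in the reported ranges, not the study's otolith data):

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_growth(age, L0, g):
    """Exponential larval growth: standard length L(t) = L0 * exp(g * t)."""
    return L0 * np.exp(g * age)

ages = np.array([2, 5, 8, 12, 16, 20, 24, 28], dtype=float)   # days
lengths = exp_growth(ages, L0=2.5, g=0.06)                    # mm, hypothetical
lengths += np.random.default_rng(7).normal(0, 0.2, ages.size)

(L0_hat, g_hat), _ = curve_fit(exp_growth, ages, lengths, p0=[2.5, 0.05])
print("L0 = %.2f mm, growth rate g = %.3f 1/day" % (L0_hat, g_hat))
```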
1/f oscillations in a model of moth populations oriented by diffusive pheromones
NASA Astrophysics Data System (ADS)
Barbosa, L. A.; Martins, M. L.; Lima, E. R.
2005-01-01
An individual-based model for the population dynamics of Spodoptera frugiperda in a homogeneous environment is proposed. The model involves moths feeding on plants, mating through an anemotaxis search (i.e., oriented by odor dispersed in a current of air), and dying due to resource competition or at a maximum age. As observed in the laboratory, the females release pheromones at exponentially distributed time intervals, and it is assumed that the ranges of the male flights follow a power-law distribution. Computer simulations of the model reveal the central role of the anemotaxis search for the persistence of the moth population. Such stationary populations are exponentially distributed in age, exhibit random temporal fluctuations with a 1/f spectrum, and self-organize into disordered spatial patterns with long-range correlations. In addition, the model results demonstrate that pest control through pheromone mass trapping is effective only if the amounts of pheromone released by the traps decay much more slowly than the exponential distribution for calling females.
The Use of Modeling Approach for Teaching Exponential Functions
NASA Astrophysics Data System (ADS)
Nunes, L. F.; Prates, D. B.; da Silva, J. M.
2017-12-01
This work presents a discussion of the teaching and learning of mathematical content related to the study of exponential functions in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor's program (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly mentioned in the literature, the modelling approach was used as an educational tool to contextualize the teaching-learning process of exponential functions for these students. To this end, some simple models built with the GeoGebra software were used, and Didactic Engineering was adopted as the research methodology to provide a qualitative evaluation of the investigation and its results. As a consequence of this detailed research, some interesting details about the teaching and learning process were observed, discussed and described.
SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, X; Duan, J; Popple, R
2014-06-01
Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of radial dose functions of the new iodine brachytherapy source: Iodine-125 Seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, and these particles move through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in less than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for searching coefficients of bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of the Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
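A minimal sketch of the PSO search described above, fitting a bi-exponential radial dose function g(r) = a1*exp(-m1*r) + a2*exp(-m2*r) to tabulated target values; the swarm settings, bounds, and target data are illustrative assumptions, not the study's published values:

```python
import numpy as np

def biexp(r, c):
    """Bi-exponential radial dose function: g(r) = a1*exp(-m1*r) + a2*exp(-m2*r)."""
    a1, m1, a2, m2 = c
    return a1 * np.exp(-m1 * r) + a2 * np.exp(-m2 * r)

# Hypothetical published g(r) samples at selected radial distances (cm).
r = np.array([0.5, 1, 2, 3, 4, 5, 7, 10], dtype=float)
target = 1.1 * np.exp(-0.35 * r) + 0.2 * np.exp(-0.05 * r)

def fitness(c):
    return np.max(np.abs(biexp(r, c) - target) / target)  # max relative deviation

rng = np.random.default_rng(1)
n, dim = 40, 4
lo, hi = np.array([0.0, 0.0, 0.0, 0.0]), np.array([2.0, 1.0, 2.0, 1.0])
x = rng.uniform(lo, hi, (n, dim))
v = np.zeros((n, dim))
pbest, pcost = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pcost.argmin()].copy()

for gen in range(1500):                      # stop at the maximum generation
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)               # keep particles inside the bounds
    cost = np.array([fitness(p) for p in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    gbest = pbest[pcost.argmin()].copy()

print("coefficients:", gbest, "max deviation:", fitness(gbest))
```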
Event-driven simulations of nonlinear integrate-and-fire neurons.
Tonnelier, Arnaud; Belmabrouk, Hana; Martinez, Dominique
2007-12-01
Event-driven strategies have been used to simulate spiking neural networks exactly. Previous work is limited to linear integrate-and-fire neurons. In this note, we extend event-driven schemes to a class of nonlinear integrate-and-fire models. Results are presented for the quadratic integrate-and-fire model with instantaneous or exponential synaptic currents. Extensions to conductance-based currents and exponential integrate-and-fire neurons are discussed.
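The ingredient that makes exact event-driven simulation possible for the quadratic model is a closed-form next-spike time. Below is a sketch for the generic textbook form dV/dt = V^2 + I with I > 0 (an illustration, not necessarily the paper's exact scheme); in a network with instantaneous synapses, the same formula would simply be re-evaluated after each synaptic event:

```python
import numpy as np

def qif_spike_time(v0, i_ext):
    """Closed-form time to spike (V -> +infinity) for dV/dt = V**2 + I, I > 0.

    The solution V(t) = sqrt(I)*tan(sqrt(I)*t + arctan(V0/sqrt(I)))
    diverges when the tangent's argument reaches pi/2.
    """
    s = np.sqrt(i_ext)
    return (np.pi / 2 - np.arctan(v0 / s)) / s

v_reset = -1.0
t, spikes = 0.0, []
for _ in range(5):                 # generate five spikes, event by event
    t += qif_spike_time(v_reset, i_ext=0.5)
    spikes.append(t)

print("spike times:", np.round(spikes, 3))
```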
A non-Gaussian option pricing model based on Kaniadakis exponential deformation
NASA Astrophysics Data System (ADS)
Moretto, Enrico; Pasquali, Sara; Trivellato, Barbara
2017-09-01
A way to make financial models effective is by letting them represent the so-called "fat tails", i.e., extreme changes in stock prices that are regarded as almost impossible by the standard Gaussian distribution. In this article, the Kaniadakis deformation of the usual exponential function is used to define a random noise source in the dynamics of price processes capable of capturing such real market phenomena.
NASA Astrophysics Data System (ADS)
Fox, J. B.; Thayer, D. W.; Phillips, J. G.
The effect of low dose γ-irradiation on the thiamin content of ground pork was studied in the range of 0-14 kGy at 2°C and at radiation doses from 0.5 to 7 kGy at temperatures of -20, -10, 0, 10 and 20°C. The detailed study at 2°C showed that loss of thiamin was exponential down to 0 kGy. An exponential expression was derived for the effect of radiation dose and temperature of irradiation on thiamin loss, and compared with a previously derived general linear expression. Both models were accurate depictions of the data, but the exponential expression showed a significant decrease in the rate of loss between 0 and -10°C. This is the range over which water in meat freezes, the decrease being due to the immobilization of reactive radiolytic products of water in ice crystals.
On splice site prediction using weight array models: a comparison of smoothing techniques
NASA Astrophysics Data System (ADS)
Taher, Leila; Meinicke, Peter; Morgenstern, Burkhard
2007-11-01
In most eukaryotic genes, protein-coding exons are separated by non-coding introns which are removed from the primary transcript by a process called "splicing". The positions where introns are cut and exons are spliced together are called "splice sites". Thus, computational prediction of splice sites is crucial for gene finding in eukaryotes. Weight array models are a powerful probabilistic approach to splice site detection. Parameters for these models are usually derived from m-tuple frequencies in trusted training data and subsequently smoothed to avoid zero probabilities. In this study we compare three different ways of parameter estimation for m-tuple frequencies, namely (a) non-smoothed probability estimation, (b) standard pseudo counts and (c) a Gaussian smoothing procedure that we recently developed.
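A minimal sketch of option (b), standard pseudo-count smoothing of m-tuple frequencies (the counts and pseudocount value below are hypothetical):

```python
import numpy as np

def smoothed_probs(counts, pseudocount=1.0):
    """Standard pseudo-count (additive) smoothing of m-tuple counts.

    counts: observed m-tuple frequencies at one splice-site position;
    adding a pseudocount to every cell avoids zero probabilities for
    tuples unseen in the training data.
    """
    counts = np.asarray(counts, dtype=float)
    return (counts + pseudocount) / (counts.sum() + pseudocount * counts.size)

# Hypothetical dinucleotide (m = 2) counts at one position, e.g. AA, AC, AG, AT.
counts = np.array([38, 0, 3, 9])
print(smoothed_probs(counts))          # no zero entries after smoothing
print(smoothed_probs(counts).sum())    # probabilities sum to 1
```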
Gravitational lensing by a smoothly variable three-dimensional mass distribution
NASA Technical Reports Server (NTRS)
Lee, Man Hoi; Paczynski, Bohdan
1990-01-01
A smooth three-dimensional mass distribution is approximated by a model with multiple thin screens, with surface mass density varying smoothly on each screen. It is found that 16 screens are sufficient for a good approximation of the three-dimensional distribution of matter. It is also found that in this multiscreen model the distribution of amplifications of single images is dominated by the convergence due to matter within the beam. The shear caused by matter outside the beam has no significant effect. This finding considerably simplifies the modeling of lensing by a smooth three-dimensional mass distribution by effectively reducing the problem to one dimension, as it is sufficient to know the mass distribution along a straight light ray.
NASA Astrophysics Data System (ADS)
Mullet, B.; Segall, P.
2017-12-01
Explosive volcanic eruptions can exhibit abrupt changes in physical behavior. In the most extreme cases, high rates of mass discharge are interspersed with dramatic drops in activity and periods of quiescence. Simple models predict exponential decay in magma chamber pressure, leading to a gradual tapering of eruptive flux. Abrupt changes in eruptive flux therefore indicate that relief of chamber pressure cannot be the only control on the evolution of such eruptions. We present a simplified physics-based model of conduit flow during an explosive volcanic eruption that attempts to predict stress-induced conduit collapse linked to co-eruptive pressure loss. The model couples a simple two-phase (gas-melt) 1-D conduit solution of the continuity and momentum equations with a Mohr-Coulomb failure condition for the conduit wall rock. First-order models of volatile exsolution (i.e. phase mass transfer) and fragmentation are incorporated. The interphase interaction force changes dramatically between flow regimes, so smoothing of this force is critical for realistic results. Reductions in the interphase force lead to significant relative phase velocities, highlighting the deficiency of homogeneous flow models. Lateral gas loss through conduit walls is incorporated using a membrane-diffusion model with depth-dependent wall rock permeability. Rapid eruptive flux results in a decrease of chamber and conduit pressure, which leads to a critical deviatoric stress condition at the conduit wall. Analogous stress distributions have been analyzed for wellbores, where much work has been directed at determining conditions that lead to wellbore failure using Mohr-Coulomb failure theory. We extend this framework to cylindrical volcanic conduits, where large deviatoric stresses can develop co-eruptively, leading to multiple distinct failure regimes depending on principal stress orientations. These failure regimes are categorized and possible implications for conduit flow are discussed, including cessation of the eruption.
Estimation of renal allograft half-life: fact or fiction?
Azancot, M Antonieta; Cantarell, Carme; Perelló, Manel; Torres, Irina B; Serón, Daniel; Moreso, Francesc; Arias, Manuel; Campistol, Josep M; Curto, Jordi; Hernandez, Domingo; Morales, José M; Sanchez-Fructuoso, Ana; Abraira, Victor
2011-09-01
Renal allograft half-life time (t½) is the most straightforward representation of long-term graft survival. Since some statistical models overestimate this parameter, we compare different approaches to evaluate t½. Patients with a 1-year functioning graft transplanted in Spain during 1990, 1994, 1998 and 2002 were included. Exponential, Weibull, gamma, lognormal and log-logistic models censoring the last year of follow-up were evaluated. The goodness of fit of these models was evaluated according to the Cox-Snell residuals, and Akaike's information criterion (AIC) was employed to compare the models. We included 4842 patients. Real t½ in 1990 was 14.2 years. Median t½ (95% confidence interval) in 1990 and 2002 was 15.8 (14.2-17.5) versus 52.6 (35.6-69.5) according to the exponential model (P < 0.001). No differences between 1990 and 2002 were observed when t½ was estimated with the other models. In 1990 and 2002, t½ was 14.0 (13.1-15.0) versus 18.0 (13.7-22.4) according to Weibull, 15.5 (13.9-17.1) versus 19.1 (15.6-22.6) according to gamma, 14.4 (13.3-15.6) versus 18.3 (14.2-22.3) according to the log-logistic and 15.2 (13.8-16.6) versus 18.8 (15.3-22.3) according to the lognormal models. The AIC confirmed that the exponential model had the poorest goodness of fit, while the other models yielded similar results. The exponential model overestimates t½, especially in cohorts of patients with a short follow-up, while any of the other studied models allows a better estimation, even in cohorts with short follow-up.
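For reference, the median survival (t½) implied by two of these parametric models follows in closed form; the parameter values below are illustrative assumptions, not the study's estimates:

```python
import numpy as np

def half_life_exponential(rate):
    """Median survival under S(t) = exp(-rate * t): t1/2 = ln(2) / rate."""
    return np.log(2) / rate

def half_life_weibull(scale, shape):
    """Median survival under S(t) = exp(-(t/scale)**shape): scale * ln(2)**(1/shape)."""
    return scale * np.log(2) ** (1.0 / shape)

print(half_life_exponential(0.045))    # hypothetical exponential rate
print(half_life_weibull(22.0, 1.4))    # hypothetical Weibull scale and shape
# With shape > 1 (increasing hazard), the exponential model, which forces a
# constant hazard, tends to overestimate t1/2 when follow-up is short.
```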
OMFIT Tokamak Profile Data Fitting and Physics Analysis
Logan, N. C.; Grierson, B. A.; Haskey, S. R.; ...
2018-01-22
Here, One Modeling Framework for Integrated Tasks (OMFIT) has been used to develop a consistent tool for interfacing with, mapping, visualizing, and fitting tokamak profile measurements. OMFIT is used to integrate the many diverse diagnostics on multiple tokamak devices into a regular data structure, consistently applying spatial and temporal treatments to each channel of data. Tokamak data are fundamentally time dependent and are treated so from the start, with front-loaded and logic-based manipulations such as filtering based on the identification of edge-localized modes (ELMs) that commonly scatter data. Fitting is general in its approach, and tailorable in its application in order to address physics constraints and handle the multiple spatial and temporal scales involved. Although community standard one-dimensional fitting is supported, including scale length–fitting and fitting polynomial-exponential blends to capture the H-mode pedestal, OMFITprofiles includes two-dimensional (2-D) fitting using bivariate splines or radial basis functions. These 2-D fits produce regular evolutions in time, removing jitter that has historically been smoothed ad hoc in transport applications. Profiles interface directly with a wide variety of models within the OMFIT framework, providing the inputs for TRANSP, kinetic-EFIT 2-D equilibrium, and GPEC three-dimensional equilibrium calculations. The OMFITprofiles tool's rapid and comprehensive analysis of dynamic plasma profiles thus provides the critical link between raw tokamak data and simulations necessary for physics understanding.
Modeling the Role of Dislocation Substructure During Class M and Exponential Creep. Revised
NASA Technical Reports Server (NTRS)
Raj, S. V.; Iskovitz, Ilana Seiden; Freed, A. D.
1995-01-01
The different substructures that form in the power-law and exponential creep regimes for single-phase crystalline materials under various conditions of stress, temperature and strain are reviewed. The microstructure is correlated both qualitatively and quantitatively with power-law and exponential creep, as well as with steady-state and non-steady-state deformation behavior. These observations suggest that creep is influenced by a complex interaction between several elements of the microstructure, such as dislocations, cells and subgrains. The stability of the creep substructure is examined in both of these creep regimes during stress and temperature change experiments. These observations are rationalized on the basis of a phenomenological model, where normal primary creep is interpreted as a series of constant-structure exponential creep rate-stress relationships. The implications of this viewpoint on the magnitude of the stress exponent and steady-state behavior are discussed. A theory is developed to predict the macroscopic creep behavior of a single-phase material using quantitative microstructural data. In this technique, the thermally activated deformation mechanisms proposed by dislocation physics are interlinked with a previously developed multiphase, three-dimensional dislocation substructure creep model. This procedure leads to several coupled differential equations interrelating macroscopic creep plasticity with microstructural evolution.
Kartalis, Nikolaos; Manikis, Georgios C; Loizou, Louiza; Albiin, Nils; Zöllner, Frank G; Del Chiaro, Marco; Marias, Kostas; Papanikolaou, Nikolaos
2016-01-01
To compare two Gaussian diffusion-weighted MRI (DWI) models, mono-exponential and bi-exponential, with the non-Gaussian kurtosis model in patients with pancreatic ductal adenocarcinoma. After written informed consent, 15 consecutive patients with pancreatic ductal adenocarcinoma underwent free-breathing DWI (1.5T; b-values: 0, 50, 150, 200, 300, 600 and 1000 s/mm2). Mean values of the DWI-derived metrics ADC, D, D*, f, K and DK were calculated from multiple regions of interest in all tumours and non-tumorous parenchyma and compared. The area under the curve was determined for all metrics. Mean ADC and DK showed significant differences between tumours and non-tumorous parenchyma (both P < 0.001). The area under the curve for ADC, D, D*, f, K, and DK was 0.77, 0.52, 0.53, 0.62, 0.42, and 0.84, respectively. ADC and DK could differentiate tumours from non-tumorous parenchyma, with the latter showing a higher diagnostic accuracy. Correction for kurtosis effects has the potential to increase the diagnostic accuracy of DWI in patients with pancreatic ductal adenocarcinoma.
NASA Astrophysics Data System (ADS)
Cao, Jinde; Wang, Yanyan
2010-05-01
In this paper, the bi-periodicity issue is discussed for Cohen-Grossberg-type (CG-type) bidirectional associative memory (BAM) neural networks (NNs) with time-varying delays and standard activation functions. It is shown that the model considered in this paper has two periodic orbits located in saturation regions and they are locally exponentially stable. Meanwhile, some conditions are derived to ensure that, in any designated region, the model has a locally exponentially stable or globally exponentially attractive periodic orbit located in it. As a special case of bi-periodicity, some results are also presented for the system with constant external inputs. Finally, four examples are given to illustrate the effectiveness of the obtained results.
NASA Astrophysics Data System (ADS)
Song, Qiankun; Cao, Jinde
2007-05-01
A bidirectional associative memory neural network model with distributed delays is considered. By constructing a new Lyapunov functional, employing the homeomorphism theory, M-matrix theory and the inequality (a ≥ 0, bk ≥ 0, qk > 0 with …, and r > 1), a sufficient condition is obtained to ensure the existence, uniqueness and global exponential stability of the equilibrium point for the model. Moreover, the exponential converging velocity index is estimated, which depends on the delay kernel functions and the system parameters. The results generalize and improve earlier publications, and remove the usual assumption that the activation functions are bounded. Two numerical examples are given to show the effectiveness of the obtained results.
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimation of the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as: progression for >15% volume increase, regression for >15% decrease, and stabilization for ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% in the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5% at 4 months after CK SRS, respectively (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients receiving CK SRS, and it was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled.
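A sketch of one common exponential decay parameterization for such volume curves, V(t) = V_inf + (V0 - V_inf) * exp(-k*t), fitted to hypothetical follow-up volumes (the form and data are illustrative; the study's exact three-point model may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, v_inf, v0, k):
    """Exponential decay toward a plateau: V(t) = V_inf + (V0 - V_inf)*exp(-k*t)."""
    return v_inf + (v0 - v_inf) * np.exp(-k * t)

# Hypothetical follow-up volumes (cm^3) at months 0, 4, 10, 20, 36 post-SRS.
t = np.array([0, 4, 10, 20, 36], dtype=float)
v = np.array([2.10, 1.85, 1.40, 1.15, 1.05])

(v_inf, v0, k), _ = curve_fit(decay, t, v, p0=(1.0, 2.0, 0.1))
print(f"plateau volume = {v_inf:.2f} cm^3, decay rate k = {k:.3f} per month")
```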
Fourier Transforms of Pulses Containing Exponential Leading and Trailing Profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warshaw, S I
2001-07-15
In this monograph we discuss a class of pulse shapes that have exponential rise and fall profiles, and evaluate their Fourier transforms. Such pulses can be used as models for time-varying processes that produce an initial exponential rise and end with the exponential decay of a specified physical quantity. Unipolar examples of such processes include the voltage record of an increasingly rapid charge followed by a damped discharge of a capacitor bank, and the amplitude of an electromagnetic pulse produced by a nuclear explosion. Bipolar examples include acoustic N waves propagating for long distances in the atmosphere that have resulted from explosions in the air, and sonic booms generated by supersonic aircraft. These bipolar pulses have leading and trailing edges that appear to be exponential in character. To the author's knowledge the Fourier transforms of such pulses are not generally well-known or tabulated in Fourier transform compendia, and it is the purpose of this monograph to derive and present these transforms. These Fourier transforms are related to a definite integral of a ratio of exponential functions, whose evaluation we carry out in considerable detail. From this result we derive the Fourier transforms of other related functions. In all Figures showing plots of calculated curves, the actual numbers used for the function parameter values and dependent variables are arbitrary and non-dimensional, and are not identified with any particular physical phenomenon or model.
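As a worked illustration of the class (the simplest unipolar member, a standard result rather than a reproduction of the monograph's derivation, using the convention F(ω) = ∫ f(t) e^{-iωt} dt):

```latex
% Unipolar pulse with exponential rise (t < 0) and exponential fall (t >= 0):
f(t) =
\begin{cases}
  e^{\alpha t},  & t < 0, \\
  e^{-\beta t},  & t \ge 0,
\end{cases}
\qquad \alpha,\beta > 0,
\qquad
F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt
          = \frac{1}{\alpha - i\omega} + \frac{1}{\beta + i\omega}.
```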
Theory of many-body localization in periodically driven systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abanin, Dmitry A., E-mail: dabanin@gmail.com; De Roeck, Wojciech; Huveneers, François
We present a theory of periodically driven, many-body localized (MBL) systems. We argue that MBL persists under periodic driving at high enough driving frequency: The Floquet operator (evolution operator over one driving period) can be represented as an exponential of an effective time-independent Hamiltonian, which is a sum of quasi-local terms and is itself fully MBL. We derive this result by constructing a sequence of canonical transformations to remove the time-dependence from the original Hamiltonian. When the driving evolves smoothly in time, the theory can be sharpened by estimating the probability of adiabatic Landau–Zener transitions at many-body level crossings. In all cases, we argue that there is delocalization at sufficiently low frequency. We propose a phase diagram of driven MBL systems.
Self-excitation of a nonlinear scalar field in a random medium
Zeldovich, Ya. B.; Molchanov, S. A.; Ruzmaikin, A. A.; Sokoloff, D. D.
1987-01-01
We discuss the evolution in time of a scalar field under the influence of a random potential and diffusion. The cases of a short-correlation in time and of stationary potentials are considered. In a linear approximation and for sufficiently weak diffusion, the statistical moments of the field grow exponentially in time at growth rates that progressively increase with the order of the moment; this indicates the intermittent nature of the field. Nonlinearity halts this growth and in some cases can destroy the intermittency. However, in many nonlinear situations the intermittency is preserved: high, persistent peaks of the field exist against the background of a smooth field distribution. These widely spaced peaks may make a major contribution to the average characteristics of the field. PMID:16593872
A new formation control of multiple underactuated surface vessels
NASA Astrophysics Data System (ADS)
Xie, Wenjing; Ma, Baoli; Fernando, Tyrone; Iu, Herbert Ho-Ching
2018-05-01
This work investigates a new formation control problem of multiple underactuated surface vessels. The controller design is based on input-output linearisation technique, graph theory, consensus idea and some nonlinear tools. The proposed smooth time-varying distributed control law guarantees that the multiple underactuated surface vessels globally exponentially converge to some desired geometric shape, which is especially centred at the initial average position of vessels. Furthermore, the stability analysis of zero dynamics proves that the orientations of vessels tend to some constants that are dependent on the initial values of vessels, and the velocities and control inputs of the vessels decay to zero. All the results are obtained under the communication scenarios of static directed balanced graph with a spanning tree. Effectiveness of the proposed distributed control scheme is demonstrated using a simulation example.
Bell, C; Paterson, D H; Kowalchuk, J M; Padilla, J; Cunningham, D A
2001-09-01
We compared estimates for the phase 2 time constant (τ) of oxygen uptake (VO2) during moderate- and heavy-intensity exercise, and the slow component of VO2 during heavy-intensity exercise, using previously published exponential models. Estimates for τ and the slow component differed (P < 0.05) among models. For moderate-intensity exercise, a two-component exponential model, or a mono-exponential model fitted from 20 s to 3 min, were best. For heavy-intensity exercise, a three-component model fitted throughout the entire 6 min bout of exercise, or a two-component model fitted from 20 s, were best. When the time delays for the two- and three-component models were equal, the best statistical fit was obtained; however, this model produced an inappropriately low ΔVO2/ΔWR (WR, work rate) for the projected phase 2 steady state, and the estimate of the phase 2 τ was shortened compared with other models. The slow component was quantified as the difference between VO2 at end-exercise (6 min) and at 3 min (ΔVO2(6-3 min); 259 ml x min(-1)), and also using the phase 3 amplitude terms (truncated to end-exercise) from exponential fits (409-833 ml x min(-1)). Onset of the slow component was identified by the phase 3 time delay parameter as being delayed approximately 2 min (vs. the arbitrary 3 min). Using this delay, ΔVO2(6-2 min) was approximately 400 ml x min(-1). Valid, consistent methods to estimate τ and the slow component in exercise are needed to advance physiological understanding.
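A sketch of the delayed mono-exponential (phase 2) fit described above, applied from 20 s onward to exclude the cardiodynamic phase; the data, bin width, and starting values are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def phase2(t, baseline, amp, td, tau):
    """Mono-exponential phase 2 response with time delay td:
    VO2(t) = baseline + amp * (1 - exp(-(t - td)/tau)) for t >= td."""
    return baseline + amp * (1 - np.exp(-(np.maximum(t, td) - td) / tau))

# Hypothetical breath-by-breath VO2 (ml/min) averaged into 5-s bins,
# fitted from 20 s onward to exclude the phase 1 portion of the response.
t = np.arange(20, 185, 5, dtype=float)
rng = np.random.default_rng(2)
vo2 = 900 + 1100 * (1 - np.exp(-(t - 15) / 28)) + rng.normal(0, 25, t.size)

p, _ = curve_fit(phase2, t, vo2, p0=(900, 1000, 15, 30))
print(f"tau = {p[3]:.1f} s, amplitude = {p[1]:.0f} ml/min")
```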
2015-01-01
Many commonly used coarse-grained models for proteins are based on simplified interaction sites and consequently may suffer from significant limitations, such as the inability to properly model protein secondary structure without the addition of restraints. Recent work on a benzene fluid (Lettieri, S.; Zuckerman, D. M. J. Comput. Chem. 2012, 33, 268−275) suggested an alternative strategy of tabulating and smoothing fully atomistic orientation-dependent interactions among rigid molecules or fragments. Here we report our initial efforts to apply this approach to the polar and covalent interactions intrinsic to polypeptides. We divide proteins into nearly rigid fragments, construct distance and orientation-dependent tables of the atomistic interaction energies between those fragments, and apply potential energy smoothing techniques to those tables. The amount of smoothing can be adjusted to give coarse-grained models that range from the underlying atomistic force field all the way to a bead-like coarse-grained model. For a moderate amount of smoothing, the method is able to preserve about 70–90% of the α-helical structure while providing a factor of 3–10 improvement in sampling per unit computation time (depending on how sampling is measured). For a greater amount of smoothing, multiple folding–unfolding transitions of the peptide were observed, along with a factor of 10–100 improvement in sampling per unit computation time, although the time spent in the unfolded state was increased compared with less smoothed simulations. For a β hairpin, secondary structure is also preserved, albeit for a narrower range of the smoothing parameter and, consequently, for a more modest improvement in sampling. We have also applied the new method in a "resolution exchange" setting, in which each replica runs a Monte Carlo simulation with a different degree of smoothing. We obtain exchange rates that compare favorably to our previous efforts at resolution exchange (Lyman, E.; Zuckerman, D. M. J. Chem. Theory Comput. 2006, 2, 656−666). PMID:25400525
Penalized spline estimation for functional coefficient regression models.
Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan
2010-04-01
The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using a mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning a different penalty λ to each. We demonstrate the proposed approach by both simulation examples and a real data application.
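The ridge-shrinkage mechanics of P-spline smoothing can be sketched in its simplest special case, an identity basis with a difference penalty (the Whittaker smoother); a full P-spline would replace the identity with a B-spline design matrix, but the role of λ is the same:

```python
import numpy as np

def whittaker_smooth(y, lam=10.0, order=2):
    """Discrete penalized smoother: minimize ||y - z||^2 + lam*||D z||^2,
    where D is the order-th difference matrix. Larger lam gives a
    smoother (more heavily shrunk) estimate, as with the P-spline lambda."""
    n = y.size
    d = np.diff(np.eye(n), n=order, axis=0)      # difference penalty matrix
    return np.linalg.solve(np.eye(n) + lam * d.T @ d, y)

rng = np.random.default_rng(3)
x = np.linspace(0, 4 * np.pi, 200)
y = np.sin(x) + rng.normal(0, 0.3, x.size)
z = whittaker_smooth(y, lam=50.0)
print(np.round(z[:5], 3))
```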
Liu, Gui-Song; Guo, Hao-Song; Pan, Tao; Wang, Ji-Hua; Cao, Gan
2014-10-01
Based on Savitzky-Golay (SG) smoothing screening, principal component analysis (PCA) combined separately with supervised linear discriminant analysis (LDA) and unsupervised hierarchical clustering analysis (HCA) was used for non-destructive visible and near-infrared (Vis-NIR) detection for breed screening of transgenic sugarcane. A random and stability-dependent framework of calibration, prediction, and validation was proposed. A total of 456 samples of sugarcane leaves in the elongating stage were collected from the field, composed of 306 transgenic (positive) samples containing the Bt and Bar genes and 150 non-transgenic (negative) samples. A total of 156 samples (50 negative and 106 positive) were randomly selected as the validation set; the remaining samples (100 negative and 200 positive, 300 samples in total) were used as the modeling set, and the modeling set was then subdivided into calibration (50 negative and 100 positive, 150 samples in total) and prediction sets (50 negative and 100 positive, 150 samples in total) 50 times. The number of SG smoothing points was expanded, while some higher-derivative modes were removed because of their small absolute values, leaving a total of 264 smoothing modes for screening. The pairwise combinations of the first three principal components were used, and the optimal combination of principal components was selected according to the model effect. Based on all divisions of calibration and prediction sets and all SG smoothing modes, the SG-PCA-LDA and SG-PCA-HCA models were established, and the model parameters were optimized based on the average prediction effect over all divisions to ensure modeling stability. Finally, model validation was performed on the validation set. With SG smoothing, the modeling accuracy and stability of PCA-LDA and PCA-HCA were significantly improved. For the optimal SG-PCA-LDA model, the recognition rates of positive and negative validation samples were 94.3% and 96.0%, respectively; for the optimal SG-PCA-HCA model, they were 92.5% and 98.0%. Vis-NIR spectroscopic pattern recognition combined with SG smoothing could be used for accurate recognition of transgenic sugarcane leaves, and provides a convenient screening method for transgenic sugarcane breeding.
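A sketch of one SG-PCA-LDA pipeline step using common scipy/scikit-learn calls; the spectra below are random placeholders, and the (window, order, derivative) mode is just one of the many screened in the study:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Placeholder Vis-NIR spectra: rows = leaf samples, columns = wavelengths;
# labels: 1 = transgenic (positive), 0 = non-transgenic (negative).
rng = np.random.default_rng(4)
spectra = rng.normal(size=(300, 600))
labels = rng.integers(0, 2, 300)

# One SG smoothing mode: 11-point window, 2nd-order polynomial, 1st derivative.
smoothed = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)

scores = PCA(n_components=3).fit_transform(smoothed)   # first three PCs
acc = cross_val_score(LinearDiscriminantAnalysis(), scores, labels, cv=5)
print("cross-validated recognition rate:", acc.mean())
```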
The Dark Side of the Moebius Strip.
ERIC Educational Resources Information Center
Schwarz, Gideon E.
1990-01-01
Discussed are various models proposed for the Moebius strip. Included are a discussion of a smooth flat model and two smooth flat algebraic models, some results concerning the shortest Moebius strip, the Moebius strip of least elastic energy, and some observations on real-world Moebius strips. (KR)
Johnson, Ian R.; Thornley, John H. M.; Frantz, Jonathan M.; Bugbee, Bruce
2010-01-01
Background and Aims: The distribution of photosynthetic enzymes, or nitrogen, through the canopy affects canopy photosynthesis, as well as plant quality and nitrogen demand. Most canopy photosynthesis models assume an exponential distribution of nitrogen, or protein, through the canopy, although this is rarely consistent with experimental observation. Previous optimization schemes to derive the nitrogen distribution through the canopy generally focus on the distribution of a fixed amount of total nitrogen, which fails to account for the variation in both the actual quantity of nitrogen in response to environmental conditions and the interaction of photosynthesis and respiration at similar levels of complexity. Model: A model of canopy photosynthesis is presented for C3 and C4 canopies that considers a balanced approach between photosynthesis and respiration as well as plant carbon partitioning. Protein distribution is related to irradiance in the canopy by a flexible equation for which the exponential distribution is a special case. The model is designed to be simple to parameterize for crop, pasture and ecosystem studies. The amount and distribution of protein that maximizes canopy net photosynthesis is calculated. Key Results: The optimum protein distribution is not exponential, but is quite linear near the top of the canopy, which is consistent with experimental observations. The overall concentration within the canopy is dependent on environmental conditions, including the distribution of direct and diffuse components of irradiance. Conclusions: The widely used exponential distribution of nitrogen or protein through the canopy is generally inappropriate. The model derives the optimum distribution with characteristics that are consistent with observation, so overcoming limitations of using the exponential distribution. Although canopies may not always operate at an optimum, optimization analysis provides valuable insight into plant acclimation to environmental conditions. Protein distribution has implications for the prediction of carbon assimilation, plant quality and nitrogen demand. PMID:20861273
Time-domain electromagnetic soundings collected in Dawson County, Nebraska, 2007-09
Payne, Jason; Teeple, Andrew
2011-01-01
Between April 2007 and November 2009, the U.S. Geological Survey, in cooperation with the Central Platte Natural Resources District, collected time-domain electromagnetic (TDEM) soundings at 14 locations in Dawson County, Nebraska. The TDEM soundings provide information pertaining to the hydrogeology at each of 23 sites at the 14 locations; 30 TDEM surface geophysical soundings were collected at the 14 locations to develop smooth and layered-earth resistivity models of the subsurface at each site. The soundings yield estimates of subsurface electrical resistivity; variations in subsurface electrical resistivity can be correlated with hydrogeologic and stratigraphic units. Results from each sounding were used to calculate resistivity to depths of approximately 90-130 meters (depending on loop size) below the land surface. Geonics Protem 47 and 57 systems, as well as the Alpha Geoscience TerraTEM, were used to collect the TDEM soundings (voltage data from which resistivity is calculated). For each sounding, voltage data were averaged and evaluated statistically before inversion (inverse modeling). Inverse modeling is the process of creating an estimate of the true distribution of subsurface resistivity from the measured apparent resistivity obtained from TDEM soundings. Smooth and layered-earth models were generated for each sounding. A smooth model is a vertical delineation of calculated apparent resistivity that represents a non-unique estimate of the true resistivity. Ridge regression (Interpex Limited, 1996) was used by the inversion software in a series of iterations to create a smooth model consisting of 24-30 layers for each sounding site. Layered-earth models were then generated based on the results of the smooth modeling. The layered-earth models are simplified (generally 1 to 6 layers) to represent geologic units with depth. Throughout the area, the layered-earth models range from 2 to 4 layers, depending on observed inflections in the raw data and smooth model inversions. The TDEM data collected were considered good on the basis of root mean square errors calculated after inversion modeling, comparisons with borehole geophysical logging, and repeatability.
Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping.
Hampton, Kristen H; Serre, Marc L; Gesink, Dionne C; Pilcher, Christopher D; Miller, William C
2011-10-06
Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset.
Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.
2016-01-01
We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enables robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM) and particular attention has been paid to model the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
Exponential inflation with F (R ) gravity
NASA Astrophysics Data System (ADS)
Oikonomou, V. K.
2018-03-01
In this paper, we shall consider an exponential inflationary model in the context of vacuum F (R ) gravity. By using well-known reconstruction techniques, we shall investigate which F (R ) gravity can realize the exponential inflation scenario at leading order in terms of the scalar curvature, and we shall calculate the slow-roll indices and the corresponding observational indices in the context of slow-roll inflation. We also provide some general formulas for the slow-roll and the corresponding observational indices in terms of the e-foldings number. In addition, for the calculation of the slow-roll and observational indices, we shall use quite general formulas which do not require the assumption that all the slow-roll indices are much smaller than unity. Finally, we investigate the phenomenological viability of the model by comparing it with the latest Planck and BICEP2/Keck-Array observational data. As we demonstrate, the model is compatible with the current observational data for a wide range of the free parameters of the model.
NASA Astrophysics Data System (ADS)
Zhang, Fode; Shi, Yimin; Wang, Ruibing
2017-02-01
In the information geometry suggested by Amari (1985) and Amari et al. (1987), a parametric statistical model can be regarded as a differentiable manifold with the parameter space as a coordinate system. Noting that the q-exponential distribution plays an important role in Tsallis statistics (see Tsallis, 2009), this paper investigates the geometry of the q-exponential distribution with dependent competing risks and accelerated life testing (ALT). A copula function based on the q-exponential function, which can be considered a generalized Gumbel copula, is discussed to illustrate the dependence structure of the random variables. Employing two iterative algorithms, simulation results are given to compare the performance of the estimations and the levels of association under different hybrid progressive censoring schemes (HPCSs).
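For reference, the q-exponential underlying both the distribution and the copula above is the standard Tsallis deformation (a textbook definition, not the paper's estimation code):

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential: exp_q(x) = [1 + (1-q)*x]_+ ** (1/(1-q)),
    recovering the ordinary exponential in the limit q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)   # [.]_+ cutoff at zero
    return base ** (1.0 / (1.0 - q))

x = np.linspace(-2.0, 1.5, 8)
print(q_exponential(x, 1.0))    # ordinary exponential
print(q_exponential(x, 1.5))    # heavier tail for q > 1
```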
Hypersurface Homogeneous Cosmological Model in Modified Theory of Gravitation
NASA Astrophysics Data System (ADS)
Katore, S. D.; Hatkar, S. P.; Baxi, R. J.
2016-12-01
We study a hypersurface homogeneous space-time in the framework of the f (R, T) theory of gravitation in the presence of a perfect fluid. Exact solutions of field equations are obtained for exponential and power law volumetric expansions. We also solve the field equations by assuming the proportionality relation between the shear scalar (σ ) and the expansion scalar (θ ). It is observed that in the exponential model, the universe approaches isotropy at large time (late universe). The investigated model is notably accelerating and expanding. The physical and geometrical properties of the investigated model are also discussed.
Performance and state-space analyses of systems using Petri nets
NASA Technical Reports Server (NTRS)
Watson, James Francis, III
1992-01-01
The goal of any modeling methodology is to develop a mathematical description of a system that is accurate in its representation and also permits analysis of structural and/or performance properties. Inherently, trade-offs exist between the level of detail in the model and the ease with which analysis can be performed. Petri nets (PN's), a highly graphical modeling methodology for Discrete Event Dynamic Systems, permit representation of shared resources, finite capacities, conflict, synchronization, concurrency, and timing between state changes. By restricting the state transition time delays to the family of exponential density functions, Markov chain analysis of performance problems is possible. One major drawback of PN's is the tendency for the state space to grow rapidly (exponential complexity) compared to increases in the PN constructs. It is the state space, or the Markov chain obtained from it, that is needed in the solution of many problems. The theory of state-space size estimation for PN's is introduced. The problem of state-space size estimation is defined, its complexities are examined, and estimation algorithms are developed. Both top-down and bottom-up approaches are pursued, and the advantages and disadvantages of each are described. Additionally, the author's research in non-exponential transition modeling for PN's is discussed. An algorithm for approximating non-exponential transitions is developed. Since only basic PN constructs are used in the approximation, theory already developed for PN's remains applicable. Comparison to results from entropy theory shows the transition performance is close to the theoretical optimum. Inclusion of non-exponential transition approximations improves performance results at the expense of increased state-space size. The state-space size estimation theory provides insight and algorithms for evaluating this trade-off.
Chen, Bo-Ching; Lai, Hung-Yu; Juang, Kai-Wei
2012-06-01
To better understand the ability of switchgrass (Panicum virgatum L.), a perennial grass often relegated to marginal agricultural areas with minimal inputs, to remove cadmium, chromium, and zinc by phytoextraction from contaminated sites, the relationship between plant metal content and biomass yield is expressed in different models to predict the amount of metals switchgrass can extract. These models are reliable in assessing the use of switchgrass for phytoremediation of heavy-metal-contaminated sites. In the present study, linear and exponential decay models are more suitable for presenting the relationship between plant cadmium and dry weight. The maximum extractions of cadmium using switchgrass, as predicted by the linear and exponential decay models, approached 40 and 34 μg pot(-1), respectively. The log normal model was superior in predicting the relationship between plant chromium and dry weight. The predicted maximum extraction of chromium by switchgrass was about 56 μg pot(-1). In addition, the exponential decay and log normal models were better than the linear model in predicting the relationship between plant zinc and dry weight. The maximum extractions of zinc by switchgrass, as predicted by the exponential decay and log normal models, were about 358 and 254 μg pot(-1), respectively. To meet the maximum removal of Cd, Cr, and Zn, one can adopt the optimal timing of harvest as plant Cd, Cr, and Zn approach 450 and 526 mg kg(-1), 266 mg kg(-1), and 3022 and 5000 mg kg(-1), respectively. Due to the well-known agronomic characteristics of cultivation and the high biomass production of switchgrass, it is practicable to use switchgrass for the phytoextraction of heavy metals in situ. Copyright © 2012 Elsevier Inc. All rights reserved.
Predictive modeling of slope deposits and comparisons of two small areas in Northern Germany
NASA Astrophysics Data System (ADS)
Shary, Peter A.; Sharaya, Larisa S.; Mitusov, Andrew V.
2017-08-01
Methods for correct quantitative comparison of several terrains are important in the development and use of quantitative landscape evolution models, and they require the introduction of specific modeling parameters. We introduce such parameters and compare two small terrains with respect to the slope-valley link for the description of slope deposits (colluvium) in them. We show that colluvium accumulation in small areas cannot be described by linear models and thus introduce non-linear models. Two small areas, Perdoel (0.29 ha) and Bornhöved (3.2 ha), are studied. Slope deposits in both are mainly in dry valleys, with a total thickness Mtotal of up to 2.0 m in Perdoel and up to 1.2 m in Bornhöved. Parent materials are mainly Pleistocene sands aged 30 kyr BP. Exponential models of multiple regression that use a 1-m LiDAR DEM (digital elevation model) explained 70-93% of the spatial variability in Mtotal. Parameters DH12 and DV12 of horizontal and vertical distances are introduced that permit characterization and comparison of the conditions of colluvium formation for various terrains. The study areas differ by a factor of 3.7 in the parameter DH12, which describes the horizontal distance from thalwegs at which Mtotal diminishes 2.72 times. DH12 is greater in Bornhöved (29.7 m) than in Perdoel (8.12 m). We relate this difference in DH12 to the distinction between types of the slope-valley link: a regional type, if the catchment area of a region outside a given small area plays an important role, and a local type, when accumulation of colluvium from valley banks within a small area is of more importance. We argue that the slope-valley link is regional in Perdoel and local in Bornhöved. Peaks of colluvium thickness were found on the thalwegs of three studied valleys, both by direct measurements in a trench and from model surfaces of Mtotal. A hypothesis on the formation mechanism of such peaks is discussed. The parameter DV12 describes the vertical distance from a peak of colluvium thickness along the valley bottom at which Mtotal diminishes 2.72 times; values of this parameter differ by a factor of 1.4 between the study areas. DV12 is greater in Perdoel (3.0 m) than in Bornhöved (2.1 m), indicating sharper peaks of Mtotal in Bornhöved. Exponential models allow construction of predictive maps of buried Pleistocene surfaces for both terrains and calculation of colluvium volumes with an error of 4.2% for Perdoel and 7.1% for Bornhöved. Comparisons of buried and present surfaces showed that the latter are more smoothed, most strongly in valleys where flow branching is increased.
Single-arm phase II trial design under parametric cure models.
Wu, Jianrong
2015-01-01
The current practice of designing single-arm phase II survival trials is limited under the exponential model. Trial design under the exponential model may not be appropriate when a portion of patients are cured. There is no literature available for designing single-arm phase II trials under the parametric cure model. In this paper, a test statistic is proposed, and a sample size formula is derived for designing single-arm phase II trials under a class of parametric cure models. Extensive simulations showed that the proposed test and sample size formula perform very well under different scenarios. Copyright © 2015 John Wiley & Sons, Ltd.
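One common parametric cure model is the mixture cure model with exponential latency; a sketch of its survival function follows (parameter values are illustrative assumptions, not the paper's design inputs):

```python
import numpy as np

def cure_model_survival(t, cure_rate, rate):
    """Mixture cure model with exponential latency:
    S(t) = pi + (1 - pi) * exp(-rate * t), so S(t) -> pi as t -> infinity."""
    return cure_rate + (1 - cure_rate) * np.exp(-rate * t)

t = np.linspace(0, 10, 6)
print(cure_model_survival(t, cure_rate=0.3, rate=0.5))
# Unlike the plain exponential model, survival plateaus at the cure
# fraction (0.3 here) instead of decaying to zero, which is why the
# exponential design can be inappropriate when some patients are cured.
```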
Manikis, Georgios C; Marias, Kostas; Lambregts, Doenja M J; Nikiforaki, Katerina; van Heeswijk, Miriam M; Bakers, Frans C H; Beets-Tan, Regina G H; Papanikolaou, Nikolaos
2017-01-01
The purpose of this study was to compare the performance of four diffusion models, mono- and bi-exponential, both Gaussian and non-Gaussian, in diffusion-weighted imaging of rectal cancer. Nineteen patients with rectal adenocarcinoma underwent MRI examination of the rectum before chemoradiation therapy, including a seven-b-value diffusion sequence (0, 25, 50, 100, 500, 1000 and 2000 s/mm²) on a 1.5 T scanner. Four diffusion models, mono- and bi-exponential Gaussian (MG and BG) and non-Gaussian (MNG and BNG), were applied to whole-tumor volumes of interest. Two statistical criteria were used to assess their fitting performance: the adjusted R² and the root mean square error (RMSE). To decide which model better characterizes rectal cancer, model selection relied on the Akaike Information Criterion (AIC) and the F-ratio. All candidate models achieved a good fitting performance, with the two most complex models, the BG and the BNG, exhibiting the best fits. However, both model-selection criteria indicated that the MG model performed better than any other model. In particular, using AIC weights and the F-ratio, the pixel-based analysis demonstrated that tumor areas were better described by the simplest MG model over an average area of 53% and 33%, respectively. Non-Gaussian behavior was exhibited over an average area of 37% according to the F-ratio, and 7% using AIC weights. However, the distributions of the pixels best fitted by each of the four models suggest that MG failed to perform better than any other model in all patients and over the overall tumor area. No single diffusion model evaluated herein could accurately describe rectal tumours. These findings can probably be explained on the basis of increased tumour heterogeneity, where areas with high vascularity could be fitted better with bi-exponential models, and areas with necrosis would mostly follow mono-exponential behavior.
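As an illustration of the model-selection step described above, the sketch below fits mono- and bi-exponential signal models to a hypothetical voxel-averaged signal at the quoted b-values and compares them by a Gaussian-error AIC. The signal values, starting guesses, and bounds are invented for the example, not taken from the study.

```python
# Sketch: compare mono- vs bi-exponential diffusion models by AIC.
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 25, 50, 100, 500, 1000, 2000.0])          # s/mm^2, per abstract
S = np.array([1.00, 0.97, 0.94, 0.90, 0.62, 0.40, 0.18])   # normalized signal (made up)

def mono(b, S0, D):
    return S0 * np.exp(-b * D)

def bi(b, S0, f, Dfast, Dslow):
    return S0 * (f * np.exp(-b * Dfast) + (1 - f) * np.exp(-b * Dslow))

def aic(y, yhat, k):
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k   # Gaussian-error AIC, up to a constant

p1, _ = curve_fit(mono, b, S, p0=[1.0, 1e-3])
p2, _ = curve_fit(bi, b, S, p0=[1.0, 0.3, 5e-3, 5e-4],
                  bounds=([0, 0, 0, 0], [2, 1, 1, 1]))
print("AIC mono:", aic(S, mono(b, *p1), 2))
print("AIC bi:  ", aic(S, bi(b, *p2), 4))
```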
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power-law-type efflux terms. Iterative calculation using this method was found to converge readily, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to the true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
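The paper's exact S-system formulation is not reproduced in the abstract; the sketch below only illustrates the general idea under stated assumptions, smoothing a single metabolite time course with a constant-influx, power-law-efflux equation dX/dt = a − b·X^g fitted by least squares. The data and initial guesses are synthetic.

```python
# Sketch: smooth one metabolite time course with dX/dt = a - b*x**g.
# All data and parameter guesses are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

t_obs = np.linspace(0, 10, 21)
x_obs = 2.0 - 1.5 * np.exp(-0.6 * t_obs) + np.random.normal(0, 0.03, t_obs.size)

def model(t, a, b, g, x0):
    sol = solve_ivp(lambda tt, x: a - b * x**g, (0, t[-1]), [x0],
                    t_eval=t, rtol=1e-8)
    return sol.y[0]

popt, _ = curve_fit(model, t_obs, x_obs, p0=[1.0, 0.5, 1.0, 0.6],
                    bounds=(0, [5, 5, 3, 3]))
a, b, g, x0 = popt
smooth = model(t_obs, *popt)      # smoothing curve satisfying the mass balance
slopes = a - b * smooth**g        # slopes usable for downstream estimation
```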
NASA Astrophysics Data System (ADS)
H.E.S.S. Collaboration; Abdalla, H.; Abramowski, A.; Aharonian, F.; Ait Benkhali, F.; Akhperjanian, A. G.; Angüner, E. O.; Arakawa, M.; Arrieta, M.; Aubert, P.; Backes, M.; Balzer, A.; Barnard, M.; Becherini, Y.; Becker Tjus, J.; Berge, D.; Bernhard, S.; Bernlöhr, K.; Blackwell, R.; Böttcher, M.; Boisson, C.; Bolmont, J.; Bordas, P.; Bregeon, J.; Brun, F.; Brun, P.; Bryan, M.; Büchele, M.; Bulik, T.; Capasso, M.; Carr, J.; Casanova, S.; Cerruti, M.; Chakraborty, N.; Chalme-Calvet, R.; Chaves, R. C. G.; Chen, A.; Chevalier, J.; Chrétien, M.; Coffaro, M.; Colafrancesco, S.; Cologna, G.; Condon, B.; Conrad, J.; Cui, Y.; Davids, I. D.; Decock, J.; Degrange, B.; Deil, C.; Devin, J.; deWilt, P.; Dirson, L.; Djannati-Ataï, A.; Domainko, W.; Donath, A.; Drury, L. O.'C.; Dutson, K.; Dyks, J.; Edwards, T.; Egberts, K.; Eger, P.; Ernenwein, J.-P.; Eschbach, S.; Farnier, C.; Fegan, S.; Fernandes, M. V.; Fiasson, A.; Fontaine, G.; Förster, A.; Funk, S.; Füßling, M.; Gabici, S.; Gajdus, M.; Gallant, Y. A.; Garrigoux, T.; Giavitto, G.; Giebels, B.; Glicenstein, J. F.; Gottschall, D.; Goyal, A.; Grondin, M.-H.; Hahn, J.; Haupt, M.; Hawkes, J.; Heinzelmann, G.; Henri, G.; Hermann, G.; Hervet, O.; Hinton, J. A.; Hofmann, W.; Hoischen, C.; Holler, M.; Horns, D.; Ivascenko, A.; Iwasaki, H.; Jacholkowska, A.; Jamrozy, M.; Janiak, M.; Jankowsky, D.; Jankowsky, F.; Jingo, M.; Jogler, T.; Jouvin, L.; Jung-Richardt, I.; Kastendieck, M. A.; Katarzyński, K.; Katsuragawa, M.; Katz, U.; Kerszberg, D.; Khangulyan, D.; Khélifi, B.; Kieffer, M.; King, J.; Klepser, S.; Klochkov, D.; Kluźniak, W.; Kolitzus, D.; Komin, Nu.; Krakau, S.; Kraus, M.; Krüger, P. P.; Laffon, H.; Lamanna, G.; Lau, J.; Lees, J.-P.; Lefaucheur, J.; Lefranc, V.; Lemière, A.; Lemoine-Goumard, M.; Lenain, J.-P.; Leser, E.; Lohse, T.; Lorentz, M.; Liu, R.; López-Coto, R.; Lypova, I.; Marandon, V.; Marcowith, A.; Mariaud, C.; Marx, R.; Maurin, G.; Maxted, N.; Mayer, M.; Meintjes, P. J.; Meyer, M.; Mitchell, A. M. W.; Moderski, R.; Mohamed, M.; Mohrmann, L.; Morå, K.; Moulin, E.; Murach, T.; Nakashima, S.; de Naurois, M.; Niederwanger, F.; Niemiec, J.; Oakes, L.; O'Brien, P.; Odaka, H.; Öttl, S.; Ohm, S.; Ostrowski, M.; Oya, I.; Padovani, M.; Panter, M.; Parsons, R. D.; Paz Arribas, M.; Pekeur, N. W.; Pelletier, G.; Perennes, C.; Petrucci, P.-O.; Peyaud, B.; Piel, Q.; Pita, S.; Poon, H.; Prokhorov, D.; Prokoph, H.; Pühlhofer, G.; Punch, M.; Quirrenbach, A.; Raab, S.; Reimer, A.; Reimer, O.; Renaud, M.; de los Reyes, R.; Richter, S.; Rieger, F.; Romoli, C.; Rowell, G.; Rudak, B.; Rulten, C. B.; Sahakian, V.; Saito, S.; Salek, D.; Sanchez, D. A.; Santangelo, A.; Sasaki, M.; Schlickeiser, R.; Schüssler, F.; Schulz, A.; Schwanke, U.; Schwemmer, S.; Seglar-Arroyo, M.; Settimo, M.; Seyffert, A. S.; Shafi, N.; Shilon, I.; Simoni, R.; Sol, H.; Spanier, F.; Spengler, G.; Spies, F.; Stawarz, Ł.; Steenkamp, R.; Stegmann, C.; Stycz, K.; Sushch, I.; Takahashi, T.; Tavernet, J.-P.; Tavernier, T.; Taylor, A. M.; Terrier, R.; Tibaldo, L.; Tiziani, D.; Tluczykont, M.; Trichard, C.; Tsuji, N.; Tuffs, R.; Uchiyama, Y.; van der Walt, D. J.; van Eldik, C.; van Rensburg, C.; van Soelen, B.; Vasileiadis, G.; Veh, J.; Venter, C.; Viana, A.; Vincent, P.; Vink, J.; Voisin, F.; Völk, H. J.; Vuillaume, T.; Wadiasingh, Z.; Wagner, S. J.; Wagner, P.; Wagner, R. M.; White, R.; Wierzcholska, A.; Willmann, P.; Wörnlein, A.; Wouters, D.; Yang, R.; Zabalza, V.; Zaborov, D.; Zacharias, M.; Zanin, R.; Zdziarski, A. A.; Zech, A.; Zefi, F.; Ziegler, A.; Żywucka, N.
2018-04-01
Aims: We study γ-ray emission from the shell-type supernova remnant (SNR) RX J0852.0-4622 to better characterize its spectral properties and its distribution over the SNR. Methods: The analysis of an extended High Energy Spectroscopic System (H.E.S.S.) data set at very high energies (E > 100 GeV) permits detailed studies, as well as spatially resolved spectroscopy, of the morphology and spectrum of the whole RX J0852.0-4622 region. The H.E.S.S. data are combined with archival data from other wavebands and interpreted in the framework of leptonic and hadronic models. The joint Fermi-LAT-H.E.S.S. spectrum allows the direct determination of the spectral characteristics of the parent particle population in leptonic and hadronic scenarios using only GeV-TeV data. Results: An updated analysis of the H.E.S.S. data shows that the spectrum of the entire SNR connects smoothly to the high-energy spectrum measured by Fermi-LAT. The increased data set makes it possible to demonstrate that the H.E.S.S. spectrum deviates significantly from a power law and is well described by both a curved power law and a power law with an exponential cutoff at an energy of E_cut = (6.7 ± 1.2_stat ± 1.2_syst) TeV. The joint Fermi-LAT-H.E.S.S. spectrum allows the unambiguous identification of the spectral shape as a power law with an exponential cutoff. No significant evidence is found for a variation of the spectral parameters across the SNR, suggesting similar conditions of particle acceleration across the remnant. A simple modeling using one particle population to model the SNR emission demonstrates that both leptonic and hadronic emission scenarios remain plausible. It is also shown that at least a part of the shell emission is likely due to the presence of a pulsar wind nebula around PSR J0855-4644. A FITS image of the region of interest and two text files describing the H.E.S.S. spectrum of RX J0852.0-4622 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/612/A7
Observational constraints on varying neutrino-mass cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Chao-Qiang; Lee, Chung-Chi; Myrzakulov, R.
We consider generic models of quintessence and investigate the influence of massive neutrino matter with field-dependent masses on the matter power spectrum. In the case of minimally coupled neutrino matter, we examine the effect in tracker models with inverse power-law and double exponential potentials. We present detailed investigations for the scaling field with a steep exponential potential, non-minimally coupled to massive neutrino matter, and we derive constraints on field-dependent neutrino masses from the observational data.
Muñoz-Cuevas, Marina; Fernández, Pablo S; George, Susan; Pin, Carmen
2010-05-01
The dynamic model for the growth of a bacterial population described by Baranyi and Roberts (J. Baranyi and T. A. Roberts, Int. J. Food Microbiol. 23:277-294, 1994) was applied to model the lag period and exponential growth of Listeria monocytogenes under conditions of fluctuating temperature and water activity (a(w)) values. To model the duration of the lag phase, the dependence of the parameter h(0), which quantifies the amount of work done during the lag period, on the previous and current environmental conditions was determined experimentally. This parameter depended not only on the magnitude of the change between the previous and current environmental conditions but also on the current growth conditions. In an exponentially growing population, any change in the environment requiring a certain amount of work to adapt to the new conditions initiated a lag period that lasted until that work was finished. Observations for several scenarios in which exponential growth was halted by a sudden change in the temperature and/or a(w) were in good agreement with predictions. When a population already in a lag period was subjected to environmental fluctuations, the system was reset with a new lag phase. The work to be done during the new lag phase was estimated to be the workload due to the environmental change plus the unfinished workload from the uncompleted previous lag phase.
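A minimal sketch of the Baranyi-Roberts system may help here: the physiological-state variable q encodes the "work to be done", h0 = ln(1 + 1/q0), and the lag ends once that work is complete (lag ≈ h0/μmax). All parameter values below are illustrative, not the study's estimates.

```python
# Minimal Baranyi-Roberts growth model with an explicit h0 "workload".
import numpy as np
from scipy.integrate import solve_ivp

mu_max, y_max = 0.8, 21.0        # 1/h, ln(CFU/ml) ceiling (illustrative)
h0 = 2.5                          # work to be done at inoculation
q0 = 1.0 / np.expm1(h0)           # from h0 = ln(1 + 1/q0)

def rhs(t, s):
    y, q = s                      # y = ln N, q = physiological state
    alpha = q / (1.0 + q)         # adjustment function: ~0 in lag, ~1 after
    dy = alpha * mu_max * (1.0 - np.exp(y - y_max))
    dq = mu_max * q
    return [dy, dq]

sol = solve_ivp(rhs, (0, 30), [np.log(1e3), q0], dense_output=True)
print("approximate lag duration h0/mu_max =", h0 / mu_max, "h")
```

After an environmental shift, per the abstract, q would be reset so that the new h0 equals the workload of the change plus any unfinished workload from an uncompleted lag.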
Zheng, Lai; Ismail, Karim
2017-05-01
Traffic conflict indicators measure the temporal and spatial proximity of conflict-involved road users. These indicators can reflect the severity of traffic conflicts to a reliable extent. Instead of using the indicator value directly as a severity index, many link functions have been developed to map the conflict indicator to a severity index. However, little information is available about the choice of a particular link function. To guard against link misspecification or subjectivity, a generalized exponential link function was developed. The severity index generated by this link was introduced into a parametric safety continuum model which objectively models the centre and tail regions. An empirical method, together with a full Bayesian estimation method, was adopted to estimate model parameters. The safety implication of the return level was calculated based on the model parameters. The proposed approach was applied to conflict and crash data collected from 21 segments of three freeways located in Guangdong province, China. A Pearson correlation test between return levels and observed crashes showed that a θ value of 1.2 was the best choice of the generalized parameter for the current data set. This provides statistical support for using the generalized exponential link function. With the determined generalized exponential link function, the visualization of the parametric safety continuum was found to be a gyroscope-shaped hierarchy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Evaluation of earthquake potential in China
NASA Astrophysics Data System (ADS)
Rong, Yufang
I present three earthquake potential estimates for magnitude 5.4 and larger earthquakes for China. The potential is expressed as the rate density (that is, the probability per unit area, magnitude and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. I test all three estimates, and another published estimate, against earthquake data. I constructed a special earthquake catalog which combines previous catalogs covering different times. I estimated moment magnitudes for some events using regression relationships that are derived in this study. I used the special catalog to construct the smoothed seismicity model and to test all models retrospectively. In all the models, I adopted a kind of Gutenberg-Richter magnitude distribution with modifications at higher magnitude. The assumed magnitude distribution depends on three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a rapid decrease of earthquake rate with magnitude. I assumed the "b-value" to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and declines as a negative power of the epicentral distance out to a few hundred kilometers. I derived the upper magnitude limit from the special catalog, and estimated local "a-values" from smoothed seismicity. I have begun a "prospective" test, and earthquakes since the beginning of 2000 are quite compatible with the model. For the geologic estimations, I adopted the seismic source zones that are used in the published Global Seismic Hazard Assessment Project (GSHAP) model. The zones are divided according to geological, geodetic and seismicity data. Corner magnitudes are estimated from fault length, while fault slip rates and an assumed locking depth determine earthquake rates. The geological model fits the earthquake data better than the GSHAP model. By smoothing geodetic strain rate, another potential model was constructed and tested. I derived the upper magnitude limit from the special catalog, and assumed local "a-values" proportional to geodetic strain rates. "Prospective" tests show that the geodetic strain rate model is quite compatible with earthquakes. By assuming the smoothed seismicity model as a null hypothesis, I tested every other model against it. Test results indicate that the smoothed seismicity model performs best.
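As a generic illustration of the smoothed-seismicity idea (not the author's exact kernel or constants), the sketch below spreads synthetic epicentres over a grid with a power-law kernel truncated at a few hundred kilometres, yielding a normalized location density.

```python
# Generic smoothed-seismicity sketch with a truncated power-law kernel.
# Epicentres, weights, and the exponent are illustrative.
import numpy as np

rng = np.random.default_rng(1)
eq_xy = rng.random((500, 2)) * 1000.0        # synthetic epicentres, km
weights = np.ones(len(eq_xy))                # could encode magnitude weighting

def rate_density(p, r_min=5.0, r_max=300.0, power=1.5):
    r = np.linalg.norm(eq_xy - p, axis=1)
    r = np.clip(r, r_min, None)              # cap the singularity at r = 0
    kernel = np.where(r <= r_max, r ** -power, 0.0)
    return float(np.sum(weights * kernel))

xs = np.linspace(0, 1000, 101)
pts = np.array([(x, y) for x in xs for y in xs])
density = np.array([rate_density(p) for p in pts])
density /= density.sum()                     # location pdf over the grid
```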
Scalar field and time varying cosmological constant in f(R,T) gravity for Bianchi type-I universe
NASA Astrophysics Data System (ADS)
Singh, G. P.; Bishi, Binaya K.; Sahoo, P. K.
2016-04-01
In this article, we have analysed the behaviour of the scalar field and the cosmological constant in $f(R,T)$ theory of gravity. Here, we have considered the simplest form of $f(R,T)$, i.e. $f(R,T)=R+2f(T)$, where $R$ is the Ricci scalar and $T$ is the trace of the energy-momentum tensor, and explored the spatially homogeneous and anisotropic Locally Rotationally Symmetric (LRS) Bianchi type-I cosmological model. It is assumed that the Universe is filled with two non-interacting matter sources, namely a scalar field (normal or phantom) with a scalar potential, and a matter contribution due to the $f(R,T)$ action. We have discussed two cosmological models, according to power-law and exponential laws of volume expansion, along with constant and exponential scalar potentials as sub-models. Power-law models are compatible with both normal (quintessence) and phantom scalar fields, whereas exponential volume-expansion models are compatible only with a normal (quintessence) scalar field. The values of the cosmological constant in our models are in agreement with observational results. Finally, we have discussed some physical and kinematical properties of both models.
NASA Astrophysics Data System (ADS)
Andrianov, A. A.; Cannata, F.; Kamenshchik, A. Yu.
2012-11-01
We show that the simple extension of the method of obtaining the general exact solution for the cosmological model with an exponential scalar-field potential to the case when dust is present fails, and we discuss the reasons for this puzzling phenomenon.
Looking for Connections between Linear and Exponential Functions
ERIC Educational Resources Information Center
Lo, Jane-Jane; Kratky, James L.
2012-01-01
Students frequently have difficulty determining whether a given real-life situation is best modeled as a linear relationship or as an exponential relationship. One root of such difficulty is the lack of deep understanding of the very concept of "rate of change." The authors will provide a lesson that allows students to reveal their misconceptions…
A Parametric Model for Barred Equilibrium Beach Profiles
2014-05-10
to shallow water. Bodge (1992) and Komar and McDougal (1994) suggested an exponential form as a preferred solution that exhibited finite slope at the...applications. J. Coast. Res. 7, 53–84. Komar, P.D., McDougal, W.G., 1994. The analysis of beach profiles and nearshore processes using the exponential beach
Linear prediction and single-channel recording.
Carter, A A; Oswald, R E
1995-08-01
The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
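A compact version of the linear prediction/SVD idea is sketched below for a synthetic two-component dwell-time density: the singular values of a Hankel matrix of uniformly spaced samples indicate the number of exponentials, and a matrix-pencil step recovers the time constants. The sampling interval, noise level, and thresholds are invented for the example.

```python
# Linear prediction / SVD (matrix pencil) for a sum of decaying exponentials.
import numpy as np

dt = 0.5
t = np.arange(0, 120, dt)
f = 0.7 * np.exp(-t / 2.0) + 0.3 * np.exp(-t / 15.0)   # two components
f += np.random.normal(0, 1e-4, f.size)                  # small noise

L = f.size // 2                                         # pencil parameter
H = np.array([f[i:i + L] for i in range(f.size - L)])   # Hankel matrix
s = np.linalg.svd(H, compute_uv=False)
k = int(np.sum(s > 1e-2 * s[0]))                        # model order estimate

Y0, Y1 = H[:, :-1], H[:, 1:]                            # shifted data matrices
z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
z = z[np.argsort(-np.abs(z))][:k]                       # k dominant roots
tau = -dt / np.log(np.abs(z))                           # time constants
print("estimated order:", k, "tau:", np.sort(tau))
```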
Local perturbations perturb—exponentially-locally
NASA Astrophysics Data System (ADS)
De Roeck, W.; Schütz, M.
2015-06-01
We elaborate on the principle that for gapped quantum spin systems with local interaction, "local perturbations [in the Hamiltonian] perturb locally [the groundstate]." This principle was established by Bachmann et al. [Commun. Math. Phys. 309, 835-871 (2012)], relying on the "spectral flow technique" or "quasi-adiabatic continuation" [M. B. Hastings, Phys. Rev. B 69, 104431 (2004)] to obtain locality estimates with sub-exponential decay in the distance to the spatial support of the perturbation. We use ideas of Hamza et al. [J. Math. Phys. 50, 095213 (2009)] to obtain similarly a transformation between gapped eigenvectors and their perturbations that is local with exponential decay. This allows us to improve locality bounds on the effect of perturbations on the low lying states in certain gapped models with a unique "bulk ground state" or "topological quantum order." We also give estimates of the exponential decay of correlations in models with impurities, where some relevant correlations decay faster than one would naively infer from the global gap of the system, as one also expects in disordered systems with a localized groundstate.
Modified Kneser-Ney Smoothing of n-Gram Models
NASA Technical Reports Server (NTRS)
James, Frankie
2000-01-01
This report examines a series of tests that were performed on variations of the modified Kneser-Ney smoothing model outlined in a study by Chen and Goodman. We explore several different ways of choosing and setting the discounting parameters, as well as the exclusion of singleton contexts at various levels of the model.
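For orientation, here is a minimal interpolated Kneser-Ney bigram model with a single discount D; modified Kneser-Ney as studied in the report differs mainly in replacing D with three count-dependent discounts (D1, D2, D3+) estimated from count-of-counts. The toy corpus is invented.

```python
# Interpolated Kneser-Ney for bigrams with a single discount D.
from collections import Counter

def train_kn(tokens, D=0.75):
    bigrams = Counter(zip(tokens, tokens[1:]))
    history = Counter(tokens[:-1])                  # c(u)
    cont = Counter(w for (_, w) in bigrams)         # N1+(., w)
    fanout = Counter(u for (u, _) in bigrams)       # N1+(u, .)
    total_types = len(bigrams)                      # N1+(., .)

    def prob(u, w):
        p_cont = cont[w] / total_types              # continuation probability
        if history[u] == 0:
            return p_cont                           # unseen history: back off
        lam = D * fanout[u] / history[u]            # back-off weight
        return max(bigrams[(u, w)] - D, 0) / history[u] + lam * p_cont
    return prob

p = train_kn("the cat sat on the mat the cat ran".split())
print(p("the", "cat"), p("the", "dog"))
```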
Smoothed Residual Plots for Generalized Linear Models. Technical Report #450.
ERIC Educational Resources Information Center
Brant, Rollin
Methods for examining the viability of assumptions underlying generalized linear models are considered. By appealing to the likelihood, a natural generalization of the raw residual plot for normal theory models is derived and is applied to investigating potential misspecification of the linear predictor. A smooth version of the plot is also…
Quantifying Uncertainties in N2O Emission Due to N Fertilizer Application in Cultivated Areas
Philibert, Aurore; Loyce, Chantal; Makowski, David
2012-01-01
Nitrous oxide (N2O) is a greenhouse gas with a global warming potential approximately 298 times greater than that of CO2. In 2006, the Intergovernmental Panel on Climate Change (IPCC) estimated N2O emission due to synthetic and organic nitrogen (N) fertilization at 1% of applied N. We investigated the uncertainty in this estimated value by fitting 13 different models to a published dataset including 985 N2O measurements. These models were characterized by (i) the presence or absence of the explanatory variable "applied N", (ii) the function relating N2O emission to applied N (exponential or linear function), (iii) fixed or random background N2O emission (i.e., emission in the absence of N application) and (iv) fixed or random applied N effect. We calculated ranges of uncertainty in N2O emissions from a subset of these models, and compared them with the uncertainty ranges currently used in the IPCC-Tier 1 method. The exponential models outperformed the linear models, and models including one or two random effects outperformed those including fixed effects only. The use of an exponential function rather than a linear function has an important practical consequence: the emission factor is not constant and increases as a function of applied N. Emission factors estimated using the exponential function were lower than 1% when the amount of N applied was below 160 kg N ha−1. Our uncertainty analysis shows that the uncertainty range currently used by the IPCC-Tier 1 method could be reduced. PMID:23226430
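A small numeric sketch shows why the exponential form implies an emission factor that grows with the application rate; the coefficients below are hypothetical, chosen only so that the factor stays below 1% up to roughly 160 kg N ha−1, as in the abstract.

```python
# Why an exponential emission model gives an N-dependent emission factor.
# E(N) = exp(a + b*N); coefficients a, b are made up for illustration.
import numpy as np

a, b = np.log(1.0), 0.0055          # background 1 kg N2O-N/ha at N = 0

def emission_factor(N):
    background = np.exp(a)                        # emission with no fertilizer
    return (np.exp(a + b * N) - background) / N   # fraction of applied N emitted

for N in (80, 160, 240):                          # kg N/ha
    print(N, "kg N/ha -> EF =", round(100 * emission_factor(N), 2), "%")
```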
The size distribution of Pacific Seamounts
NASA Astrophysics Data System (ADS)
Smith, Deborah K.; Jordan, Thomas H.
1987-11-01
An analysis of wide-beam, Sea Beam and map-count data in the eastern and southern Pacific confirms the hypothesis that the average number of "ordinary" seamounts with summit heights h ≥ H can be approximated by the exponential frequency-size distribution: v(H) = v₀ e^(−βH). The exponential model, characterized by the single scale parameter β⁻¹, is found to be superior to a power-law (self-similar) model. The exponential model provides a good first-order description of the summit-height distribution over a very broad spectrum of seamount sizes, from small cones (h < 300 m) to tall composite volcanoes (h > 3500 m). The distribution parameters obtained from 157,000 km of wide-beam profiles in the eastern and southern Pacific Ocean are v₀ = (5.4 ± 0.65) × 10⁻⁹ m⁻² and β = (3.5 ± 0.21) × 10⁻³ m⁻¹, yielding an average of 5400 ± 650 seamounts per million square kilometers, of which 170 ± 17 are greater than one kilometer in height. The exponential distribution provides a reference for investigating the populations of not-so-ordinary seamounts, such as those on hotspot swells and near fracture zones, and seamounts in other ocean basins. If we assume that volcano height is determined by a hydraulic head proportional to the source depth of the magma column, then our observations imply an approximately exponential distribution of source depths. For reasonable values of magma and crustal densities, a volcano with the characteristic height β⁻¹ = 285 m has an apparent source depth on the order of the crustal thickness.
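The quoted counts follow directly from the fitted parameters; a few lines of arithmetic reproduce them.

```python
# Reproducing the quoted seamount counts from v(H) = v0 * exp(-beta * H).
import numpy as np

v0, beta = 5.4e-9, 3.5e-3              # m^-2 and m^-1, from the abstract
area = 1e6 * 1e6                        # one million km^2, in m^2

print("all seamounts:", v0 * area)                              # ~5400
print("higher than 1 km:", v0 * np.exp(-beta * 1000) * area)    # ~163, i.e. ~170 +/- 17
print("characteristic height 1/beta:", 1 / beta, "m")           # ~285 m
```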
McGee, Monnie; Chen, Zhongxue
2006-01-01
There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
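The conditional expectation at the heart of this correction has a standard closed form for the normal + exponential convolution; the sketch below gives that textbook "normexp" expression (as implemented, for example, in limma), with illustrative intensities and parameter values rather than estimates produced by any particular estimator.

```python
# Normal + exponential ("normexp") background correction:
# X = S + B with S ~ Exponential(mean alpha), B ~ Normal(mu, sigma^2).
# The posterior of S given X = x is a normal truncated to s > 0, so
# E[S | x] = a + sigma * phi(a/sigma) / Phi(a/sigma), with a = x - mu - sigma^2/alpha.
import numpy as np
from scipy.stats import norm

def normexp_signal(x, mu, sigma, alpha):
    a = x - mu - sigma**2 / alpha      # mean of the truncated-normal posterior
    return a + sigma * norm.pdf(a / sigma) / norm.cdf(a / sigma)

x = np.array([80.0, 120.0, 300.0, 1500.0])   # observed PM intensities (made up)
print(normexp_signal(x, mu=100.0, sigma=30.0, alpha=500.0))  # always positive
```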
Exponential gain of randomness certified by quantum contextuality
NASA Astrophysics Data System (ADS)
Um, Mark; Zhang, Junhua; Wang, Ye; Wang, Pengfei; Kim, Kihwan
2017-04-01
We demonstrate a protocol for exponential gain of randomness certified by quantum contextuality in a trapped-ion system. Genuine randomness can be produced by quantum principles and certified by quantum inequalities. Recently, randomness expansion protocols based on inequalities from Bell tests and the Kochen-Specker (KS) theorem have been demonstrated. These schemes have been theoretically developed to exponentially expand randomness and to amplify randomness from a weak initial random seed. Here, we report experimental evidence of such exponential expansion of randomness. In the experiment, we use three states of a 138Ba+ ion: a ground state and two quadrupole states. In the 138Ba+ ion system there is no detection loophole, and we apply a method to rule out certain hidden-variable models that obey a kind of extended noncontextuality.
One dark matter mystery: halos in the cosmic web
NASA Astrophysics Data System (ADS)
Gaite, Jose
2015-01-01
The current cold dark matter cosmological model explains the large scale cosmic web structure but is challenged by the observation of a relatively smooth distribution of matter in galactic clusters. We consider various aspects of modeling the dark matter around galaxies as distributed in smooth halos and, especially, the smoothness of the dark matter halos seen in N-body cosmological simulations. We conclude that the problems of the cold dark matter cosmology on small scales are more serious than normally admitted.
NASA Astrophysics Data System (ADS)
Liang, L. L.; Arcus, V. L.; Heskel, M.; O'Sullivan, O. S.; Weerasinghe, L. K.; Creek, D.; Egerton, J. J. G.; Tjoelker, M. G.; Atkin, O. K.; Schipper, L. A.
2017-12-01
Temperature is a crucial factor in determining the rates of ecosystem processes such as leaf respiration (R) - the flux of plant-respired carbon dioxide (CO2) from leaves to the atmosphere. Generally, respiration rate increases exponentially with temperature, as modelled by the Arrhenius equation, but a recent study (Heskel et al., 2016) showed a universally convergent temperature response of R using an empirical exponential/polynomial model whereby the exponent in the Arrhenius model is replaced by a quadratic function of temperature. The exponential/polynomial model has been used elsewhere to describe shoot respiration and plant respiration. What are the principles that underlie these empirical observations? Here, we demonstrate that macromolecular rate theory (MMRT), based on transition state theory for chemical kinetics, is equivalent to the exponential/polynomial model. We re-analyse the data from Heskel et al. (2016) using MMRT to show this equivalence and thus provide an explanation, based on thermodynamics, for the convergent temperature response of R. Using statistical tools, we also show the equivalent explanatory power of MMRT when compared to the exponential/polynomial model, and the superiority of both of these models over the Arrhenius function. Three meaningful parameters emerge from MMRT analysis: the temperature at which the rate of respiration is maximum (the so-called optimum temperature, Topt), the temperature at which the respiration rate is most sensitive to changes in temperature (the inflection temperature, Tinf), and the overall curvature of the log(rate) versus temperature plot (the so-called change in heat capacity for the system, ΔC_P‡). The latter term originates from the change in heat capacity between an enzyme-substrate complex and an enzyme transition-state complex in enzyme-catalysed metabolic reactions. From MMRT, we find the average Topt and Tinf of R are 67.0 ± 1.2 °C and 41.4 ± 0.7 °C across global sites. The average curvature (average ΔC_P‡, which is negative) is -1.2 ± 0.1 kJ mol⁻¹ K⁻¹. MMRT extends classic transition state theory to enzyme-catalysed reactions and scales up to more complex processes, including micro-organism growth rates and ecosystem processes.
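A sketch of the MMRT rate law follows, with illustrative thermodynamic parameters (only the curvature is pinned near the quoted −1.2 kJ mol⁻¹ K⁻¹); the optimum temperature is located numerically.

```python
# Macromolecular rate theory (MMRT): ln k(T) curves over because of the
# activation heat capacity dCp. Parameter values are illustrative.
import numpy as np

kB, h, R = 1.380649e-23, 6.62607015e-34, 8.314
T0 = 298.15                                   # reference temperature, K

def ln_rate(T, dH, dS, dCp):
    # dH [J/mol] and dS [J/mol/K] at T0; dCp [J/mol/K] assumed T-independent
    H = dH + dCp * (T - T0)
    S = dS + dCp * np.log(T / T0)
    return np.log(kB * T / h) - H / (R * T) + S / R

T = np.linspace(273, 350, 2000)
lnk = ln_rate(T, dH=48e3, dS=-60.0, dCp=-1200.0)   # dCp ~ -1.2 kJ/mol/K, as quoted
print("T_opt ~", round(T[np.argmax(lnk)] - 273.15, 1), "degC")  # near ~67 degC
```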
Xue, J L; Ma, J Z; Louis, T A; Collins, A J
2001-12-01
As the United States end-stage renal disease (ESRD) program enters the new millennium, the continued growth of the ESRD population poses a challenge for policy makers, health care providers, and financial planners. To assist in future planning for the ESRD program, the growth of patient numbers and Medicare costs was forecasted to the year 2010 by modeling of historical data from 1982 through 1997. A stepwise autoregressive method and exponential smoothing models were used. The forecasting models for ESRD patient numbers demonstrated mean errors of -0.03 to 1.03%, relative to the observed values. The model for Medicare payments demonstrated -0.12% mean error. The R² values for the forecasting models ranged from 99.09 to 99.98%. On the basis of trends in patient numbers, this forecast projects average annual growth of the ESRD populations of approximately 4.1% for new patients, 6.4% for long-term ESRD patients, 7.1% for dialysis patients, 6.1% for patients with functioning transplants, and 8.2% for patients on waiting lists for transplants, as well as 7.7% for Medicare expenditures. The numbers of patients with ESRD in 2010 are forecasted to be 129,200 +/- 7742 (95% confidence limits) new patients, 651,330 +/- 15,874 long-term ESRD patients, 520,240 +/- 25,609 dialysis patients, 178,806 +/- 4349 patients with functioning transplants, and 95,550 +/- 5478 patients on waiting lists. The forecasted Medicare expenditures are projected to increase to $28.3 +/- 1.7 billion by 2010. These projections are subject to many factors that may alter the actual growth, compared with the historical patterns. They do, however, provide a basis for discussing the future growth of the ESRD program and how the ESRD community can meet the challenges ahead.
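The abstract does not specify which exponential smoothing variant was used for each series; as a generic illustration, Holt's two-parameter linear method is sketched below on a hypothetical patient-count series, not USRDS data.

```python
# Holt's two-parameter (level + trend) exponential smoothing, by hand.
import numpy as np

def holt(y, alpha=0.4, beta=0.2, horizon=5):
    level, trend = y[0], y[1] - y[0]
    for x in y[1:]:
        new_level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return level + trend * np.arange(1, horizon + 1)

patients = np.array([196, 205, 217, 228, 241, 252, 266, 280.0])  # thousands (made up)
print(holt(patients, horizon=3))   # trend-extrapolated forecasts, next three years
```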
Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv
2012-12-11
Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method has been developed that models the observed intensities as the sum of an exponentially distributed signal and normally distributed noise. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would better model the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to wrong estimation. We propose a more flexible modeling based on a gamma-distributed signal and a normally distributed background noise, and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that implementing a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution, as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in modeling does not lead to higher sensitivity in differential analysis. Nevertheless, this realistic modeling paves the way for future investigations, in particular to examine the characteristics of pre-processing strategies.
Double-exponential decay of orientational correlations in semiflexible polyelectrolytes.
Bačová, P; Košovan, P; Uhlík, F; Kuldová, J; Limpouchová, Z; Procházka, K
2012-06-01
In this paper we revisited the problem of persistence length of polyelectrolytes. We performed a series of Molecular Dynamics simulations using the Debye-Hückel approximation for electrostatics to test several equations which go beyond the classical description of Odijk, Skolnick and Fixman (OSF). The data confirm earlier observations that in the limit of large contour separations the decay of orientational correlations can be described by a single-exponential function and the decay length can be described by the OSF relation. However, at short contour separations the behaviour is more complex. Recent equations which introduce more complicated expressions and an additional length scale could describe the results very well on both the short and the long length scale. The equation of Manghi and Netz, when used without adjustable parameters, could capture the qualitative trend but deviated in a quantitative comparison. Better quantitative agreement within the estimated error could be obtained using three equations with one adjustable parameter: 1) the equation of Manghi and Netz; 2) the equation proposed by us in this paper; 3) the equation proposed by Cannavacciuolo and Pedersen. Two characteristic length scales can be identified in the data: the intrinsic or bare persistence length and the electrostatic persistence length. All three equations use a single parameter to describe a smooth crossover from the short-range behaviour dominated by the intrinsic stiffness of the chain to the long-range OSF-like behaviour.
Zhao, Kaihong
2018-12-01
In this paper, we study the n-species impulsive Gilpin-Ayala competition model with discrete and distributed time delays. The existence of positive periodic solution is proved by employing the fixed point theorem on cones. By constructing appropriate Lyapunov functional, we also obtain the global exponential stability of the positive periodic solution of this system. As an application, an interesting example is provided to illustrate the validity of our main results.
A mechanical model of bacteriophage DNA ejection
NASA Astrophysics Data System (ADS)
Arun, Rahul; Ghosal, Sandip
2017-08-01
Single molecule experiments on bacteriophages show an exponential scaling for the dependence of mobility on the length of DNA within the capsid. It has been suggested that this could be due to the "capstan mechanism" - the exponential amplification of friction forces that result when a rope is wound around a cylinder as in a ship's capstan. Here we describe a desktop experiment that illustrates the effect. Though our model phage is a million times larger, it exhibits the same scaling observed in single molecule experiments.
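The capstan law itself is a one-liner: tension amplification grows as T_out = T_in · exp(μθ) with wrap angle θ, so if wrap angle grows with the length of DNA remaining in the capsid, mobility falls exponentially with that length. With an assumed friction coefficient, the amplification over a few turns is:

```python
# Capstan law: exponential growth of friction with wrap angle.
import numpy as np

mu = 0.3                                  # friction coefficient (illustrative)
theta = np.linspace(0, 6 * np.pi, 7)      # zero to three full turns
print(np.exp(mu * theta))                 # tension amplification factors
```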
A new approach to the extraction of single exponential diode model parameters
NASA Astrophysics Data System (ADS)
Ortiz-Conde, Adelmo; García-Sánchez, Francisco J.
2018-06-01
A new integration method is presented for extracting the parameters of a single-exponential diode model with series resistance from the measured forward I-V characteristics. The extraction is performed using auxiliary functions, based on integration of the data, which allow the effects of each model parameter to be isolated. A differentiation method is also presented for data with a low level of experimental noise. Measured and simulated data are used to verify the applicability of both proposed methods. Physical insight into the validity of the model is also obtained by using the proposed graphical determinations of the parameters.
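The paper's integral auxiliary functions are not given in the abstract, so the sketch below uses a different, generic extraction of the same three parameters (Is, n, Rs): least squares on V(I) = n·Vt·ln(1 + I/Is) + I·Rs, which is explicit in V and sidesteps the implicit diode equation. The I-V data are synthetic.

```python
# Generic least-squares extraction of single-exponential diode parameters.
import numpy as np
from scipy.optimize import curve_fit

Vt = 0.02585                                # thermal voltage near 300 K, volts

def v_of_i(I, Is, n, Rs):
    return n * Vt * np.log1p(I / Is) + I * Rs

I = np.logspace(-6, -1, 40)                 # forward currents, A
V = v_of_i(I, 1e-9, 1.8, 2.0)               # "measured" curve (synthetic)
V += np.random.normal(0, 1e-4, V.size)      # small measurement noise

popt, _ = curve_fit(v_of_i, I, V, p0=[1e-8, 1.5, 1.0],
                    bounds=([1e-12, 1.0, 0.0], [1e-6, 3.0, 10.0]))
print("Is, n, Rs =", popt)
```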
ERIC Educational Resources Information Center
Casstevens, Thomas W.; And Others
This document consists of five units, all of which deal with applications of mathematics to American politics. The first three cover calculus applications; the last two deal with applications of algebra. The first module is geared to teach a student how to: 1) compute estimates of the value of the parameters in negative exponential models; and draw…
NASA Astrophysics Data System (ADS)
Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.
2017-08-01
The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one step ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme, that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one step ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
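A scalar toy version of the lag-one ("one-step-ahead") smoothing update is sketched below; it shows only the smoothing idea, not the covariance compression that defines sCSKF, and all model constants are invented.

```python
# Lag-one smoothing in a scalar linear-Gaussian model: the observation at
# time t corrects the state estimate at t-1 before the usual filter update.
F, H, Q, R = 1.0, 1.0, 0.01, 0.25      # dynamics, observation, noise variances
x, P = 0.0, 1.0                         # filtered state and variance at t-1
y = 1.2                                 # new observation at time t

# prediction to t
x_pred, P_pred = F * x, F * P * F + Q
S = H * P_pred * H + R                  # innovation variance
v = y - H * x_pred                      # innovation

# one-step-ahead smoothing of the state at t-1
C = P * F                               # Cov(x_{t-1}, x_t | data up to t-1)
x_smooth = x + (C * H / S) * v          # corrected state at t-1
P_smooth = P - (C * H) ** 2 / S

# ordinary filter update at t proceeds as usual
K = P_pred * H / S
x_filt, P_filt = x_pred + K * v, (1 - K * H) * P_pred
print(x_smooth, x_filt)
```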
NASA Astrophysics Data System (ADS)
Pieper, Michael; Manolakis, Dimitris; Truslow, Eric; Cooley, Thomas; Brueggeman, Michael; Jacobson, John; Weisner, Andrew
2017-08-01
Accurate estimation or retrieval of surface emissivity from long-wave infrared or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or spaceborne sensors is necessary for many scientific and defense applications. This process consists of two interwoven steps: atmospheric compensation and temperature-emissivity separation (TES). The most widely used TES algorithms for hyperspectral imaging data assume that the emissivity spectra for solids are smooth compared to the atmospheric transmission function. We develop a model to explain and evaluate the performance of TES algorithms using a smoothing approach. Based on this model, we identify three sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise. For each TES smoothing technique, we analyze the bias and variability of the temperature errors, which translate to emissivity errors. The performance model explains how the errors interact to generate temperature errors. Since we assume exact knowledge of the atmosphere, the presented results provide an upper bound on the performance of TES algorithms based on the smoothness assumption.
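A toy version of the smoothness criterion is sketched below: emissivity is inverted for each candidate temperature, and the temperature minimizing the high-frequency energy of the resulting emissivity spectrum is retained. The spectra are synthetic, and the atmosphere is assumed already compensated, matching the upper-bound setting of the paper; the sharp structure that makes the criterion work is supplied here by a synthetic reflected downwelling term.

```python
# Toy temperature-emissivity separation (TES) by the smoothness criterion.
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):                       # spectral radiance
    return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

lam = np.linspace(8e-6, 12e-6, 200)                              # LWIR band, m
eps_true = 0.96 + 0.02 * np.sin(lam * 2e6)                       # smooth emissivity
L_down = planck(lam, 260.0) * (0.6 + 0.4 * np.sin(lam * 2e7))    # "sharp" sky lines
L = eps_true * planck(lam, 300.0) + (1 - eps_true) * L_down      # measured radiance

def roughness(T):
    eps = (L - L_down) / (planck(lam, T) - L_down)   # inverted emissivity at T
    return np.sum(np.diff(eps, n=2) ** 2)            # penalize wiggles

Ts = np.arange(290.0, 310.0, 0.05)
print("retrieved T:", Ts[np.argmin([roughness(T) for T in Ts])], "K")  # ~300 K
```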
NASA Technical Reports Server (NTRS)
Giver, Lawrence P.; Benner, D. C.; Tomasko, M. G.; Fink, U.; Kerola, D.
1990-01-01
Transmission measurements made on near-infrared laboratory methane spectra have previously been fit using a Malkmus band model. The laboratory spectra were obtained in three groups at temperatures averaging 112, 188, and 295 K; band-model fitting was done separately for each temperature group. These band-model parameters cannot be used directly in scattering-atmosphere model computations, so an exponential sum model is being developed which includes pressure and temperature fitting parameters. The goal is to obtain model parameters by least-squares fits at 10/cm intervals from 3800 to 9100/cm. These results will be useful in the interpretation of current planetary spectra and also of NIMS spectra of Jupiter anticipated from the Galileo mission.
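A minimal exponential-sum fit can be posed as non-negative least squares on a fixed grid of absorption coefficients, T(m) ≈ Σ w_i exp(−k_i m) with w_i ≥ 0; the transmission "data" below are synthetic stand-ins for band-model values, not the laboratory spectra.

```python
# Exponential-sum fitting of transmission via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

m = np.linspace(0, 50, 60)                                   # absorber amounts
T_obs = 0.5 * np.exp(-0.05 * m) + 0.5 * np.exp(-1.0 * m)     # synthetic stand-in

k_grid = np.logspace(-3, 1, 30)                              # candidate k_i grid
A = np.exp(-np.outer(m, k_grid))                             # design matrix
w, resid = nnls(A, T_obs)                                    # w_i >= 0

active = w > 1e-6
print("k:", k_grid[active], "w:", w[active], "residual:", resid)
```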
Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto
2018-03-01
High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. Typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 °C. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential-decay models, as well as Bigelow-type and empirical models for the b'(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered two- and one-step nonlinear regressions for predicting the survival fraction; both regression types provided adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model combined with the exponential-logistic and exponential-decay secondary models as functions of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, taking d = 5 (t₅) as the criterion for a 5-log₁₀ reduction (5D); the desired reductions in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
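Under the Weibull primary model log₁₀(N/N₀) = −b(P)·t^n(P), the time to a d-log reduction is t_d = (d/b)^(1/n); the parameter values below are hypothetical, picked only to land near the quoted 5-log times at 400 MPa.

```python
# Time to a d-log reduction under the Weibull survival model.
def t_d(b, n, d=5.0):
    return (d / b) ** (1.0 / n)

b_400, n_400 = 1.9, 0.55            # illustrative Weibull parameters at 400 MPa
print("t_5 at 400 MPa ~", round(t_d(b_400, n_400), 2), "min")   # ~5.8 min
```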
Exponential Stellar Disks in Low Surface Brightness Galaxies: A Critical Test of Viscous Evolution
NASA Astrophysics Data System (ADS)
Bell, Eric F.
2002-12-01
Viscous redistribution of mass in Milky Way-type galactic disks is an appealing way of generating an exponential stellar profile over many scale lengths, almost independent of initial conditions, requiring only that the viscous timescale and star formation timescale are approximately equal. However, galaxies with solid-body rotation curves cannot undergo viscous evolution. Low surface brightness (LSB) galaxies have exponential surface brightness profiles, yet have slowly rising, nearly solid-body rotation curves. Because of this, viscous evolution may be inefficient in LSB galaxies: the exponential profiles, instead, would give important insight into initial conditions for galaxy disk formation. Using star formation laws from the literature and tuning the efficiency of viscous processes to reproduce an exponential stellar profile in Milky Way-type galaxies, I test the role of viscous evolution in LSB galaxies. Under the conservative and not unreasonable condition that LSB galaxies are gravitationally unstable for at least a part of their lives, I find that it is impossible to rule out a significant role for viscous evolution. This type of model still offers an attractive way of producing exponential disks, even in LSB galaxies with slowly rising rotation curves.
Exponential Speedup of Quantum Annealing by Inhomogeneous Driving of the Transverse Field
NASA Astrophysics Data System (ADS)
Susa, Yuki; Yamashiro, Yu; Yamamoto, Masayuki; Nishimori, Hidetoshi
2018-02-01
We show, for quantum annealing, that a certain type of inhomogeneous driving of the transverse field erases first-order quantum phase transitions in the p-body interacting mean-field-type model with and without longitudinal random field. Since a first-order phase transition poses a serious difficulty for quantum annealing (adiabatic quantum computing) due to the exponentially small energy gap, the removal of first-order transitions means an exponential speedup of the annealing process. The present method may serve as a simple protocol for the performance enhancement of quantum annealing, complementary to non-stoquastic Hamiltonians.
Observational constraints on tachyonic chameleon dark energy model
NASA Astrophysics Data System (ADS)
Banijamali, A.; Bellucci, S.; Fazlpour, B.; Solbi, M.
2018-03-01
It has recently been shown that the tachyonic chameleon model of dark energy, in which a tachyon scalar field is non-minimally coupled to matter, admits stable scaling attractor solutions that could give rise to the late-time accelerated expansion of the universe and hence alleviate the coincidence problem. In the present work, we use data from Type Ia supernovae (SN Ia) and Baryon Acoustic Oscillations to place constraints on the model parameters. In our analysis we consider, in general, exponential and non-exponential forms for the non-minimal coupling function and the tachyonic potential, and show that the scenario is compatible with observations.
Cosmological models with a hybrid scale factor in an extended gravity theory
NASA Astrophysics Data System (ADS)
Mishra, B.; Tripathy, S. K.; Tarai, Sankarsan
2018-03-01
A general formalism to investigate Bianchi type VI_h universes is developed in an extended theory of gravity. A minimally coupled geometry and matter field is considered, with a rescaled function f(R,T) substituted in place of the Ricci scalar R in the geometrical action. Dynamical aspects of the models are discussed by using a hybrid scale factor (HSF) that behaves as a power law at an early epoch and exponentially at a late epoch. The power-law and exponential behaviours appear as two extreme cases of the present model.
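The abstract does not spell out the functional form; a common parameterization of a hybrid scale factor in this literature, shown here as an assumption, multiplies a power law by an exponential:

```latex
a(t) = a_0\, t^{\alpha}\, e^{\beta t},
\qquad
H \equiv \frac{\dot a}{a} = \frac{\alpha}{t} + \beta .
```

The α/t term dominates at early times (power-law expansion) and the constant β at late times (de Sitter-like expansion), matching the two limiting behaviours described above.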
Yu, Yi-Lin; Yang, Yun-Ju; Lin, Chin; Hsieh, Chih-Chuan; Li, Chiao-Zhu; Feng, Shao-Wei; Tang, Chi-Tun; Chung, Tzu-Tsao; Ma, Hsin-I; Chen, Yuan-Hao; Ju, Da-Tong; Hueng, Dueng-Yuan
2017-01-01
Abstract Tumor control rates of pituitary adenomas (PAs) receiving adjuvant CyberKnife stereotactic radiosurgery (CK SRS) are high. However, there is currently no uniform way to estimate the time course of the disease. The aim of this study was to analyze the volumetric responses of PAs after CK SRS and investigate the application of an exponential decay model in calculating an accurate time course and estimating the eventual outcome. A retrospective review of 34 patients with PAs who received adjuvant CK SRS between 2006 and 2013 was performed. Tumor volume was calculated using the planimetric method. The percent change in tumor volume and the tumor volume rate of change were compared at median 4-, 10-, 20-, and 36-month intervals. Tumor responses were classified as progression for a >15% volume increase, regression for a >15% volume decrease, and stabilization for a change within ±15% of the baseline volume at the time of last follow-up. For each patient, the volumetric change versus time was fitted with an exponential model. The overall tumor control rate was 94.1% over the 36-month (range 18-87 months) follow-up period (mean volume change of -43.3%). Volume regression (mean decrease of -50.5%) was demonstrated in 27 (79%) patients, tumor stabilization (mean change of -3.7%) in 5 (15%) patients, and tumor progression (mean increase of 28.1%) in 2 (6%) patients (P = 0.001). Tumors that eventually regressed or stabilized had a temporary volume increase of 1.07% and 41.5%, respectively, at 4 months after CK SRS (P = 0.017). The tumor volume estimated using the exponential fitting equation demonstrated a high positive correlation with the actual volume calculated by magnetic resonance imaging (MRI), as tested by the Pearson correlation coefficient (0.9). Transient progression of PAs post-CK SRS was seen in 62.5% of the patients, and it was not predictive of eventual volume regression or progression. A three-point exponential model is of potential predictive value according to the relative distribution. An exponential decay model can be used to calculate the time course of tumors that are ultimately controlled. PMID:28121913
Role of ROCK expression in gallbladder smooth muscle contraction.
Wang, Bin; Ding, You-Ming; Wang, Chun-Tao; Wang, Wei-Xing
2015-08-01
Cholelithiasis is a common medical condition whose incidence rate is increasing yearly, while its pathogenesis has yet to be elucidated. The present study assessed the expression of Rho-kinase (ROCK) in gallbladder smooth muscles and its effect on the contractile function of gallbladder smooth muscles during gallstone formation. Thirty male guinea pigs were randomly divided into three groups: The control group, the gallstone model group and the fasudil interference group. The fasting volume (FV) and bile capacity of the gallbladder (FB) as well as the total cholesterol (TC) and triglyceride (TG) contents of the gallbladder bile were determined. In addition, the gallbladder was dissected to identify whether any gallstones had formed. Part of the gallbladder tissue specimens were used for immunohistochemical analysis of ROCK expression in gallbladder smooth muscles. The results showed that four guinea pigs in the model group and eight in the fasudil group displayed gallstone formation, while there was no gallstone formation in the control group. The FV and FB were significantly increased in the model and fasudil groups. Similarly, the TC and TG contents of gallbladder bile were increased in these groups. The positive expression rate of ROCK in gallbladder smooth muscles in the model and fasudil groups was significantly reduced compared with that in the control group (P<0.05). The results of the present study indicated that the reduction of ROCK expression in guinea pig gallbladder smooth muscles weakened gallbladder contraction and thereby promoted gallstone formation.
AN EMPIRICAL FORMULA FOR THE DISTRIBUTION FUNCTION OF A THIN EXPONENTIAL DISC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Sanjib; Bland-Hawthorn, Joss
2013-08-20
An empirical formula for a Shu distribution function that reproduces a thin disc with exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, this requires the use of an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. Application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of this formula to reproduce velocity dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.
Locality of the Thomas-Fermi-von Weizsäcker Equations
NASA Astrophysics Data System (ADS)
Nazar, F. Q.; Ortner, C.
2017-06-01
We establish a pointwise stability estimate for the Thomas-Fermi-von Weiz-säcker (TFW) model, which demonstrates that a local perturbation of a nuclear arrangement results also in a local response in the electron density and electrostatic potential. The proof adapts the arguments for existence and uniqueness of solutions to the TFW equations in the thermodynamic limit by Catto et al. (The mathematical theory of thermodynamic limits: Thomas-Fermi type models. Oxford mathematical monographs. The Clarendon Press, Oxford University Press, New York, 1998). To demonstrate the utility of this combined locality and stability result we derive several consequences, including an exponential convergence rate for the thermodynamic limit, partition of total energy into exponentially localised site energies (and consequently, exponential locality of forces), and generalised and strengthened results on the charge neutrality of local defects.
A demographic study of the exponential distribution applied to uneven-aged forests
Jeffrey H. Gove
2016-01-01
A demographic approach based on a size-structured version of the McKendrick-Von Foerster equation is used to demonstrate a theoretical link between the population size distribution and the underlying vital rates (recruitment, mortality and diameter growth) for the population of individuals whose diameter distribution is negative exponential. This model supports the...
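The abstract stops short of the model statement; as a sketch of the underlying demographic argument, assuming size-independent mortality m and diameter growth g (an assumption made here for illustration), the steady-state McKendrick-Von Foerster equation gives

```latex
\frac{\partial}{\partial D}\bigl[g\,n(D)\bigr] = -m\,n(D)
\quad\Longrightarrow\quad
n(D) = n(0)\,e^{-(m/g)\,D},
```

i.e., a negative exponential diameter distribution whose rate constant is the mortality-to-growth ratio.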
Exponential Potential versus Dark Matter
1993-10-15
A two parameter exponential potential explains the anomalous kinematics of galaxies and galaxy clusters without need for the myriad ad hoc dark matter models currently in vogue. It also explains much about the scales and structures of galaxies and galaxy clusters while being quite negligible on the scale of the solar system. Keywords: Galaxy, Dark matter, Galaxy cluster, Gravitation, Quantum gravity...
ERIC Educational Resources Information Center
Wood, Justin N.; Wood, Samantha M. W.
2018-01-01
How do newborns learn to recognize objects? According to temporal learning models in computational neuroscience, the brain constructs object representations by extracting smoothly changing features from the environment. To date, however, it is unknown whether newborns depend on smoothly changing features to build invariant object representations.…
An earthquake rate forecast for Europe based on smoothed seismicity and smoothed fault contribution
NASA Astrophysics Data System (ADS)
Hiemer, Stefan; Woessner, Jochen; Basili, Roberto; Wiemer, Stefan
2013-04-01
The main objective of project SHARE (Seismic Hazard Harmonization in Europe) is to develop a community-based seismic hazard model for the Euro-Mediterranean region. The logic tree of earthquake rupture forecasts comprises several methodologies including smoothed seismicity approaches. Smoothed seismicity thus represents an alternative concept to express the degree of spatial stationarity of seismicity and provides results that are more objective, reproducible, and testable. Nonetheless, the smoothed-seismicity approach suffers from the common drawback of being generally based on earthquake catalogs alone, i.e. the wealth of knowledge from geology is completely ignored. We present a model that applies the kernel-smoothing method to both past earthquake locations and slip rates on mapped crustal faults and subductions. The result is mainly driven by the data, being independent of subjective delineation of seismic source zones. The core parts of our model are two distinct location probability densities: The first is computed by smoothing past seismicity (using variable kernel smoothing to account for varying data density). The second is obtained by smoothing fault moment rate contributions. The fault moment rates are calculated by summing the moment rate of each fault patch on a fully parameterized and discretized fault as available from the SHARE fault database. We assume that the regional frequency-magnitude distribution of the entire study area is well known and estimate the a- and b-value of a truncated Gutenberg-Richter magnitude distribution based on a maximum likelihood approach that considers the spatial and temporal completeness history of the seismic catalog. The two location probability densities are linearly weighted as a function of magnitude assuming that (1) the occurrence of past seismicity is a good proxy to forecast occurrence of future seismicity and (2) future large-magnitude events occur more likely in the vicinity of known faults. Consequently, the underlying location density of our model depends on the magnitude. We scale the density with the estimated a-value in order to construct a forecast that specifies the earthquake rate in each longitude-latitude-magnitude bin. The model is intended to be one branch of SHARE's logic tree of rupture forecasts and provides rates of events in the magnitude range of 5 <= m <= 8.5 for the entire region of interest and is suitable for comparison with other long-term models in the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP).
Shrub growth response to climate across the North Slope of Alaska
NASA Astrophysics Data System (ADS)
Ackerman, D.; Griffin, D.; Finlay, J. C.; Hobbie, S. E.
2016-12-01
Warmer temperatures at high latitudes are driving the expansion of woody shrubs in arctic tundra, yielding feedbacks to regional carbon cycling. Accounting for these feedbacks in global climate models will require accurate predictions of the spatial extent of shrub expansion within arctic tundra. While dendroecological approaches have proven useful in understanding how shrubs respond to climate, empirical studies to date are limited in spatial extent, often to just one or two sites within a landscape. A recent meta-analysis of such dendroecological studies hypothesizes that soil moisture is a key variable in determining climate sensitivity of arctic shrub growth. We present the first regional-scale empirical test of this hypothesis by analyzing inter-annual radial growth of deciduous shrubs across soil moisture gradients throughout the North Slope of Alaska. Contrary to expectation, riparian shrubs in high-moisture environments showed no climate sensitivity, while shrubs growing in drier upland sites showed a strong positive growth response to summer temperature. These results proved robust to a variety of detrending functions ranging from conservative (negative exponential) to data adaptive (20-year cubic smoothing spline). These findings call into question the role of soil moisture in determining the climate sensitivity of arctic shrubs and further highlight the importance of unified, regional-scale sampling strategies in understanding climate-vegetation links.
Mechanisms of fluid production in smooth adhesive pads of insects
Dirks, Jan-Henning; Federle, Walter
2011-01-01
Insect adhesion is mediated by thin fluid films secreted into the contact zone. As the amount of fluid affects adhesive forces, a control of secretion appears probable. Here, we quantify for the first time the rate of fluid secretion in adhesive pads of cockroaches and stick insects. The volume of footprints deposited during consecutive press-downs decreased exponentially and approached a non-zero steady state, demonstrating the presence of a storage volume. We estimated its size and the influx rate into it from a simple compartmental model. Influx was independent of step frequency. Fluid-depleted pads recovered maximal footprint volumes within 15 min. Pads in stationary contact accumulated fluid along the perimeter of the contact zone. The initial fluid build-up slowed down, suggesting that flow is driven by negative Laplace pressure. Freely climbing stick insects left hardly any traceable footprints, suggesting that they save secretion by minimizing contact area or by recovering fluid during detachment. However, even the highest fluid production rates observed incur only small biosynthesis costs, representing less than 1 per cent of the resting metabolic rate. Our results show that fluid secretion in insect wet adhesive systems relies on simple physical principles, allowing for passive control of fluid volume within the contact zone. PMID:21208970
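The reported decay of footprint volume toward a non-zero steady state amounts to a three-parameter exponential model that is straightforward to fit. A minimal sketch with invented volumes standing in for the measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def volume(n, v_ss, v0, k):
    """Footprint volume after n consecutive press-downs: exponential
    decay from v0 toward the non-zero steady state v_ss."""
    return v_ss + (v0 - v_ss) * np.exp(-k * n)

n = np.arange(1, 11)
v = np.array([9.8, 7.1, 5.6, 4.3, 3.8, 3.1, 2.9, 2.7, 2.6, 2.5])  # illustrative

(v_ss, v0, k), _ = curve_fit(volume, n, v, p0=(2.0, 10.0, 0.5))
print(f"steady state ~ {v_ss:.2f}, decay rate ~ {k:.2f} per step")
```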
Automated time series forecasting for biosurveillance.
Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit
2007-09-30
For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
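As an illustration of the Holt-Winters step, the sketch below applies generalized exponential smoothing to a synthetic daily count series with a day-of-week effect and forms the residuals that would feed a control chart. Using statsmodels here is my choice, not the authors' implementation:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
days = pd.date_range("2006-01-01", periods=365, freq="D")
weekly = 10 + 4 * np.sin(2 * np.pi * days.dayofweek / 7)   # day-of-week effect
counts = pd.Series(rng.poisson(weekly), index=days)         # synthetic counts

fit = ExponentialSmoothing(counts, trend="add",
                           seasonal="add", seasonal_periods=7).fit()
residuals = counts - fit.fittedvalues   # input for the detection algorithm
print(residuals.autocorr(lag=1))        # small if systematic behaviour is removed
```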
Human population and atmospheric carbon dioxide growth dynamics: Diagnostics for the future
NASA Astrophysics Data System (ADS)
Hüsler, A. D.; Sornette, D.
2014-10-01
We analyze the growth rates of human population and of atmospheric carbon dioxide by comparing the relative merits of two benchmark models, the exponential law and the finite-time-singular (FTS) power law. The latter results from positive feedbacks, either direct or mediated by other dynamical variables, as shown in our presentation of a simple endogenous macroeconomic dynamical growth model describing the growth dynamics of coupled processes involving human population (labor in economic terms), capital and technology (proxied by CO2 emissions). Human population in the context of our energy intensive economies constitutes arguably the most important underlying driving variable of the content of carbon dioxide in the atmosphere. Using some of the best databases available, we perform empirical analyses confirming that the human population on Earth grew super-exponentially until the mid-1960s, followed by a decelerated sub-exponential growth, with a tendency to plateau at just exponential growth in the last decade with an average growth rate of 1.0% per year. In contrast, we find that the content of carbon dioxide in the atmosphere continued to accelerate super-exponentially until 1990, with a transition to a progressive deceleration since then, with an average growth rate of approximately 2% per year in the last decade. To go back to CO2 atmosphere contents equal to or smaller than the level of 1990, as has been the broadly advertised goal of international treaties since 1990, requires herculean changes: from a dynamical point of view, the approximately exponential growth must turn not only to negative acceleration but also to negative velocity to reverse the trend.
The impact of accelerating faster than exponential population growth on genetic variation.
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-03-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models' effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times.
Tosun, İsmail
2012-01-01
The adsorption isotherm, the adsorption kinetics, and the thermodynamic parameters of ammonium removal from aqueous solution by clinoptilolite were investigated in this study. Experimental data obtained from batch equilibrium tests have been analyzed by four two-parameter (Freundlich, Langmuir, Tempkin and Dubinin-Radushkevich (D-R)) and four three-parameter (Redlich-Peterson (R-P), Sips, Toth and Khan) isotherm models. The D-R and R-P isotherms were the models that best fitted the experimental data over the other two- and three-parameter models applied. The adsorption energy (E) from the D-R isotherm was found to be approximately 7 kJ/mol for the ammonium-clinoptilolite system, indicating that ammonium is adsorbed on clinoptilolite by physisorption. Kinetic parameters were determined by analyzing the nth-order kinetic model, the modified second-order model and the double exponential model; each model resulted in a coefficient of determination (R2) above 0.989 with an average relative error lower than 5%. The double exponential model (DEM) showed that the adsorption process develops in two stages, a rapid phase and a slow phase. Changes in standard free energy (∆G°), enthalpy (∆H°) and entropy (∆S°) of the ammonium-clinoptilolite system were estimated by using the thermodynamic equilibrium coefficients. PMID:22690177
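The double exponential kinetic model favored here can be fitted directly. A minimal sketch under a common DEM parameterization, q(t) = q_e - D1*exp(-k1*t) - D2*exp(-k2*t), with invented uptake data; the parameter names are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def dem(t, qe, d1, k1, d2, k2):
    """Double exponential model: rapid (k1) and slow (k2) uptake phases."""
    return qe - d1 * np.exp(-k1 * t) - d2 * np.exp(-k2 * t)

t = np.array([1, 2, 5, 10, 20, 40, 60, 120], float)     # min (illustrative)
q = np.array([2.1, 3.0, 4.4, 5.6, 6.7, 7.5, 7.8, 8.1])  # mg/g (illustrative)

p0 = (8.2, 4.0, 0.5, 4.0, 0.02)
(qe, d1, k1, d2, k2), _ = curve_fit(dem, t, q, p0=p0, maxfev=10000)
print(f"q_e ~ {qe:.2f} mg/g, fast k1 ~ {k1:.3f}, slow k2 ~ {k2:.4f} 1/min")
```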
Sorption isotherm characteristics of aonla flakes.
Alam, Md Shafiq; Singh, Amarjit
2011-06-01
The equilibrium moisture content was determined for un-osmosed and osmosed (salt osmosed and sugar osmosed) aonla flakes using the static method at temperatures of 25, 40, 50, 60 and 70 °C over a range of relative humidities from 20 to 90%. The sorption capacity of aonla decreased with an increase in temperature at constant water activity. The sorption isotherms exhibited hysteresis, in which the equilibrium moisture content at a given equilibrium relative humidity was higher for the desorption curve than for adsorption. The hysteresis effect was more pronounced for un-osmosed and salt osmosed samples than for sugar osmosed samples. Five models, namely the modified Chung-Pfost, modified Halsey, modified Henderson, modified Exponential and Guggenheim-Anderson-de Boer (GAB) models, were evaluated to determine the best fit for the experimental data. For both the adsorption and desorption processes, the equilibrium moisture content of un-osmosed and osmosed aonla samples was predicted well by the GAB model as well as the modified Exponential model. Moreover, the modified Exponential model was found to be the best for describing the sorption behaviour of un-osmosed and salt osmosed samples, while the GAB model was best for sugar osmosed aonla samples.
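For readers who want to reproduce this kind of isotherm fitting, here is a minimal sketch of fitting the GAB model to equilibrium moisture data; only the functional form is taken from the literature, and the data points are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, m0, c, k):
    """Guggenheim-Anderson-de Boer isotherm: equilibrium moisture content
    as a function of water activity aw; m0 is the monolayer moisture."""
    return m0 * c * k * aw / ((1 - k * aw) * (1 - k * aw + c * k * aw))

aw = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
emc = np.array([4.1, 5.0, 6.2, 7.6, 9.5, 12.4, 17.0, 26.3])  # % d.b., illustrative

(m0, c, k), _ = curve_fit(gab, aw, emc, p0=(5.0, 10.0, 0.9))
print(f"monolayer ~ {m0:.1f} % d.b., C ~ {c:.1f}, K ~ {k:.2f}")
```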
NASA Astrophysics Data System (ADS)
Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.
2018-03-01
Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.
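The gamma fit to the Lagrangian velocity distribution mentioned above is a one-liner with scipy; a minimal sketch with synthetic velocity magnitudes standing in for the particle-tracking data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# stand-in for measured Lagrangian velocity magnitudes at one time point
v = rng.gamma(shape=1.8, scale=0.6, size=20000)

# fit a gamma distribution (location pinned at zero) and report its moments
shape, loc, scale = stats.gamma.fit(v, floc=0)
print(f"shape ~ {shape:.2f}, scale ~ {scale:.2f}, "
      f"mean ~ {shape * scale:.2f}, variance ~ {shape * scale**2:.2f}")
```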
NASA Technical Reports Server (NTRS)
Petot, D.; Loiseau, H.
1982-01-01
Unsteady aerodynamic methods adopted for the study of aeroelasticity in helicopters are considered, with a focus on the development of a semiempirical model of unsteady aerodynamic forces acting on an oscillating profile at high incidence. The successive smoothing algorithm described yields the model's coefficients in a very satisfactory manner.
Background: Simulation studies have previously demonstrated that time-series analyses using smoothing splines correctly model null health-air pollution associations. Methods: We repeatedly simulated season, meteorology and air quality for the metropolitan area of Atlanta from cyc...
Relaxation dynamics of multilayer triangular Husimi cacti
NASA Astrophysics Data System (ADS)
Galiceanu, Mircea; Jurjiu, Aurel
2016-09-01
We focus on the relaxation dynamics of multilayer polymer structures having, as underlying topology, the Husimi cactus. The relaxation dynamics of the multilayer structures is investigated in the framework of the generalized Gaussian structures model using both Rouse and Zimm approaches. In the Rouse-type approach, we determine analytically the complete eigenvalue spectrum and, based on it, we calculate the mechanical relaxation moduli (storage and loss modulus) and the average monomer displacement. First, we monitor these physical quantities for structures with a fixed generation number and we increase the number of layers, such that the linear topology will smoothly come into play. Second, we keep constant the size of the structures, varying simultaneously two parameters: the generation number of the main layer, G, and the number of layers, c. This fact allows us to study in detail the crossover from a pure Husimi cactus behavior to a predominately linear chain behavior. The most interesting situation is found when the two limiting topologies cancel each other. For this case, we encounter in the intermediate frequency/time domain regions of constant slope for different values of the parameter set (G, c) and we show that the number of layers follows an exponential law in G. In the Zimm-type approach, which includes the hydrodynamic interactions, the quantities that describe the mechanical relaxation dynamics do not show scaling behavior as in the Rouse model, except in the limiting case, namely, a very high number of layers and a low generation number.
Surface Wave Tomography with Spatially Varying Smoothing Based on Continuous Model Regionalization
NASA Astrophysics Data System (ADS)
Liu, Chuanming; Yao, Huajian
2017-03-01
Surface wave tomography based on continuous regionalization of model parameters is widely used to invert for 2-D phase or group velocity maps. An inevitable problem is that the distribution of ray paths is far from homogeneous due to the spatially uneven distribution of stations and seismic events, which often affects the spatial resolution of the tomographic model. We present an improved tomographic method with a spatially varying smoothing scheme that is based on the continuous regionalization approach. The smoothness of the inverted model is constrained by the Gaussian a priori model covariance function with spatially varying correlation lengths based on ray path density. In addition, a two-step inversion procedure is used to suppress the effects of data outliers on tomographic models. Both synthetic and real data are used to evaluate this newly developed tomographic algorithm. In the synthetic tests, when the contrived model has different scales of anomalies but uneven ray path distribution, we compare the performance of our spatially varying smoothing method with the traditional inversion method, and show that the new method is capable of improving the recovery in regions of dense ray sampling. For real data applications, the resulting phase velocity maps of Rayleigh waves in SE Tibet produced using the spatially varying smoothing method show similar features to the results with the traditional method. However, the new results contain more detailed structures and appear to better resolve the amplitude of anomalies. From both synthetic and real data tests, we demonstrate that our new approach is useful for achieving spatially varying resolution in regions with heterogeneous ray path distribution.
Yang, Shiju; Li, Chuandong; Huang, Tingwen
2016-03-01
The problem of exponential stabilization and synchronization for a fuzzy model of memristive neural networks (MNNs) is investigated by using periodically intermittent control in this paper. Based on the knowledge of memristors and recurrent neural networks, the model of MNNs is formulated. Some novel and useful stabilization criteria and synchronization conditions are then derived by using Lyapunov functionals and differential inequality techniques. It is worth noting that the methods used in this paper can also be applied to fuzzy models of complex networks and general neural networks. Numerical simulations are also provided to verify the effectiveness of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Analysis of Dibenzothiophene Desulfurization in a Recombinant Pseudomonas putida Strain▿
Calzada, Javier; Zamarro, María T.; Alcón, Almudena; Santos, Victoria E.; Díaz, Eduardo; García, José L.; Garcia-Ochoa, Felix
2009-01-01
Biodesulfurization was monitored in a recombinant Pseudomonas putida CECT5279 strain. DszB desulfinase activity reached a sharp maximum at the early exponential phase, but it rapidly decreased at later growth phases. A model two-step resting-cell process combining sequentially P. putida cells from the late and early exponential growth phases was designed to significantly increase biodesulfurization. PMID:19047400
Erik A. Lilleskov
2017-01-01
Fungal respiration contributes substantially to ecosystem respiration, yet its field temperature response is poorly characterized. I hypothesized that at diurnal time scales, temperature-respiration relationships would be better described by unimodal than exponential models, and at longer time scales both Q10 and mass-specific respiration at 10 °...
NASA Astrophysics Data System (ADS)
Wen, Zhang; Zhan, Hongbin; Wang, Quanrong; Liang, Xing; Ma, Teng; Chen, Chen
2017-05-01
Actual field pumping tests often involve variable pumping rates, which cannot be handled by the classical constant-rate or constant-head test models and often require a convolution process to interpret the test data. In this study, we proposed a semi-analytical model considering an exponentially decreasing pumping rate that starts at a certain (higher) rate and eventually stabilizes at a certain (lower) rate, for cases with or without wellbore storage. A striking new feature of the pumping test with an exponentially decayed rate is that the drawdowns will decrease over a certain period of time during the intermediate pumping stage, which has never been seen in constant-rate or constant-head pumping tests. It was found that the drawdown-time curve associated with an exponentially decayed pumping rate function is bounded by the two asymptotic curves of constant-rate tests with rates equal to the starting and stabilizing rates, respectively. The wellbore storage must be considered for a pumping test without an observation well (single-well test). Based on these characteristics of the time-drawdown curve, we developed a new method to estimate the aquifer parameters by using a genetic algorithm.
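The decaying-rate test can be explored numerically by superposing constant-rate (Theis) responses over small rate steps, which is the usual convolution idea the authors mention. A minimal sketch for a confined aquifer without wellbore storage; all parameter values are invented, and scipy's exp1 plays the role of the Theis well function W(u):

```python
import numpy as np
from scipy.special import exp1   # Theis well function W(u) = E1(u)

T, S, r = 500.0, 1e-4, 30.0       # transmissivity (m^2/d), storativity, radius (m)
Q0, Qs, lam = 2000.0, 800.0, 1.5  # start/stabilized rates (m^3/d), decay (1/d)

def rate(t):
    """Exponentially decaying pumping rate: Q0 at t=0, -> Qs as t -> inf."""
    return Qs + (Q0 - Qs) * np.exp(-lam * t)

def drawdown(t, n_steps=400):
    """Approximate the variable rate by small steps and superpose Theis
    responses for each rate increment (no wellbore storage)."""
    ti = np.linspace(0.0, t, n_steps, endpoint=False)
    dq = np.diff(np.concatenate(([0.0], rate(ti))))   # rate increments
    u = r**2 * S / (4.0 * T * (t - ti))
    return np.sum(dq * exp1(u)) / (4.0 * np.pi * T)

for t in (0.1, 0.5, 2.0, 10.0):
    print(f"t = {t:5.1f} d, s = {drawdown(t):.3f} m")
```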
Porto, Markus; Roman, H Eduardo
2002-04-01
We consider autoregressive conditional heteroskedasticity (ARCH) processes in which the variance σ²(y) depends linearly on the absolute value of the random variable y, i.e. σ²(y) = a + b|y|. While for the standard model, where σ²(y) = a + b y², the corresponding probability distribution function (PDF) P(y) decays as a power law for |y| → ∞, in the linear case it decays exponentially, P(y) ≈ exp(−α|y|), with α = 2/b. We extend these results to the more general case σ²(y) = a + b|y|^q, with 0 < q < 2. We find stretched exponential decay for 1 < q < 2 and stretched Gaussian behavior for 0 < q < 1. As an application, we consider the case q = 1 as our starting scheme for modeling the PDF of daily (logarithmic) variations in the Dow Jones stock market index. When the history of the ARCH process is taken into account, the resulting PDF becomes a stretched exponential even for q = 1, with a stretched exponent β = 2/3, in much better agreement with the empirical data.
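The exponential tail of the linear-variance case is easy to check by simulation. A minimal sketch that generates the ARCH process with σ²(y) = a + b|y| and estimates the tail exponent for comparison with the paper's α = 2/b:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, n = 1.0, 1.0, 100_000
y = np.zeros(n)
for t in range(1, n):
    sigma2 = a + b * abs(y[t - 1])        # variance linear in |y|, not y**2
    y[t] = np.sqrt(sigma2) * rng.standard_normal()

# An exponential tail P(|y| > x) ~ exp(-alpha*x) is a straight line on a
# semilog plot; estimate the slope between x = 2 and x = 4.
ys = np.sort(np.abs(y))
surv = 1.0 - np.arange(1, n + 1) / n
p_lo = surv[np.searchsorted(ys, 2.0)]
p_hi = surv[np.searchsorted(ys, 4.0)]
alpha_hat = -(np.log(p_hi) - np.log(p_lo)) / 2.0
print(f"empirical tail exponent ~ {alpha_hat:.2f} (theory: 2/b = {2.0/b:.2f})")
```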
Statistical modeling of storm-level Kp occurrences
Remick, K.J.; Love, J.J.
2006-01-01
We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events occur statistically with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding these Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
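The declustered-wait-time analysis can be mimicked in a few lines. A minimal sketch that draws synthetic wait times with the reported Kp ≥ 6 mean and applies a Kolmogorov-Smirnov check for exponentiality (the real analysis fits the observed durations instead):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# stand-in for declustered storm wait times with the reported mean
waits = rng.exponential(scale=16.55, size=300)   # days between Kp >= 6 storms

# For a Poisson arrival process the waits should be exponential; a quick
# K-S check (ignoring the small bias from estimating the scale first):
d, p = stats.kstest(waits, "expon", args=(0.0, waits.mean()))
print(f"mean wait ~ {waits.mean():.1f} days, KS statistic {d:.3f}, p = {p:.2f}")
```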
NASA Technical Reports Server (NTRS)
Lindner, Bernhard Lee; Ackerman, Thomas P.; Pollack, James B.
1990-01-01
CO2 comprises 95 percent of the composition of the Martian atmosphere. However, the Martian atmosphere also has a high aerosol content. Dust particle sizes vary from less than 0.2 to greater than 3.0 μm. CO2 is an active absorber and emitter in near-IR and IR wavelengths; the near-IR absorption bands of CO2 provide significant heating of the atmosphere, and the 15 micron band provides rapid cooling. Including both CO2 and aerosol radiative transfer simultaneously in a model is difficult. Aerosol radiative transfer requires a multiple scattering code, while CO2 radiative transfer must deal with complex wavelength structure. As an alternative to the pure-atmosphere treatment in most models, which causes inaccuracies, a treatment called the exponential-sum or k-distribution approximation was developed. The chief advantage of the exponential-sum approach is that the integration of f(k) over k space can be computed more quickly than the integration of k_ν over frequency. The exponential-sum approach is superior to the photon path distribution and emissivity techniques for dusty conditions. This study was the first application of the exponential-sum approach to Martian conditions.
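The computational advantage of the exponential-sum (k-distribution) idea can be demonstrated on a synthetic absorption spectrum: after reordering the absorption coefficients into their cumulative distribution, a handful of exponential terms approximates the band transmittance. A minimal sketch with an invented lognormal spectrum rather than real CO2 line data:

```python
import numpy as np

rng = np.random.default_rng(3)
k_nu = np.exp(rng.normal(0.0, 2.0, size=5000))  # synthetic spectrum k_nu

u = 0.3                                          # absorber amount (arbitrary units)
t_exact = np.mean(np.exp(-k_nu * u))             # frequency-by-frequency integration

# k-distribution: reorder k into the smooth cumulative variable g and
# integrate exp(-k(g)*u) over g with only a few terms
g_nodes = np.linspace(0.5 / 8, 1.0 - 0.5 / 8, 8)  # 8 midpoint nodes in g
k_nodes = np.quantile(k_nu, g_nodes)
t_ksum = np.mean(np.exp(-k_nodes * u))            # equal midpoint weights

print(f"direct integration: {t_exact:.4f}, 8-term exponential sum: {t_ksum:.4f}")
```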
Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.
2013-01-01
Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. The effect of population size is particularly strong in small populations with 100 individuals or less; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
NASA Astrophysics Data System (ADS)
Murru, M.; Falcone, G.; Taroni, M.; Console, R.
2017-12-01
In 2015 the Italian Department of Civil Protection started a project for upgrading the official Italian seismic hazard map (MPS04), inviting the Italian scientific community to participate in a joint effort for its realization. We participated by providing spatially variable time-independent (Poisson) long-term annual occurrence rates of seismic events on the entire Italian territory, considering cells of 0.1°x0.1° from M4.5 up to M8.1 for magnitude bins of 0.1 units. Our final model was composed of two different models merged with equal weights into one ensemble model: the first was realized by a smoothed seismicity approach, the second using the seismogenic faults. The spatially smoothed seismicity was obtained using the smoothing method introduced by Frankel (1995) applied to the historical and instrumental seismicity. In this approach we adopted a tapered Gutenberg-Richter relation with a b-value fixed to 1 and a corner magnitude estimated from the largest events in the catalogs. For each seismogenic fault provided by the Database of Individual Seismogenic Sources (DISS), we computed the annual rate (for each cell of 0.1°x0.1°) for magnitude bins of 0.1 units, assuming that the seismic moments of the earthquakes generated by each fault are distributed according to the same tapered Gutenberg-Richter relation as the smoothed seismicity model. The annual rate for the final model was determined in the following way: if a cell falls within one of the seismic sources, we merge, with equal weights, the rate determined from the seismic moments of the earthquakes generated by the fault and the rate from the smoothed seismicity model; if instead the cell falls outside any seismic source, we use the rate obtained from the spatially smoothed seismicity. Here we present the final results of our study, to be used for the new Italian seismic hazard map.
Kennedy, Kristen M.; Rodrigue, Karen M.; Lindenberger, Ulman; Raz, Naftali
2010-01-01
The effects of advanced age and cognitive resources on the course of skill acquisition are unclear, and discrepancies among studies may reflect limitations of data analytic approaches. We applied a multilevel negative exponential model to skill acquisition data from 80 trials (four 20-trial blocks) of a pursuit rotor task administered to healthy adults (19–80 years old). The analyses conducted at the single-trial level indicated that the negative exponential function described performance well. Learning parameters correlated with measures of task-relevant cognitive resources on all blocks except the last and with age on all blocks after the second. Thus, age differences in motor skill acquisition may evolve in 2 phases: In the first, age differences are collinear with individual differences in task-relevant cognitive resources; in the second, age differences orthogonal to these resources emerge. PMID:20047985
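A single-subject version of the negative exponential model is simple to fit; the paper's multilevel (random-effects) structure is omitted here. A minimal sketch with simulated pursuit-rotor performance:

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_exp(trial, asymptote, gain, rate):
    """Negative exponential learning curve: performance rises from
    (asymptote - gain) toward asymptote at a trial-wise learning rate."""
    return asymptote - gain * np.exp(-rate * trial)

trials = np.arange(1, 81)                                    # 80 trials
perf = neg_exp(trials, 22.0, 15.0, 0.08) + \
       np.random.default_rng(4).normal(0, 1.0, trials.size)  # illustrative subject

(asym, gain, rate), _ = curve_fit(neg_exp, trials, perf, p0=(20, 10, 0.1))
print(f"asymptote ~ {asym:.1f} s on target, learning rate ~ {rate:.3f}/trial")
```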
Using phenomenological models for forecasting the 2015 Ebola challenge.
Pell, Bruce; Kuang, Yang; Viboud, Cecile; Chowell, Gerardo
2018-03-01
The rising number of novel pathogens threatening the human population has motivated the application of mathematical modeling for forecasting the trajectory and size of epidemics. We summarize the real-time forecasting results of the logistic equation during the 2015 Ebola challenge focused on predicting synthetic data derived from a detailed individual-based model of Ebola transmission dynamics and control. We also carry out a post-challenge comparison of two simple phenomenological models. In particular, we systematically compare the logistic growth model and a recently introduced generalized Richards model (GRM) that captures a range of early epidemic growth profiles ranging from sub-exponential to exponential growth. Specifically, we assess the performance of each model for estimating the reproduction number, generate short-term forecasts of the epidemic trajectory, and predict the final epidemic size. During the challenge the logistic equation consistently underestimated the final epidemic size, peak timing and the number of cases at peak timing with an average mean absolute percentage error (MAPE) of 0.49, 0.36 and 0.40, respectively. Post-challenge, the GRM, which has the flexibility to reproduce a range of epidemic growth profiles ranging from early sub-exponential to exponential growth dynamics, outperformed the logistic growth model in ascertaining the final epidemic size as more incidence data was made available, while the logistic model underestimated the final epidemic size even with an increasing amount of data on the evolving epidemic. Incidence forecasts provided by the generalized Richards model performed better across all scenarios and time points than the logistic growth model, with mean RMS decreasing from 78.00 (logistic) to 60.80 (GRM). Both models provided reasonable predictions of the effective reproduction number, but the GRM slightly outperformed the logistic growth model with a MAPE of 0.08 compared to 0.10, averaged across all scenarios and time points. Our findings further support the consideration of transmission models that incorporate flexible early epidemic growth profiles in the forecasting toolkit. Such models are particularly useful for quickly evaluating a developing infectious disease outbreak using only the case incidence time series from its early phase. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
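For concreteness, the generalized Richards model is a one-line ODE; the sketch below integrates it and extracts daily incidence for comparison with reported cases. Parameter values are invented, with p < 1 giving the sub-exponential early growth discussed above:

```python
import numpy as np
from scipy.integrate import solve_ivp

def grm(t, c, r, p, a, K):
    """Generalized Richards model: p < 1 gives sub-exponential early growth."""
    return r * c**p * (1.0 - (c / K) ** a)

r, p, a, K = 0.45, 0.85, 1.0, 10000.0      # illustrative parameters
sol = solve_ivp(grm, (0, 120), [5.0], args=(r, p, a, K),
                t_eval=np.linspace(0, 120, 121))

cum = sol.y[0]                 # cumulative case count
incidence = np.diff(cum)       # daily new cases to compare against reports
print(f"final size ~ {cum[-1]:.0f}, peak incidence on day {incidence.argmax() + 1}")
```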
NASA Technical Reports Server (NTRS)
Cogley, A. C.; Borucki, W. J.
1976-01-01
When incorporating formulations of instantaneous solar heating or photolytic rates as functions of altitude and sun angle into long range forecasting models, it may be desirable to replace the time integrals by daily average rates that are simple functions of latitude and season. This replacement is accomplished by approximating the integral over the solar day by a pure exponential. This gives a daily average rate as a multiplication factor times the instantaneous rate evaluated at an appropriate sun angle. The accuracy of the exponential approximation is investigated by a sample calculation using an instantaneous ozone heating formulation available in the literature.
Count distribution for mixture of two exponentials as renewal process duration with applications
NASA Astrophysics Data System (ADS)
Low, Yeh Ching; Ong, Seng Huat
2016-06-01
A count distribution is presented by considering a renewal process in which the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and of the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
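The overdispersion of this count model is easy to see by simulation. A minimal sketch that draws renewal durations from a two-component exponential mixture and compares the count variance with the Poisson benchmark (variance = mean); all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

def hyperexp_wait(w=0.3, scale_fast=0.5, scale_slow=5.0):
    """Duration drawn from a two-component mixture of exponentials."""
    scale = scale_fast if rng.random() < w else scale_slow
    return rng.exponential(scale)

def renewal_count(t_end=50.0):
    """Number of renewals in (0, t_end] with hyperexponential durations."""
    t, count = 0.0, 0
    while True:
        t += hyperexp_wait()
        if t > t_end:
            return count
        count += 1

counts = np.array([renewal_count() for _ in range(5000)])
print(f"mean {counts.mean():.2f}, variance {counts.var():.2f} "
      "(variance > mean: overdispersed relative to Poisson)")
```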
Khan, Junaid Ahmad; Mustafa, M.; Hayat, T.; Sheikholeslami, M.; Alsaedi, A.
2015-01-01
This work deals with the three-dimensional flow of nanofluid over a bi-directional exponentially stretching sheet. The effects of Brownian motion and thermophoretic diffusion of nanoparticles are considered in the mathematical model. The temperature and nanoparticle volume fraction at the sheet are also distributed exponentially. Local similarity solutions are obtained by an implicit finite difference scheme known as Keller-box method. The results are compared with the existing studies in some limiting cases and found in good agreement. The results reveal the existence of interesting Sparrow-Gregg-type hills for temperature distribution corresponding to some range of parametric values. PMID:25785857
Déjardin, P
2013-08-30
The flow conditions in normal mode asymmetric flow field-flow fractionation are determined to approach the high retention limit with the requirement d≪l≪w, where d is the particle diameter, l the characteristic length of the sample exponential distribution and w the channel height. The optimal entrance velocity is determined from the solute characteristics, the channel geometry (exponential to rectangular) and the membrane properties, according to a model providing the velocity fields all over the cell length. In addition, a method is proposed for in situ determination of the channel height. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kamimura, Atsushi; Kaneko, Kunihiko
2018-03-01
Explanation of exponential growth in self-reproduction is an important step toward elucidation of the origins of life because optimization of the growth potential across rounds of selection is necessary for Darwinian evolution. To produce another copy with approximately the same composition, the exponential growth rates for all components have to be equal. How such balanced growth is achieved, however, is not a trivial question, because this kind of growth requires orchestrated replication of the components in stochastic and nonlinear catalytic reactions. By considering a mutually catalyzing reaction in two- and three-dimensional lattices, as represented by a cellular automaton model, we show that self-reproduction with exponential growth is possible only when the replication and degradation of one molecular species is much slower than those of the others, i.e., when there is a minority molecule. Here, the synergetic effect of molecular discreteness and crowding is necessary to produce the exponential growth. Otherwise, the growth curves show superexponential growth because of nonlinearity of the catalytic reactions or subexponential growth due to replication inhibition by overcrowding of molecules. Our study emphasizes that the minority molecular species in a catalytic reaction network is necessary for exponential growth at the primitive stage of life.
Prediction of Unsteady Aerodynamic Coefficients at High Angles of Attack
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Murphy, Patrick C.; Klein, Vladislav; Brandon, Jay M.
2001-01-01
The nonlinear indicial response method is used to model the unsteady aerodynamic coefficients in the low-speed longitudinal oscillatory wind tunnel test data of the 0.1-scale model of the F-16XL aircraft. Exponential functions are used to approximate the deficiency function in the indicial response. Using one set of oscillatory wind tunnel data and a parameter identification method, the unknown parameters in the exponential functions are estimated. A genetic algorithm is used as the least-squares minimization algorithm. The assumed model structures and parameter estimates are validated by comparing the predictions with other sets of available oscillatory wind tunnel test data.
NASA Astrophysics Data System (ADS)
Grobbelaar-Van Dalsen, Marié
2015-08-01
This article is a continuation of our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) on the polynomial stabilization of a linear model for the magnetoelastic interactions in a two-dimensional electrically conducting Mindlin-Timoshenko plate. We introduce nonlinear damping that is effective only in a small portion of the interior of the plate. It turns out that the model is uniformly exponentially stable when the function that represents the locally distributed damping behaves linearly near the origin. However, the use of Mindlin-Timoshenko plate theory in the model enforces a restriction on the region occupied by the plate.
Estimating piecewise exponential frailty model with changing prior for baseline hazard function
NASA Astrophysics Data System (ADS)
Thamrin, Sri Astuti; Lawi, Armin
2016-02-01
Piecewise exponential models provide a very flexible framework for modelling univariate survival data. They can be used to estimate the effects of different covariates on survival. Although in a strict sense it is a parametric model, a piecewise exponential hazard can approximate any shape of a parametric baseline hazard. In the parametric baseline hazard, the hazard function for each individual may depend on a set of risk factors or explanatory variables. However, such a set usually does not include all relevant variables, whether known or measurable, and these omitted variables are interesting to consider. This unknown and unobservable risk factor of the hazard function is often termed the individual's heterogeneity or frailty. This paper analyses the effects of unobserved population heterogeneity in patients' survival times. The issue of model choice through variable selection is also considered. A sensitivity analysis is conducted to assess the influence of the prior for each parameter. We used the Markov chain Monte Carlo method to compute the Bayesian estimator on kidney infection data. The results obtained show that sex and frailty are substantially associated with survival in this study and that the models are quite sensitive to the choice of the two different priors.
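A piecewise exponential model is also easy to simulate, which is useful for checking estimation code. The sketch below inverts the piecewise-linear cumulative hazard; a per-subject gamma frailty multiplying the hazard would add the unobserved heterogeneity studied here. Interval cuts and hazards are invented:

```python
import numpy as np

rng = np.random.default_rng(6)
cuts = np.array([0.0, 1.0, 3.0])        # interval boundaries (years)
haz = np.array([0.20, 0.05, 0.10])      # constant hazard on each interval

def draw_survival_time():
    """Invert the cumulative hazard H(t) at an Exp(1) draw, -log(U)."""
    target = -np.log(rng.random())
    h_acc = 0.0
    for i, h in enumerate(haz):
        width = (cuts[i + 1] - cuts[i]) if i + 1 < len(cuts) else np.inf
        if h_acc + h * width >= target:
            return cuts[i] + (target - h_acc) / h
        h_acc += h * width

times = np.array([draw_survival_time() for _ in range(10000)])
print(f"median survival ~ {np.median(times):.2f} years")
```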
A Novel Method for Modeling Neumann and Robin Boundary Conditions in Smoothed Particle Hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, Emily M.; Tartakovsky, Alexandre M.; Amon, Cristina
2010-08-26
In this paper we present an improved method for handling Neumann or Robin boundary conditions in smoothed particle hydrodynamics. The Neumann and Robin boundary conditions are common to many physical problems (such as heat/mass transfer), and can prove challenging to model in volumetric modeling techniques such as smoothed particle hydrodynamics (SPH). A new SPH method for diffusion type equations subject to Neumann or Robin boundary conditions is proposed. The new method is based on the continuum surface force model [1] and allows an efficient implementation of the Neumann and Robin boundary conditions in the SPH method for geometrically complex boundaries. The paper discusses the details of the method and the criteria needed to apply the model. The model is used to simulate diffusion and surface reactions and its accuracy is demonstrated through test cases for boundary conditions describing different surface reactions.
The estimation of branching curves in the presence of subject-specific random effects.
Elmi, Angelo; Ratcliffe, Sarah J; Guo, Wensheng
2014-12-20
Branching curves are a technique for modeling curves that change trajectory at a change (branching) point. Currently, the estimation framework is limited to independent data, and smoothing splines are used for estimation. This article aims to extend the branching curve framework to the longitudinal data setting where the branching point varies by subject. If the branching point is modeled as a random effect, then the longitudinal branching curve framework is a semiparametric nonlinear mixed effects model. Given existing issues with using random effects within a smoothing spline, we express the model as a B-spline based semiparametric nonlinear mixed effects model. Simple, clever smoothness constraints are enforced on the B-splines at the change point. The method is applied to Women's Health data where we model the shape of the labor curve (cervical dilation measured longitudinally) before and after treatment with oxytocin (a labor stimulant). Copyright © 2014 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Yao, Weiping; Yang, Chaohui; Jing, Jiliang
2018-05-01
From the viewpoint of holography, we study the behavior of the entanglement entropy in the insulator/superconductor transition with exponential nonlinear electrodynamics (ENE). We find that the entanglement entropy is a good probe of the properties of the holographic phase transition. Both in the half space and the belt space, the non-monotonic behavior of the entanglement entropy in the superconducting phase versus the chemical potential is generic in this model. Furthermore, the behavior of the entanglement entropy for the strip geometry shows that the confinement/deconfinement phase transition appears in both the insulator and superconductor phases, and the critical width of this transition depends on the chemical potential and the exponential coupling term. More interestingly, the behavior of the entanglement entropy in the corresponding insulator phases is independent of the exponential coupling factor but depends on the width of the subsystem A.
ERIC Educational Resources Information Center
Moses, Tim; Holland, Paul W.
2010-01-01
In this study, eight statistical strategies were evaluated for selecting the parameterizations of loglinear models for smoothing the bivariate test score distributions used in nonequivalent groups with anchor test (NEAT) equating. Four of the strategies were based on significance tests of chi-square statistics (Likelihood Ratio, Pearson,…
Galka, Andreas; Siniatchkin, Michael; Stephani, Ulrich; Groening, Kristina; Wolff, Stephan; Bosch-Bayard, Jorge; Ozaki, Tohru
2010-12-01
The analysis of time series obtained by functional magnetic resonance imaging (fMRI) may be approached by fitting predictive parametric models, such as nearest-neighbor autoregressive models with exogenous input (NNARX). As a part of the modeling procedure, it is possible to apply instantaneous linear transformations to the data. Spatial smoothing, a common preprocessing step, may be interpreted as such a transformation. The autoregressive parameters may be constrained, such that they provide a response behavior that corresponds to the canonical haemodynamic response function (HRF). We present an algorithm for estimating the parameters of the linear transformations and of the HRF within a rigorous maximum-likelihood framework. Using this approach, an optimal amount of both the spatial smoothing and the HRF can be estimated simultaneously for a given fMRI data set. An example from a motor-task experiment is discussed. It is found that, for this data set, weak, but non-zero, spatial smoothing is optimal. Furthermore, it is demonstrated that activated regions can be estimated within the maximum-likelihood framework.
Resolved spectrophotometric properties of the Ceres surface from Dawn Framing Camera images
NASA Astrophysics Data System (ADS)
Schröder, S. E.; Mottola, S.; Carsenty, U.; Ciarniello, M.; Jaumann, R.; Li, J.-Y.; Longobardo, A.; Palmer, E.; Pieters, C.; Preusker, F.; Raymond, C. A.; Russell, C. T.
2017-05-01
We present a global spectrophotometric characterization of the Ceres surface using Dawn Framing Camera (FC) images. We identify the photometric model that yields the best results for photometrically correcting images. Corrected FC images acquired on approach to Ceres were assembled into global maps of albedo and color. Generally, albedo and color variations on Ceres are muted. The albedo map is dominated by a large, circular feature in Vendimia Planitia, known from HST images (Li et al., 2006), and dotted by smaller bright features mostly associated with fresh-looking craters. The dominant color variation over the surface is represented by the presence of "blue" material in and around such craters, which has a negative spectral slope over the visible wavelength range when compared to average terrain. We also mapped variations of the phase curve by employing an exponential photometric model, a technique previously applied to asteroid Vesta (Schröder et al., 2013b). The surface of Ceres scatters light differently from Vesta in the sense that the ejecta of several fresh-looking craters may be physically smooth rather than rough. High albedo, blue color, and physical smoothness all appear to be indicators of youth. The blue color may result from the desiccation of ejected material that is similar to the phyllosilicates/water ice mixtures in the experiments of Poch et al. (2016). The physical smoothness of some blue terrains would be consistent with an initially liquid condition, perhaps as a consequence of impact melting of subsurface water ice. We find red terrain (positive spectral slope) near Ernutet crater, where De Sanctis et al. (2017) detected organic material. The spectrophotometric properties of the large Vendimia Planitia feature suggest it is a palimpsest, consistent with the Marchi et al. (2016) impact basin hypothesis. The central bright area in Occator crater, Cerealia Facula, is the brightest on Ceres with an average visual normal albedo of about 0.6 at a resolution of 1.3 km per pixel (six times Ceres average). The albedo of fresh, bright material seen inside this area in the highest resolution images (35 m per pixel) is probably around unity. Cerealia Facula has an unusually steep phase function, which may be due to unresolved topography, high surface roughness, or large average particle size. It has a strongly red spectrum whereas the neighboring, less-bright, Vinalia Faculae are neutral in color. We find no evidence for a diurnal ground fog-type haze in Occator as described by Nathues et al. (2015). We can neither reproduce their findings using the same images, nor confirm them using higher resolution images. FC images have not yet offered direct evidence for present sublimation in Occator.
Cost Effective Persistent Regional Surveillance with Reconfigurable Satellite Constellations
2015-04-24
...region where both models show the most agreement and therefore the blended curves (in the bottom plot) are fairly smooth. Additionally, a learning ... payload cost C_pay: C_pay = 38000 D^1.6 + 60615 D^2.67 ($k, FY2010) (Eq. 11). Satellite cost is modeled by blending the output from the Small Satellite Cost Model... SSCM was used for M_d ≤ 400 kg and the USCM8 cost model was used for M_d ≥ 200 kg, and linear blending was used to smooth out the transition between models...
NASA Technical Reports Server (NTRS)
Stenholm, Stig
1993-01-01
A single mode cavity is deformed smoothly to change its electromagnetic eigenfrequency. The system is modeled as a simple harmonic oscillator with a varying period. The Wigner function of the problem is obtained exactly by starting with a squeezed initial state. The result is evaluated for a linear change of the cavity length. The approach to the adiabatic limit is investigated. The maximum squeezing is found to occur for smooth change lasting only a fraction of the oscillational period. However, only a factor of two improvement over the adiabatic result proves to be possible. The sudden limit cannot be investigated meaningfully within the model.
Likelihood testing of seismicity-based rate forecasts of induced earthquakes in Oklahoma and Kansas
Moschetti, Morgan P.; Hoover, Susan M.; Mueller, Charles
2016-01-01
Likelihood testing of induced earthquakes in Oklahoma and Kansas has identified the parameters that optimize the forecasting ability of smoothed seismicity models and quantified the recent temporal stability of the spatial seismicity patterns. Use of the most recent 1-year period of earthquake data and use of 10–20-km smoothing distances produced the greatest likelihood. The likelihood that the locations of January–June 2015 earthquakes were consistent with optimized forecasts decayed with increasing elapsed time between the catalogs used for model development and testing. Likelihood tests with two additional sets of earthquakes from 2014 exhibit a strong sensitivity of the rate of decay to the smoothing distance. Marked reductions in likelihood are caused by the nonstationarity of the induced earthquake locations. Our results indicate a multiple-fold benefit from smoothed seismicity models in developing short-term earthquake rate forecasts for induced earthquakes in Oklahoma and Kansas, relative to the use of seismic source zones.
Mazaheri, Davood; Shojaosadati, Seyed Abbas; Zamir, Seyed Morteza; Mousavi, Seyyed Mohammad
2018-04-21
In this work, mathematical modeling of ethanol production in solid-state fermentation (SSF) was carried out based on the variation in the dry weight of the solid medium. This method was previously used for mathematical modeling of enzyme production; however, the model must be modified to predict the production of a volatile compound like ethanol. The experimental results of bioethanol production from a mixture of carob pods and wheat bran by Zymomonas mobilis in SSF were used for model validation. Exponential and logistic kinetic models were used for modeling the growth of the microorganism. In both cases, the model predictions matched well with the experimental results during the exponential growth phase, indicating the good ability of the solid-medium weight-variation method to model a volatile product in solid-state fermentation. In addition, the logistic model gave better predictions.
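The two kinetic models compared in the paper can be fitted with standard tools. A minimal sketch with invented growth data, fitting the exponential model to the early phase and the logistic model to the whole series:

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, x0, mu):
    """Unrestricted exponential growth."""
    return x0 * np.exp(mu * t)

def logistic(t, x0, mu, xmax):
    """Logistic growth saturating at the carrying capacity xmax."""
    return xmax / (1.0 + (xmax / x0 - 1.0) * np.exp(-mu * t))

t = np.array([0, 6, 12, 18, 24, 30, 36, 48], float)      # h (illustrative)
x = np.array([0.5, 1.1, 2.3, 4.4, 7.0, 9.0, 9.8, 10.1])  # biomass proxy

p_exp, _ = curve_fit(exponential, t[:5], x[:5], p0=(0.5, 0.1))  # early phase only
p_log, _ = curve_fit(logistic, t, x, p0=(0.5, 0.2, 10.0))
print(f"exponential mu ~ {p_exp[1]:.3f}/h, logistic mu ~ {p_log[1]:.3f}/h, "
      f"capacity ~ {p_log[2]:.1f}")
```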
Exponential integration algorithms applied to viscoplasticity
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Walker, Kevin P.
1991-01-01
Four linear exponential integration algorithms (two implicit, one explicit, and one predictor/corrector) are applied to a viscoplastic model to assess their capabilities. Viscoplasticity comprises a system of coupled, nonlinear, stiff, first-order ordinary differential equations which are a challenge to integrate by any means. Two of the algorithms (the predictor/corrector and one of the implicits) give outstanding results, even for very large time steps.
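As a flavor of what "exponential integration" means here, the sketch below implements the simplest member of the family, an integrating-factor (exponential) Euler scheme, on a stiff linear test problem; it illustrates the class of methods, not the specific algorithms assessed in the paper:

```python
import numpy as np

def exp_euler(lam, g, y0, t_end, h):
    """Exponential (integrating-factor) Euler for y' = -lam*y + g(t, y):
    the stiff linear part is integrated exactly; g is frozen over each step."""
    ts = np.arange(0.0, t_end + h, h)
    ys = np.empty_like(ts)
    ys[0] = y0
    decay = np.exp(-lam * h)
    for i in range(len(ts) - 1):
        ys[i + 1] = ys[i] * decay + (1.0 - decay) / lam * g(ts[i], ys[i])
    return ts, ys

# stiff test problem: y' = -1000*(y - cos(t)); the solution hugs cos(t)
ts, ys = exp_euler(1000.0, lambda t, y: 1000.0 * np.cos(t), 0.0, 2.0, 0.01)
print(f"y(2) = {ys[-1]:.4f} vs cos(2) = {np.cos(2.0):.4f} (stable at h >> 1/1000)")
```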
NASA Astrophysics Data System (ADS)
Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric
2014-08-01
In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability, and makes an online learned model of the system available. Most MRAC methods, however, require persistent excitation (PE) of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing online recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.
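A much-simplified, discrete-time sketch of the concurrent-learning idea: recorded regressor/uncertainty pairs are replayed alongside the instantaneous error, and the minimum singular value of the stacked regressors governs the convergence rate. All names, gains, and the recording rule are assumptions, not the paper's algorithm.

```python
import numpy as np

def cl_update(W, phi, e, memory, gamma=1.0, dt=0.02):
    """One adaptation step: instantaneous error plus replayed stored data.

    memory holds (phi_j, delta_j) pairs recorded online; the replayed
    errors use the *current* W, so stored data keep driving convergence
    even without persistent excitation."""
    dW = -phi * e
    for phi_j, delta_j in memory:
        dW += -phi_j * (W @ phi_j - delta_j)
    return W + gamma * dt * dW

def min_singular_value(memory):
    """The convergence rate scales with this value."""
    Phi = np.array([p for p, _ in memory])
    return np.linalg.svd(Phi, compute_uv=False)[-1]

# toy demo: learn Delta(x) = [1, x] @ [2, -1] from briefly exciting data
rng = np.random.default_rng(8)
W, memory = np.zeros(2), []
for _ in range(1000):
    x = rng.uniform(-1.0, 1.0)
    phi, delta = np.array([1.0, x]), 2.0 - x
    if len(memory) < 10:
        memory.append((phi, delta))      # record while data are rich
    W = cl_update(W, phi, W @ phi - delta, memory)
print(W, min_singular_value(memory))     # W -> approx [2, -1]
```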
Smooth random change point models.
van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E
2011-03-15
Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
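One common smooth change point formulation (not necessarily the authors') replaces the broken stick's kink with a softplus transition; the sketch below fits it to hypothetical cognitive-decline data and omits the random effects of the full mixed model.

```python
import numpy as np
from scipy.optimize import curve_fit

def smooth_broken_stick(t, b0, b1, delta, tau, gamma):
    """Two linear phases whose slope changes by delta at tau; gamma sets
    the smoothness of the transition (gamma -> 0 recovers the broken stick)."""
    return b0 + b1 * t + delta * gamma * np.logaddexp(0.0, (t - tau) / gamma)

rng = np.random.default_rng(1)
t = np.linspace(-15.0, 0.0, 60)     # years relative to death (hypothetical)
y = smooth_broken_stick(t, 25.0, -0.1, -1.5, -4.0, 0.8) + rng.normal(0, 0.4, t.size)

popt, _ = curve_fit(smooth_broken_stick, t, y, p0=[25, -0.1, -1.0, -5.0, 1.0])
print("estimated change point (years before death):", popt[3])
```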
Adaptive exponential integrate-and-fire model as an effective description of neuronal activity.
Brette, Romain; Gerstner, Wulfram
2005-11-01
We introduce a two-dimensional integrate-and-fire model that combines an exponential spike mechanism with an adaptation equation, based on recent theoretical findings. We describe a systematic method to estimate its parameters with simple electrophysiological protocols (current-clamp injection of pulses and ramps) and apply it to a detailed conductance-based model of a regular spiking neuron. Our simple model predicts correctly the timing of 96% of the spikes (+/-2 ms) of the detailed model in response to injection of noisy synaptic conductances. The model is especially reliable in high-conductance states, typical of cortical activity in vivo, in which intrinsic conductances were found to have a reduced role in shaping spike trains. These results are promising because this simple model has enough expressive power to reproduce qualitatively several electrophysiological classes described in vitro.
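A minimal forward-Euler simulation of the adaptive exponential integrate-and-fire equations; the parameter values follow a commonly quoted regular-spiking fit but should be treated as assumptions here.

```python
import numpy as np

def adex_sim(I, dt=0.1, T=500.0, C=281.0, gL=30.0, EL=-70.6, VT=-50.4,
             DeltaT=2.0, tauw=144.0, a=4.0, b=80.5, Vreset=-70.6, Vspike=20.0):
    """AdEx neuron, units pF/nS/mV/ms/pA; I is the injected current."""
    V, w, spikes = EL, 0.0, []
    for i in range(int(T / dt)):
        dV = (-gL * (V - EL) + gL * DeltaT * np.exp((V - VT) / DeltaT) - w + I) / C
        dw = (a * (V - EL) - w) / tauw
        V, w = V + dt * dV, w + dt * dw
        if V >= Vspike:               # spike: reset V, jump the adaptation w
            spikes.append(i * dt)
            V, w = Vreset, w + b
    return np.array(spikes)

print(len(adex_sim(I=700.0)), "spikes in 500 ms")   # 700 pA step current
```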
Vortex equations: Singularities, numerical solution, and axisymmetric vortex breakdown
NASA Technical Reports Server (NTRS)
Bossel, H. H.
1972-01-01
A method of weighted residuals for the computation of rotationally symmetric quasi-cylindrical viscous incompressible vortex flow is presented and used to compute a wide variety of vortex flows. The method approximates the axial velocity and circulation profiles by series of exponentials having (N + 1) and N free parameters, respectively. Formal integration results in a set of (2N + 1) ordinary differential equations for the free parameters. The governing equations are shown to have an infinite number of discrete singularities corresponding to critical values of the swirl parameters. The computations point to the controlling influence of the inner core flow on vortex behavior. They also confirm the existence of two particular critical swirl parameter values: one separates vortex flow which decays smoothly from vortex flow which eventually breaks down, and the second is the first singularity of the quasi-cylindrical system, at which point physical vortex breakdown is thought to occur.
Shock wave refraction enhancing conditions on an extended interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markhotok, A.; Popovic, S.
2013-04-15
We determined the law of shock wave refraction for a class of extended interfaces with continuously variable gradients. When the interface is extended or when the gas parameters vary fast enough, the interface cannot be considered as sharp or smooth and the existing calculation methods cannot be applied. The expressions we derived are general enough to cover all three types of the interface and are valid for any law of continuously varying parameters. We apply the equations to the case of exponentially increasing temperature on the boundary and compare the results for all three types of interfaces. We have demonstrated that the type of interface can increase or inhibit the shock wave refraction. Our findings can be helpful in understanding the results obtained in energy deposition experiments as well as for controlling the shock-plasma interaction in other settings.
Geographic analysis of road accident severity index in Nigeria.
Iyanda, Ayodeji E
2018-05-28
Before 2030, deaths from road traffic accidents (RTAs) are projected to surpass those from cerebrovascular disease, tuberculosis, and HIV/AIDS. Yet there is little knowledge of the geographic distribution of RTA severity in Nigeria. The accident severity index is the proportion of deaths resulting from road accidents. This study analysed the geographic pattern of RTA severity based on data retrieved from the Federal Road Safety Corps (FRSC). The study produced a two-year forecast from historic road accident data using an exponential smoothing technique. To determine spatial autocorrelation, global and local indicators of spatial association were implemented in a geographic information system. Results show significant clusters of high RTA severity among states in the northeast and northwest of Nigeria. The findings are discussed from two perspectives: road traffic law compliance and poor emergency response. In conclusion, RTA severity is high in the northern states of Nigeria; hence, RTA remains a public health concern.
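For reference, simple exponential smoothing, presumably the family of technique used for the two-year prediction, reduces to a few lines; the severity values below are hypothetical, and the flat multi-step forecast is a property of the simple (no-trend) variant.

```python
def ses_forecast(series, alpha=0.3):
    """Simple exponential smoothing; returns the one-step-ahead forecast.
    Smaller alpha smooths more heavily; the forecast is flat for all horizons."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1.0 - alpha) * level
    return level

severity = [18.2, 17.5, 19.1, 20.3, 19.8, 21.0]   # hypothetical annual index
print("forecast for each of the next two years:", round(ses_forecast(severity), 2))
```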
Local Stable and Unstable Manifolds and Their Control in Nonautonomous Finite-Time Flows
NASA Astrophysics Data System (ADS)
Balasuriya, Sanjeeva
2016-08-01
It is well known that stable and unstable manifolds strongly influence fluid motion in unsteady flows. These emanate from hyperbolic trajectories, with the structures moving nonautonomously in time. The local directions of emanation at each instant in time are the focus of this article. Within a nearly autonomous setting, it is shown that these time-varying directions can be characterised through the accumulated effect of velocity shear. Connections to Oseledets spaces and projection operators in exponential dichotomies are established. Availability of data for both infinite- and finite-time intervals is considered. With microfluidic flow control in mind, a methodology for manipulating these directions in any prescribed time-varying fashion by applying a local velocity shear is developed. The results are verified for both smoothly and discontinuously time-varying directions using finite-time Lyapunov exponent fields, and excellent agreement is obtained.
Short-term leprosy forecasting from an expert opinion survey.
Deiner, Michael S; Worden, Lee; Rittel, Alex; Ackley, Sarah F; Liu, Fengchen; Blum, Laura; Scott, James C; Lietman, Thomas M; Porco, Travis C
2017-01-01
We conducted an expert survey of leprosy (Hansen's Disease) and neglected tropical disease experts in February 2016. Experts were asked to forecast the next year of reported cases for the world, for the top three countries, and for selected states and territories of India. A total of 103 respondents answered at least one forecasting question. We elicited lower and upper confidence bounds. Comparing these results to regression and exponential smoothing, we found no evidence that any forecasting method outperformed the others. We found evidence that experts who believed it was more likely to achieve global interruption of transmission goals and disability reduction goals had higher error scores for India and Indonesia, but lower for Brazil. Even for a disease whose epidemiology changes on a slow time scale, forecasting exercises such as we conducted are simple and practical. We believe they can be used on a routine basis in public health.
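The paper's error score is not specified here; as one illustration, a Winkler-type interval score penalises forecasts issued as lower/upper confidence bounds for both width and misses. The case counts below are hypothetical.

```python
def interval_score(lower, upper, observed, alpha=0.2):
    """Winkler score for a central (1 - alpha) interval; lower is better:
    interval width plus a 2/alpha penalty per unit of miss."""
    score = upper - lower
    if observed < lower:
        score += (2.0 / alpha) * (lower - observed)
    elif observed > upper:
        score += (2.0 / alpha) * (observed - upper)
    return score

observed = 214_000                                   # hypothetical annual cases
print(interval_score(190_000, 230_000, observed))    # wide expert bounds
print(interval_score(200_000, 212_000, observed))    # narrow model bounds, missed
```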
Analytical investigation of the dynamics of tethered constellations in Earth orbit, phase 2
NASA Technical Reports Server (NTRS)
Lorenzini, Enrico C.
1987-01-01
A control law was developed to control the elevator during short-distance maneuvers along the tether of a 4-mass tethered system. This control law (called retarded exponential or RE) was analyzed parametrically in order to assess which control parameters provide a good dynamic response and a smooth time history of the acceleration on board the elevator. The short-distance maneuver under investigation consists of a slow crawling of the elevator over the distance of 10 m that represents a typical maneuver for fine tuning the acceleration level on board the elevator. The contribution of aerodynamic and thermal perturbations upon acceleration levels was also evaluated, and acceleration levels obtained when such perturbations are taken into account were compared to those obtained by neglecting the thermal and aerodynamic forces. In addition, the preparation of a tether simulation questionnaire is illustrated. Analytic solutions to be compared to numerical cases and simulator test cases are also discussed.
At a crossroads: reentry challenges and healthcare needs among homeless female ex-offenders.
Salem, Benissa E; Nyamathi, Adeline; Idemundia, Faith; Slaughter, Regina; Ames, Masha
2013-01-01
The exponential increase in the number of women parolees and probationers in the last decade has made women the most rapidly growing group of offenders in the United States. The purpose of this descriptive, qualitative study is to understand the unique gendered experiences of homeless female ex-offenders, in the context of healthcare needs, types of health services sought, and gaps in order to help them achieve a smooth transition post prison release. Focus group qualitative methodology was utilized to engage 14 female ex-offenders enrolled in a residential drug treatment program in Southern California. The findings suggested that for homeless female ex-offenders, there are a myriad of healthcare challenges, knowledge deficits, and barriers to moving forward in life, which necessitates strategies to prevent relapse. These findings support the development of gender-sensitive programs for preventing or reducing drug and alcohol use, recidivism, and sexually transmitted infections among this hard-to-reach population.
High Frequency Acoustic Reflection and Transmission in Ocean Sediments
2005-09-30
…the magnitude and phase of the reflection coefficient from a smooth water/sand interface with elastic and poroelastic models", J. Acoust. Soc. Am. … physical model of high-frequency acoustic interaction with the ocean floor, including penetration through and reflection from smooth and rough water … and additional laboratory measurements in the ARL:UT sand tank, an improved model of sediment acoustics will be developed that is consistent with
ERIC Educational Resources Information Center
Ferrando, Pere J.
2004-01-01
This study used kernel-smoothing procedures to estimate the item characteristic functions (ICFs) of a set of continuous personality items. The nonparametric ICFs were compared with the ICFs estimated (a) by the linear model and (b) by Samejima's continuous-response model. The study was based on a conditioned approach and used an error-in-variables…
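A minimal Nadaraya-Watson sketch of a kernel-smoothed item characteristic function, using simulated trait estimates and continuous item scores; the Gaussian kernel and bandwidth are assumptions.

```python
import numpy as np

def kernel_icf(theta_grid, theta_hat, scores, bandwidth=0.4):
    """Nonparametric ICF: kernel-weighted mean item score at each trait level."""
    t = np.subtract.outer(theta_grid, theta_hat) / bandwidth
    w = np.exp(-0.5 * t**2)                    # Gaussian kernel weights
    return (w @ scores) / w.sum(axis=1)

rng = np.random.default_rng(6)
theta_hat = rng.normal(size=500)               # examinee trait estimates
scores = 1.0 / (1.0 + np.exp(-(1.2 * theta_hat - 0.3)))   # smooth "true" ICF
scores += rng.normal(0.0, 0.1, 500)            # continuous item scores
print(kernel_icf(np.linspace(-2, 2, 9), theta_hat, scores))
```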
Weblog patterns and human dynamics with decreasing interest
NASA Astrophysics Data System (ADS)
Guo, J.-L.; Fan, C.; Guo, Z.-H.
2011-06-01
To describe the phenomenon that people's interest in an activity is usually high at the beginning and then gradually decreases until reaching a balance, a model describing the attenuation of interest is proposed, reflecting the fact that people's interest becomes more stable after a long time. We give a rigorous analysis of this model using non-homogeneous Poisson processes. Our analysis indicates that the interval distribution of arrival times is a mixed distribution with exponential and power-law features, that is, a power law with an exponential cutoff. We then collect blogs on ScienceNet.cn and carry out an empirical study of the interarrival time distribution. The empirical results agree well with the theoretical analysis, obeying a special power law with an exponential cutoff, that is, a special kind of Gamma distribution. These empirical results verify the model, providing evidence for a new class of phenomena in human dynamics. It can be concluded that, besides power-law distributions, there are other distributions in human dynamics. These findings demonstrate the variety of human behavior dynamics.
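A power law with an exponential cutoff of the kind described corresponds to a Gamma distribution with shape below one; the sketch samples such interarrival times and checks the small-t log-log slope. The shape and scale are illustrative, not fitted to the blog data.

```python
import numpy as np

# p(t) ~ t**(k-1) * exp(-t/theta): a Gamma distribution with shape k < 1
rng = np.random.default_rng(7)
t = rng.gamma(shape=0.4, scale=50.0, size=200_000)

dens, edges = np.histogram(t, bins=np.logspace(-2, 3, 40), density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
small = (centers > 0.05) & (centers < 5.0)     # power-law regime, t << theta
slope = np.polyfit(np.log(centers[small]), np.log(dens[small]), 1)[0]
print("small-t slope:", round(slope, 2), "(expected k - 1 = -0.6)")
```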
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.
van Elburg, Ronald A J; van Ooyen, Arjen
2009-07-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
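For concreteness, a standard peak-normalised double-exponential synaptic current, the waveform the Carnevale-Hines scheme integrates event by event; the peak-normalisation convention is an assumption.

```python
import numpy as np

def double_exp_current(t, t0=0.0, g=1.0, tau_rise=1.0, tau_decay=5.0):
    """Difference-of-exponentials synaptic current, peak scaled to g.
    Requires tau_decay > tau_rise."""
    tp = (tau_rise * tau_decay / (tau_decay - tau_rise)
          * np.log(tau_decay / tau_rise))                     # time of the peak
    norm = np.exp(-tp / tau_decay) - np.exp(-tp / tau_rise)
    t = np.asarray(t, dtype=float)
    shape = np.exp(-(t - t0) / tau_decay) - np.exp(-(t - t0) / tau_rise)
    return g * np.where(t >= t0, shape / norm, 0.0)

I = double_exp_current(np.linspace(0, 30, 301))
print(round(I.max(), 3))   # ~1.0 by construction
```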
Exponentiated power Lindley distribution.
Ashour, Samir K; Eltehiwy, Mahmoud A
2015-11-01
A new generalization of the Lindley distribution was recently proposed by Ghitany et al. [1], called the power Lindley distribution. Another generalization of the Lindley distribution was introduced by Nadarajah et al. [2], named the generalized Lindley distribution. This paper proposes a further generalization of the Lindley distribution that subsumes both. We refer to this new generalization as the exponentiated power Lindley distribution. The new distribution is important since it contains as special sub-models some widely known distributions in addition to the above two models, such as the Lindley distribution, among many others. It also provides more flexibility for analyzing complex real data sets. We study some statistical properties of the new distribution. We discuss maximum likelihood estimation of the distribution parameters. Least-squares estimation is used to evaluate the parameters. Three algorithms are proposed for generating random data from the proposed distribution. An application of the model to a real data set is analyzed using the new distribution, which shows that the exponentiated power Lindley distribution can be used quite effectively in analyzing real lifetime data.
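A sketch of sampling the exponentiated power Lindley distribution by numerically inverting its CDF; the parameterization shown is assumed from the constructions cited above.

```python
import numpy as np
from scipy.optimize import brentq

def epl_cdf(x, alpha, beta, gamma):
    """Assumed form: F(x) = [1 - (1 + b*x^a/(1+b)) * exp(-b*x^a)]^g."""
    g = 1.0 - (1.0 + beta * x**alpha / (1.0 + beta)) * np.exp(-beta * x**alpha)
    return g**gamma

def epl_sample(n, alpha, beta, gamma, seed=0):
    """Inverse-CDF sampling; no closed-form quantile function exists."""
    rng = np.random.default_rng(seed)
    return np.array([brentq(lambda x: epl_cdf(x, alpha, beta, gamma) - u,
                            1e-9, 100.0) for u in rng.uniform(size=n)])

x = epl_sample(1000, alpha=1.5, beta=0.8, gamma=2.0)
print(round(x.mean(), 3), round(x.std(), 3))
```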
Voter model with non-Poissonian interevent intervals
NASA Astrophysics Data System (ADS)
Takaguchi, Taro; Masuda, Naoki
2011-09-01
Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.
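A toy renewal-process variant of the voter model on a ring, comparing exponential and power-law interevent times on links; the update rule and tail exponent are illustrative choices rather than the paper's exact protocol.

```python
import numpy as np

def consensus_time(n=50, dist="exp", seed=0, t_max=1e6):
    """Each ring link carries i.i.d. interevent times; at an event one
    endpoint, chosen at random, copies the other's opinion."""
    rng = np.random.default_rng(seed)
    draw = (rng.exponential if dist == "exp"
            else (lambda size=None: rng.pareto(1.5, size) + 1.0))  # heavy tail
    opinion = rng.integers(0, 2, n)
    next_t = draw(size=n)              # link i joins nodes i and (i+1) mod n
    t = 0.0
    while 0 < opinion.sum() < n and t < t_max:
        i = int(np.argmin(next_t))
        t = next_t[i]
        a, b = i, (i + 1) % n
        if rng.random() < 0.5:
            opinion[a] = opinion[b]
        else:
            opinion[b] = opinion[a]
        next_t[i] = t + draw()
    return t

print("exponential:", consensus_time(dist="exp"))
print("power law:  ", consensus_time(dist="pl"))
```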
A Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation
NASA Astrophysics Data System (ADS)
Hu, J.; Lu, L.; Xu, J.; Zhang, J.
2017-09-01
For island coastline segmentation, a fast segmentation algorithm for the C-V (Chan-Vese) model based on exponential image-sequence generation is proposed in this paper. An exponential multi-scale C-V model with level-set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" in coastline extraction are solved through small-scale shrinkage, low-pass filtering, and area sorting of regions; 2) the initial values of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR reflection between land and sea, so that they closely approximate the coastline; 3) the computational complexity of the continuous transition between different scales is reduced by SDF and level-set inheritance. Experimental results show that the method accelerates the formation of the initial level set, shortens the coastline extraction time, removes non-coastline bodies, and improves the identification precision of the main coastline, automating the process of coastline segmentation.
Analysis of mixed traffic flow with human-driving and autonomous cars based on car-following model
NASA Astrophysics Data System (ADS)
Zhu, Wen-Xing; Zhang, H. M.
2018-04-01
We investigated mixed traffic flow with human-driven and autonomous cars. A new mathematical model with adjustable sensitivity and a smooth factor was proposed to describe the autonomous car's driving behavior, in which the smooth factor is used to balance the front and back headways in a flow. A lemma and a theorem were proved to support the stability criteria of the traffic flow. A series of simulations was carried out to analyze the mixed traffic flow. Fundamental diagrams were obtained from the numerical simulation results. The sensitivity and smooth factor of the autonomous cars affect the traffic flux, which exhibits opposite trends with increasing parameter values before and after the critical density. Moreover, the sensor sensitivity and smooth factor play an important role in stabilizing the mixed traffic flow and suppressing traffic jams.
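The paper's model form is not reproduced in the abstract; as an illustrative stand-in, an optimal-velocity-style rule in which the smooth factor p blends front and back headways (p = 1 recovers a standard one-sided model):

```python
import numpy as np

def autonomous_accel(v, dx_front, dx_back, sens=0.8, p=0.7, v_max=30.0, hc=25.0):
    """Acceleration of an autonomous car from a blended effective headway.
    sens is the sensor sensitivity; hc a safe-headway scale (all assumed)."""
    h_eff = p * dx_front + (1.0 - p) * dx_back
    v_opt = 0.5 * v_max * (np.tanh(h_eff - hc) + np.tanh(hc))   # OV function
    return sens * (v_opt - v)

print(autonomous_accel(v=20.0, dx_front=30.0, dx_back=20.0))
```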
A method of smoothed particle hydrodynamics using spheroidal kernels
NASA Technical Reports Server (NTRS)
Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.
1995-01-01
We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
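A sketch of an anisotropic (spheroidal) SPH kernel with independent smoothing lengths along a preferred axis and in the perpendicular plane; a Gaussian stands in for whatever spline kernel the method actually uses.

```python
import numpy as np

def spheroidal_kernel(dx, h_perp, h_axis, axis=np.array([0.0, 0.0, 1.0])):
    """Gaussian kernel with smoothing length h_axis along `axis` and
    h_perp in the perpendicular plane; dx is an (N, 3) offset array."""
    dx = np.atleast_2d(dx)
    par = dx @ axis                                   # component along the axis
    perp2 = np.einsum("ij,ij->i", dx, dx) - par**2    # squared perpendicular part
    q2 = perp2 / h_perp**2 + par**2 / h_axis**2
    norm = 1.0 / (np.pi**1.5 * h_perp**2 * h_axis)    # 3D Gaussian normalisation
    return norm * np.exp(-q2)

# finer resolution along z during an axial collapse: h_axis < h_perp
print(spheroidal_kernel(np.array([[0.1, 0.0, 0.05]]), h_perp=0.5, h_axis=0.1))
```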
Kinetic and Stochastic Models of 1D yeast "prions"
NASA Astrophysics Data System (ADS)
Kunes, Kay
2005-03-01
Mammalian prion proteins (PrP) are of public health interest because of mad cow and chronic wasting diseases. Yeasts have proteins which can undergo similar reconformation and aggregation processes to PrP; yeast "prions" are simpler to study experimentally and to model. Recent in vitro studies of the SUP35 protein (1) showed long aggregates and pure exponential growth of the misfolded form. To explain these data, we have extended a previous model of aggregation kinetics along with our own stochastic approach (2). Both models assume reconformation only upon aggregation, and include aggregate fissioning and an initial nucleation barrier. We find that for sufficiently small nucleation rates, or seeding by small dimer concentrations, we can achieve the requisite exponential growth and long aggregates.
Pendulum Mass Affects the Measurement of Articular Friction Coefficient
Akelman, Matthew R.; Teeple, Erin; Machan, Jason T.; Crisco, Joseph J.; Jay, Gregory D.; Fleming, Braden C.
2012-01-01
Friction measurements of articular cartilage are important to determine the relative tribologic contributions made by synovial fluid or cartilage, and to assess the efficacy of therapies for preventing the development of post-traumatic osteoarthritis. Stanton’s equation is the most frequently used formula for estimating the whole joint friction coefficient (μ) of an articular pendulum, and assumes pendulum energy loss through a mass-independent mechanism. This study examines if articular pendulum energy loss is indeed mass independent, and compares Stanton’s model to an alternative model, which incorporates viscous damping, for calculating μ. Ten loads (25-100% body weight) were applied in a random order to an articular pendulum using the knees of adult male Hartley guinea pigs (n = 4) as the fulcrum. Motion of the decaying pendulum was recorded and μ was estimated using two models: Stanton’s equation, and an exponential decay function incorporating a viscous damping coefficient. μ estimates decreased as mass increased for both models. Exponential decay model fit error values were 82% less than the Stanton model. These results indicate that μ decreases with increasing mass, and that an exponential decay model provides a better fit for articular pendulum data at all mass values. In conclusion, inter-study comparisons of articular pendulum μ values should not be made without recognizing the loads used, as μ values are mass dependent. PMID:23122223
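A sketch of the comparison described above: hypothetical amplitude-decay data fitted with a linear (Stanton-style) and an exponential (viscous-damping) amplitude law; the exact Stanton form is a textbook-style assumption here.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_decay(n, theta0, c):
    """Stanton-style friction: amplitude falls linearly per cycle."""
    return theta0 - c * n

def viscous_decay(n, theta0, k):
    """Viscous damping: amplitude decays exponentially with cycle number."""
    return theta0 * np.exp(-k * n)

n = np.arange(10)
amp = 0.30 * np.exp(-0.12 * n) + np.random.default_rng(2).normal(0, 0.003, 10)

p_lin, _ = curve_fit(linear_decay, n, amp, p0=[0.3, 0.03])
p_exp, _ = curve_fit(viscous_decay, n, amp, p0=[0.3, 0.1])
sse = lambda f, p: float(np.sum((amp - f(n, *p)) ** 2))
print("linear SSE:", sse(linear_decay, p_lin), " exponential SSE:", sse(viscous_decay, p_exp))
```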
Reference respiratory waveforms by minimum jerk model analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anetai, Yusuke, E-mail: anetai@radonc.med.osaka-u.ac.jp; Sumida, Iori; Takahashi, Yutaka
Purpose: The CyberKnife® robotic surgery system has the ability to deliver radiation to a tumor subject to respiratory movements using Synchrony® mode with less than 2 mm tracking accuracy. However, rapid and rough motion tracking causes mechanical tracking errors and puts mechanical stress on the robotic joint, leading to unexpected radiation delivery errors. During clinical treatment, patient respiratory motions are much more complicated, suggesting the need for patient-specific modeling of respiratory motion. The purpose of this study was to propose a novel method that provides a reference respiratory wave to enable smooth tracking for each patient. Methods: The minimum jerk model, which mathematically derives smoothness by means of jerk, the third derivative of position with respect to time (equivalently the derivative of acceleration, proportional to the time rate of change of force), was introduced to model a patient-specific respiratory motion wave to provide smooth motion tracking using CyberKnife®. To verify that patient-specific minimum jerk respiratory waves were being tracked smoothly by Synchrony® mode, a tracking laser projection from CyberKnife® was optically analyzed every 0.1 s using a webcam and a calibrated grid on a motion phantom whose motion was in accordance with three pattern waves (cosine, typical free-breathing, and minimum jerk theoretical wave models) for the clinically relevant superior–inferior directions from six volunteers assessed on the same node of the same isocentric plan. Results: Tracking discrepancy from the center of the grid to the beam projection was evaluated. The minimum jerk theoretical wave reduced the maximum-peak amplitude of radial tracking discrepancy compared with the waveforms modeled by the cosine and typical free-breathing models by 22% and 35%, respectively, and provided smooth tracking in the radial direction. Motion tracking constancy, as indicated by radial tracking discrepancy affected by respiratory phase, was improved in the minimum jerk theoretical model by 7.0% and 13% compared with the waveforms modeled by the cosine and free-breathing models, respectively. Conclusions: The minimum jerk theoretical respiratory wave can achieve smooth tracking by CyberKnife® and may provide patient-specific respiratory modeling, which may be useful for respiratory training and coaching, as well as quality assurance of the mechanical CyberKnife® robotic trajectory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, Alexandre M.; Panchenko, Alexander
2016-01-01
We present a novel formulation of the Pairwise Force Smoothed Particle Hydrodynamics Model (PF-SPH) and use it to simulate two- and three-phase flows in bounded domains. In the PF-SPH model, the Navier-Stokes equations are discretized with the Smoothed Particle Hydrodynamics (SPH) method, and the Young-Laplace boundary condition at the fluid-fluid interface and the Young boundary condition at the fluid-fluid-solid interface are replaced with pairwise forces added into the Navier-Stokes equations. We derive a relationship between the parameters in the pairwise forces and the surface tension and static contact angle. Next, we demonstrate the accuracy of the model under static and dynamic conditions. Finally, to demonstrate the capabilities and robustness of the model, we use it to simulate flow of three fluids in a porous material.
The multiple complex exponential model and its application to EEG analysis
NASA Astrophysics Data System (ADS)
Chen, Dao-Mu; Petzold, J.
The paper presents a novel approach to the analysis of the EEG signal, which is based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and the results, estimated on the basis of simulated data, are presented and compared with those obtained by the conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are closer to the recent supposition of the nonlinear character of the brain's dynamic behavior.
ERIC Educational Resources Information Center
Moses, Tim; Holland, Paul
2009-01-01
This simulation study evaluated the potential of alternative loglinear smoothing strategies for improving equipercentile equating function accuracy. These alternative strategies use cues from the sample data to make automatable and efficient improvements to model fit, either through the use of indicator functions for fitting large residuals or by…
Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.
ERIC Educational Resources Information Center
Butler, Ronald W.
The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
Ertas, Gokhan; Onaygil, Can; Akin, Yasin; Kaya, Handan; Aribal, Erkin
2016-12-01
To investigate the accuracy of diffusion coefficients and diffusion coefficient ratios of breast lesions and of glandular breast tissue from mono- and stretched-exponential models for quantitative diagnosis in diffusion-weighted magnetic resonance imaging (MRI). We analyzed 170 pathologically confirmed lesions (85 benign and 85 malignant) imaged using a 3.0T MR scanner. Small regions of interest (ROIs) focusing on the highest signal intensity for lesions and also for glandular tissue of the contralateral breast were obtained. The apparent diffusion coefficient (ADC) and distributed diffusion coefficient (DDC) were estimated by performing nonlinear fittings using mono- and stretched-exponential models, respectively. Coefficient ratios were calculated by dividing the lesion coefficient by the glandular tissue coefficient. The stretched exponential model provides significantly better fits than the monoexponential model (P < 0.001): 65% of the better fits for glandular tissue and 71% for lesions. High correlation was found in diffusion coefficients (0.99-0.81) and coefficient ratios (0.94) between the models. The highest diagnostic accuracy was found for the DDC ratio (area under the curve [AUC] = 0.93) when compared with lesion DDC, ADC ratio, and lesion ADC (AUC = 0.91, 0.90, 0.90), but with no statistically significant difference (P > 0.05). At optimal thresholds, the DDC ratio achieves 93% sensitivity, 80% specificity, and 87% overall diagnostic accuracy, while the ADC ratio leads to 89% sensitivity, 78% specificity, and 83% overall diagnostic accuracy. The stretched exponential model fits better with signal intensity measurements from both lesion and glandular tissue ROIs. Although the DDC ratio estimated using this model shows a higher diagnostic accuracy than the ADC ratio, lesion DDC, and ADC, the difference is not statistically significant. J. Magn. Reson. Imaging 2016;44:1633-1641. © 2016 International Society for Magnetic Resonance in Medicine.
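A sketch of the two fits on hypothetical ROI signal decays; the b-values, signal values, and the glandular-tissue DDC used for the ratio are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(b, S0, adc):
    """Monoexponential DWI model: S(b) = S0 * exp(-b * ADC)."""
    return S0 * np.exp(-b * adc)

def stretched_exp(b, S0, ddc, alpha):
    """Stretched-exponential model: S(b) = S0 * exp(-(b * DDC)**alpha)."""
    return S0 * np.exp(-(b * ddc) ** alpha)

b = np.array([0, 50, 200, 400, 600, 800, 1000.0])        # s/mm^2
S = np.array([1.00, 0.93, 0.78, 0.63, 0.52, 0.44, 0.38]) # hypothetical ROI means

p_m, _ = curve_fit(mono_exp, b, S, p0=[1.0, 1e-3])
p_s, _ = curve_fit(stretched_exp, b, S, p0=[1.0, 1e-3, 0.9],
                   bounds=([0, 0, 0.1], [2, 1e-2, 1.0]))
ddc_ratio = p_s[1] / 2.1e-3        # divide by an assumed glandular DDC
print("ADC:", p_m[1], "DDC:", p_s[1], "alpha:", p_s[2], "DDC ratio:", ddc_ratio)
```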
The Koslowski-Sahlmann representation: quantum configuration space
NASA Astrophysics Data System (ADS)
Campiglia, Miguel; Varadarajan, Madhavan
2014-09-01
The Koslowski-Sahlmann (KS) representation is a generalization of the representation underlying the discrete spatial geometry of loop quantum gravity (LQG), to accommodate states labelled by smooth spatial geometries. As shown recently, the KS representation supports, in addition to the action of the holonomy and flux operators, the action of operators which are the quantum counterparts of certain connection dependent functions known as ‘background exponentials’. Here we show that the KS representation displays the following properties which are the exact counterparts of LQG ones: (i) the abelian * algebra of SU(2) holonomies and ‘U(1)’ background exponentials can be completed to a C* algebra, (ii) the space of semianalytic SU(2) connections is topologically dense in the spectrum of this algebra, (iii) there exists a measure on this spectrum for which the KS Hilbert space is realized as the space of square integrable functions on the spectrum, (iv) the spectrum admits a characterization as a projective limit of finite numbers of copies of SU(2) and U(1), (v) the algebra underlying the KS representation is constructed from cylindrical functions and their derivations in exactly the same way as the LQG (holonomy-flux) algebra except that the KS cylindrical functions depend on the holonomies and the background exponentials, this extra dependence being responsible for the differences between the KS and LQG algebras. While these results are obtained for compact spaces, they are expected to be of use for the construction of the KS representation in the asymptotically flat case.
NASA Astrophysics Data System (ADS)
Farcas, A.; Resmerita, A.-M.; Farcas, F.
2016-12-01
Optical, electrochemical, and surface-morphological properties of three terpolymer polyrotaxanes (1a, 1b, and 1c) composed of 2,7-dibromo-9,9-dicyanomethylenefluorene encapsulated into γ-cyclodextrin (γCD), β- or γ-persilylated cyclodextrin (PS-βCD, PS-γCD) cavities (acceptor) and 4,4'-dibromo-4''-methyltriphenylamine (donor) randomly distributed along 9,9-dioctylfluorene conjugated chains have been evaluated and compared with those of the reference 1. The role of the encapsulation in thermal stability, solubility, film-forming ability, and transparency was also investigated. High fluorescence efficiency and almost identical normalized absorbance maxima in solution and the solid state for 1a, 1b, and 1c indicate a lower aggregation tendency. The fluorescence lifetimes (τ) of 1a, 1b, and 1c follow a mono-exponential decay with values of τ = 1.11, 1.03, and 1.14 ns, respectively, compared with the neat 1, for which a bi-exponential decay was identified. AFM studies reveal a smoother and more homogeneous surface morphology for the polyrotaxanes than for the reference. The electrochemical data showed that the investigated compounds exhibited n- and p-doping processes. The HOMO/LUMO energy levels of 1a, 1b, 1c, and 1, in combination with the work function of anodic ITO glass substrates coated with poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) (-5.2 eV) and cathodic Ca (-2.8 eV) or Al (-2.2 eV), indicate that the compounds are electrochemically accessible as electron-transporting materials.
Accounting for inherent variability of growth in microbial risk assessment.
Marks, H M; Coleman, M E
2005-04-15
Risk assessments of pathogens need to account for the growth of small numbers of cells under varying conditions. In order to determine the possible risks that occur when there are small numbers of cells, stochastic models of growth are needed that capture the distribution of the number of cells over replicate trials of the same scenario or environmental conditions. This paper provides a simple stochastic growth model, accounting only for inherent cell-growth variability and assuming constant growth kinetic parameters, for an initial small number of cells assumed to be transitioning from a stationary to an exponential phase. Two basic sets of microbial assumptions are considered: serial, where it is assumed that cells pass through a lag phase before entering the exponential phase of growth; and parallel, where it is assumed that lag and exponential phases develop in parallel. The model is based on first determining the distribution of the time when growth commences, and then modelling the conditional distribution of the number of cells. For the latter distribution, it is found that a Weibull distribution provides a simple approximation to the conditional distribution of the relative growth, so that the model developed in this paper can be easily implemented in risk assessments using commercial software packages.
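A toy simulation of the serial scheme: each cell completes an exponentially distributed lag, then its lineage grows as a Yule (pure-birth) process, whose size at a given time is geometrically distributed; rates and inoculum size are illustrative.

```python
import numpy as np

def relative_growth(n0=5, lag_rate=0.5, mu=1.0, t=4.0, trials=10_000, seed=5):
    """Distribution of N(t)/n0 over replicate trials of a small inoculum."""
    rng = np.random.default_rng(seed)
    lags = rng.exponential(1.0 / lag_rate, size=(trials, n0))
    grow_time = np.clip(t - lags, 0.0, None)
    # a Yule lineage grown for time s has Geometric(exp(-mu*s)) size, mean exp(mu*s)
    sizes = rng.geometric(np.exp(-mu * grow_time))
    return sizes.sum(axis=1) / n0

rel = relative_growth()
print("mean relative growth:", round(rel.mean(), 2))
# the empirical distribution of `rel` can then be compared with a fitted
# Weibull, e.g. scipy.stats.weibull_min.fit(rel)
```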
Infinite-disorder critical points of models with stretched exponential interactions
NASA Astrophysics Data System (ADS)
Juhász, Róbert
2014-09-01
We show that an interaction decaying as a stretched exponential function of distance, J(l) ~ exp(-c l^a), is able to alter the universality class of short-range systems having an infinite-disorder critical point. To do so, we study the low-energy properties of the random transverse-field Ising chain with the above form of interaction by a strong-disorder renormalization group (SDRG) approach. We find that the critical behavior of the model is controlled by infinite-disorder fixed points different from those of the short-range model if 0 < a < 1/2. In this range, the critical exponents calculated analytically by a simplified SDRG scheme are found to vary with a, while, for a > 1/2, the model belongs to the same universality class as its short-range variant. The entanglement entropy of a block of size L increases logarithmically with L at the critical point but, unlike the short-range model, the prefactor is dependent on disorder in the range 0 < a < 1/2. Numerical results obtained by an improved SDRG scheme are found to be in agreement with the analytical predictions. The same fixed points are expected to describe the critical behavior of, among others, the random contact process with stretched exponentially decaying activation rates.
Global exponential stability for switched memristive neural networks with time-varying delays.
Xin, Youming; Li, Yuxia; Cheng, Zunshui; Huang, Xia
2016-08-01
This paper considers the problem of exponential stability for switched memristive neural networks (MNNs) with time-varying delays. Different from most of the existing papers, we model a memristor as a continuous system, and view switched MNNs as switched neural networks with uncertain time-varying parameters. Based on average dwell time technique, mode-dependent average dwell time technique and multiple Lyapunov-Krasovskii functional approach, two conditions are derived to design the switching signal and guarantee the exponential stability of the considered neural networks, which are delay-dependent and formulated by linear matrix inequalities (LMIs). Finally, the effectiveness of the theoretical results is demonstrated by two numerical examples. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Chamberlain, D. M.; Elliot, J. L.
1997-01-01
We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^-4 of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10^-4, with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.
Inflationary universe in deformed phase space scenario
NASA Astrophysics Data System (ADS)
Rasouli, S. M. M.; Saba, Nasim; Farhoudi, Mehrdad; Marto, João; Moniz, P. V.
2018-06-01
We consider a noncommutative (NC) inflationary model with a homogeneous scalar field minimally coupled to gravity. The particular NC inflationary setting herein proposed produces entirely new consequences, as summarized in what follows. We first analyze the free field case and subsequently examine the situation where the scalar field is subjected to polynomial and exponential potentials. We propose to use a canonical deformation between momenta, in a spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) universe, and while the Friedmann equation (Hamiltonian constraint) remains unaffected, the Friedmann acceleration equation (and thus the Klein-Gordon equation) is modified by an extra term linear in the NC parameter. This concrete noncommutativity on the momenta allows interesting dynamics that other NC models seem not to allow. Let us be more precise. This extra term behaves as the sole explicit pressure that, under the right circumstances, implies a period of accelerated expansion of the universe. We find that in the absence of the scalar field potential, and in contrast with the commutative case, in which the scale factor always decelerates, we obtain an inflationary phase for small negative values of the NC parameter. Subsequently, the period of accelerated expansion is smoothly replaced by an appropriate deceleration phase, providing an interesting model regarding the graceful exit problem in inflationary models. This last property is present either in the free field case or under the influence of the scalar field potentials considered here. Moreover, in the case of the free scalar field, we show that not only is the horizon problem solved but also that there is some resemblance between the evolution equation of the scale factor in our model and that of the R2 (Starobinsky) inflationary model. Therefore, our NC model not only can be taken as an appropriate scenario for successful kinetic inflation, but is also a convenient setting to obtain an inflationary universe possessing a graceful exit when scalar field potentials are present.
Learning Spatially-Smooth Mappings in Non-Rigid Structure from Motion
Hamsici, Onur C.; Gotardo, Paulo F.U.; Martinez, Aleix M.
2013-01-01
Non-rigid structure from motion (NRSFM) is a classical underconstrained problem in computer vision. A common approach to make NRSFM more tractable is to constrain 3D shape deformation to be smooth over time. This constraint has been used to compress the deformation model and reduce the number of unknowns that are estimated. However, temporal smoothness cannot be enforced when the data lacks temporal ordering and its benefits are less evident when objects undergo abrupt deformations. This paper proposes a new NRSFM method that addresses these problems by considering deformations as spatial variations in shape space and then enforcing spatial, rather than temporal, smoothness. This is done by modeling each 3D shape coefficient as a function of its input 2D shape. This mapping is learned in the feature space of a rotation invariant kernel, where spatial smoothness is intrinsically defined by the mapping function. As a result, our model represents shape variations compactly using custom-built coefficient bases learned from the input data, rather than a pre-specified set such as the Discrete Cosine Transform. The resulting kernel-based mapping is a by-product of the NRSFM solution and leads to another fundamental advantage of our approach: for a newly observed 2D shape, its 3D shape is recovered by simply evaluating the learned function. PMID:23946937
Exponentially growing tearing modes in Rijnhuizen Tokamak Project plasmas.
Salzedas, F; Schüller, F C; Oomens, A A M
2002-02-18
The local measurement of the island width w, around the resonant surface, allowed a direct test of the extended Rutherford model [P. H. Rutherford, PPPL Report-2277 (1985)], describing the evolution of radiation-induced tearing modes prior to disruptions of tokamak plasmas. It is found that this model accounts very well for the observed exponential growth and supports radiation losses as being the main driving mechanism. The model implies that the effective perpendicular electron heat conductivity in the island is smaller than the global one. Comparison of the local measurements of w with the perturbed magnetic field B showed that w ∝ B^(1/2) was valid for widths up to 18% of the minor radius.
NASA Astrophysics Data System (ADS)
Adame, J.; Warzel, S.
2015-11-01
In this note, we use ideas of Farhi et al. [Int. J. Quantum. Inf. 6, 503 (2008) and Quantum Inf. Comput. 11, 840 (2011)] who link a lower bound on the run time of their quantum adiabatic search algorithm to an upper bound on the energy gap above the ground-state of the generators of this algorithm. We apply these ideas to the quantum random energy model (QREM). Our main result is a simple proof of the conjectured exponential vanishing of the energy gap of the QREM.
Disentangling the f(R)-duality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broy, Benedict J.; Pedro, Francisco G.; Westphal, Alexander
2015-03-16
Motivated by UV realisations of Starobinsky-like inflation models, we study generic exponential plateau-like potentials to understand whether an exact f(R)-formulation may still be obtained when the asymptotic shift-symmetry of the potential is broken for larger field values. Potentials which break the shift symmetry with rising exponentials at large field values only allow for corresponding f(R)-descriptions with a leading order term R^n with 1
Exponentially Stabilizing Robot Control Laws
NASA Technical Reports Server (NTRS)
Wen, John T.; Bayard, David S.
1990-01-01
New class of exponentially stabilizing control laws for joint-level control of robotic manipulators introduced. In case of set-point control, approach offers simplicity of proportional/derivative control architecture. In case of tracking control, approach provides several important alternatives to computed-torque method with respect to computational requirements and convergence. New control laws modified in simple fashion to obtain asymptotically stable adaptive control when robot model and/or payload mass properties unknown.
Testing predictions of the quantum landscape multiverse 2: the exponential inflationary potential
NASA Astrophysics Data System (ADS)
Di Valentino, Eleonora; Mersini-Houghton, Laura
2017-03-01
The 2015 Planck data release tightened the region of the allowed inflationary models. Inflationary models with convex potentials have now been ruled out since they produce a large tensor to scalar ratio. Meanwhile the same data offers interesting hints on possible deviations from the standard picture of CMB perturbations. Here we revisit the predictions of the theory of the origin of the universe from the landscape multiverse for the case of exponential inflation, for two reasons: firstly to check the status of the anomalies associated with this theory, in the light of the recent Planck data; secondly, to search for a counterexample whereby new physics modifications may bring convex inflationary potentials, thought to have been ruled out, back into the region of potentials allowed by data. Using the exponential inflation as an example of convex potentials, we find that the answer to both tests is positive: modifications to the perturbation spectrum and to the Newtonian potential of the universe originating from the quantum entanglement, bring the exponential potential, back within the allowed region of current data; and, the series of anomalies previously predicted in this theory, is still in good agreement with current data. Hence our finding for this convex potential comes at the price of allowing for additional thermal relic particles, equivalently dark radiation, in the early universe.
NASA Astrophysics Data System (ADS)
Straub, K. M.; Ganti, V. K.; Paola, C.; Foufoula-Georgiou, E.
2010-12-01
Stratigraphy preserved in alluvial basins houses the most complete record of information necessary to reconstruct past environmental conditions. Indeed, the character of the sedimentary record is inextricably related to the surface processes that formed it. In this presentation we explore how the signals of surface processes are recorded in stratigraphy through the use of physical and numerical experiments. We focus on linking surface processes to stratigraphy in 1D by quantifying the probability distributions of processes that govern the evolution of depositional systems to the probability distribution of preserved bed thicknesses. In this study we define a bed as a package of sediment bounded above and below by erosional surfaces. In a companion presentation we document heavy-tailed statistics of erosion and deposition from high-resolution temporal elevation data recorded during a controlled physical experiment. However, the heavy tails in the magnitudes of erosional and depositional events are not preserved in the experimental stratigraphy. Similar to many bed thickness distributions reported in field studies we find that an exponential distribution adequately describes the thicknesses of beds preserved in our experiment. We explore the generation of exponential bed thickness distributions from heavy-tailed surface statistics using 1D numerical models. These models indicate that when the full distribution of elevation fluctuations (both erosional and depositional events) is symmetrical, the resulting distribution of bed thicknesses is exponential in form. Finally, we illustrate that a predictable relationship exists between the coefficient of variation of surface elevation fluctuations and the scale-parameter of the resulting exponential distribution of bed thicknesses.
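A minimal 1D version of the surface-to-stratigraphy filter described here: symmetric heavy-tailed elevation increments with slight net aggradation, keeping only horizons that are never later eroded; the increment distribution is an assumption.

```python
import numpy as np

def preserved_bed_thicknesses(n_steps=100_000, seed=3):
    """Bed thicknesses surviving the stratigraphic filter."""
    rng = np.random.default_rng(seed)
    steps = rng.standard_t(df=1.5, size=n_steps) + 0.02  # heavy tails, net deposition
    elev = np.cumsum(steps)
    suffix_min = np.minimum.accumulate(elev[::-1])[::-1]
    preserved = elev[elev <= suffix_min]   # horizons never re-eroded later
    return np.diff(preserved)

beds = preserved_bed_thicknesses()
beds = beds[beds > 0]
# an exponential distribution has mean equal to standard deviation
print("mean:", round(beds.mean(), 3), " std:", round(beds.std(), 3))
```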
Probing Gamma-ray Emission of Geminga and Vela with Non-stationary Models
NASA Astrophysics Data System (ADS)
Chai, Yating; Cheng, Kwong-Sang; Takata, Jumpei
2016-06-01
It is generally believed that the high energy emissions from isolated pulsars are emitted by relativistic electrons/positrons accelerated in outer magnetospheric accelerators (outergaps) via a curvature radiation mechanism, which produces a simple exponential cut-off spectrum. However, many gamma-ray pulsars detected by the Fermi LAT (Large Area Telescope) cannot be fitted by a simple exponential cut-off spectrum; instead, a sub-exponential cut-off is more appropriate. It has been proposed that realistic outergaps are non-stationary, and that the observed spectrum is a superposition of different stationary states that are controlled by the currents injected from the inner and outer boundaries. The Vela and Geminga pulsars have the largest fluxes among all observed targets, which allows us to carry out very detailed phase-resolved spectral analysis. We divided the Vela and Geminga pulsars into 19 (the off pulse of Vela was not included) and 33 phase bins, respectively. We find that most phase-resolved spectra still cannot be fitted by a simple exponential spectrum: in fact, a sub-exponential spectrum is necessary. We conclude that non-stationary states exist even down to very fine phase bins.
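The sub-exponential shape referred to above is commonly written as a power law with a generalized cutoff; a sketch of fitting that form to hypothetical flux points follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def cutoff_spectrum(E, K, Gamma, Ec, b):
    """dN/dE = K * E**(-Gamma) * exp(-(E/Ec)**b); b = 1 is a simple
    exponential cutoff, b < 1 the sub-exponential shape."""
    return K * E**(-Gamma) * np.exp(-(E / Ec) ** b)

E = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # GeV (hypothetical)
rng = np.random.default_rng(4)
F = cutoff_spectrum(E, 1.0, 1.4, 3.0, 0.6) * (1.0 + 0.03 * rng.normal(size=E.size))

popt, _ = curve_fit(cutoff_spectrum, E, F, p0=[1.0, 1.5, 2.0, 1.0],
                    bounds=([0, 0, 0.1, 0.2], [10, 3, 20, 2]))
print("Gamma = %.2f, Ec = %.2f GeV, b = %.2f" % (popt[1], popt[2], popt[3]))
```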
Anomalous T2 relaxation in normal and degraded cartilage.
Reiter, David A; Magin, Richard L; Li, Weiguo; Trujillo, Juan J; Pilar Velasco, M; Spencer, Richard G
2016-09-01
To compare the ordinary monoexponential model with three anomalous relaxation models-the stretched Mittag-Leffler, stretched exponential, and biexponential functions-using both simulated and experimental cartilage relaxation data. Monte Carlo simulations were used to examine both the ability to identify a given model under high signal-to-noise ratio (SNR) conditions and the accuracy and precision of parameter estimates under the more modest SNR that would be encountered clinically. Experimental transverse relaxation data were analyzed from normal and enzymatically degraded cartilage samples under high SNR and rapid echo sampling to compare each model. Both simulation and experimental results showed improvement in signal representation with the anomalous relaxation models. The stretched exponential model consistently showed the lowest mean squared error in experimental data and closely represents the signal decay over multiple decades of the decay time (e.g., 1-10 ms, 10-100 ms, and >100 ms). The stretched exponential parameter αse showed an inverse correlation with biochemically derived cartilage proteoglycan content. Experimental results obtained at high field suggest potential application of αse as a measure of matrix integrity. Simulations reflecting more clinical imaging conditions indicate the ability to robustly estimate αse and distinguish between normal and degraded tissue, highlighting its potential as a biomarker for human studies. Magn Reson Med 76:953-962, 2016. © 2015 Wiley Periodicals, Inc.
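For illustration, the stretched exponential model S(t) = S0 exp[-(t/T2)^αse] can be compared against the monoexponential fit as follows; the echo times, noise level, and parameter values are hypothetical, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono(t, s0, t2):
    return s0 * np.exp(-t / t2)

def stretched(t, s0, t2, alpha):
    # alpha < 1 captures decay spanning multiple time decades
    return s0 * np.exp(-(t / t2) ** alpha)

# Synthetic echo-train decay (times in ms; hypothetical SNR and parameters)
t = np.linspace(0.5, 250, 120)
rng = np.random.default_rng(3)
sig = stretched(t, 1.0, 25.0, 0.8) + 0.01 * rng.standard_normal(t.size)

p_m, _ = curve_fit(mono, t, sig, p0=(1.0, 30.0))
p_s, _ = curve_fit(stretched, t, sig, p0=(1.0, 30.0, 0.9))
mse = lambda f, p: np.mean((sig - f(t, *p)) ** 2)
print(f"mono MSE={mse(mono, p_m):.2e}, stretched MSE={mse(stretched, p_s):.2e}, "
      f"alpha_se={p_s[2]:.3f}")
```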
Modeling the degradation kinetics of ascorbic acid.
Peleg, Micha; Normand, Mark D; Dixon, William R; Goulette, Timothy R
2018-06-13
Most published reports on ascorbic acid (AA) degradation during food storage and heat preservation suggest that it follows first-order kinetics. Deviations from this pattern include Weibullian decay and an exponential drop approaching a finite nonzero retention. Almost invariably, the temperature dependence of the degradation rate constant followed the Arrhenius equation, and hence the simpler exponential model too. A formula and a freely downloadable interactive Wolfram Demonstration to convert the Arrhenius model's energy of activation, Ea, to the exponential model's c parameter, or vice versa, are provided. The isothermal and non-isothermal degradation of AA can be simulated with freely downloadable interactive Wolfram Demonstrations in which the model's parameters can be entered and modified by moving sliders on the screen. Where the degradation is known a priori to follow first- or other fixed-order kinetics, one can use the endpoints method, and in principle the successive-points method too, to estimate the reaction's kinetic parameters from considerably fewer AA concentration determinations than in the traditional manner. Freeware to do the calculations by either method has recently been made available on the Internet. Once obtained in this way, the kinetic parameters can be used to reconstruct the entire degradation curves and to predict those at different temperature profiles, isothermal or dynamic. Comparison of the predicted concentration ratios with experimental ones offers a way to validate or refute the kinetic model and the assumptions on which it is based.
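A minimal sketch of the Ea-to-c conversion, assuming the two models are matched locally at a reference temperature Tref (giving c = Ea/(R·Tref²)); the published Demonstration may adopt a different convention, and the numbers are illustrative.

```python
# Converting between the Arrhenius energy of activation Ea and the
# exponential model's c parameter, assuming the exponential model
# k(T) = k_ref * exp(c * (T - Tref)) matches the Arrhenius slope at Tref.
R = 8.314  # J mol^-1 K^-1

def c_from_Ea(Ea_kJ_per_mol: float, Tref_K: float) -> float:
    """c = Ea / (R * Tref^2), in 1/K."""
    return Ea_kJ_per_mol * 1000.0 / (R * Tref_K ** 2)

def Ea_from_c(c_per_K: float, Tref_K: float) -> float:
    """Inverse conversion, returning Ea in kJ/mol."""
    return c_per_K * R * Tref_K ** 2 / 1000.0

# Example: Ea = 80 kJ/mol around Tref = 373 K (hypothetical values)
c = c_from_Ea(80.0, 373.0)
print(f"c = {c:.4f} 1/K; back-converted Ea = {Ea_from_c(c, 373.0):.1f} kJ/mol")
```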
ERIC Educational Resources Information Center
Huang, Long-Sheng; Huang, Chung-Fah
2017-01-01
Using the technology acceptance model (TAM) as its theoretical foundation, this study explores the use of Travelling Beam devices in road engineering in Taiwan and offers suggestions based on its findings to encourage industry willingness to deploy the devices, thereby improving road pavement smoothness in Taiwan. The study subjects…
Comparative Analyses of Creep Models of a Solid Propellant
NASA Astrophysics Data System (ADS)
Zhang, J. B.; Lu, B. J.; Gong, S. F.; Zhao, S. P.
2018-05-01
Creep experiments on samples of a solid propellant under five different stresses were carried out at 293.15 K and 323.15 K. In order to express the creep properties of this solid propellant, five viscoelastic models, i.e., the three-parameter solid, three-parameter fluid, four-parameter solid, four-parameter fluid, and exponential models, are considered. On the basis of least-squares fitting of all the model parameters at each stress level, a nonlinear fitting procedure is used to analyze the creep properties. The study shows that the four-parameter solid model best expresses the creep behavior of the propellant samples. However, the three-parameter solid and exponential models cannot reflect the initial value of the creep process very well, while the modified four-parameter models are found to agree well with the acceleration characteristics of the creep process.
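As an illustration of the fitting procedure, the sketch below fits a four-parameter creep law of the Burgers (four-parameter fluid) type by nonlinear least squares; the functional form, parameter names, and values are illustrative stand-ins and not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def burgers_creep(t, e0, r, a, tau):
    """Four-parameter (Burgers-type) creep strain under constant stress:
    instantaneous strain e0, steady flow rate r, and a delayed-elastic
    term a * (1 - exp(-t/tau)). Parameter names are illustrative."""
    return e0 + r * t + a * (1.0 - np.exp(-t / tau))

# Synthetic creep curve (hypothetical propellant-like values, t in s)
t = np.linspace(0, 3600, 200)
rng = np.random.default_rng(5)
eps = burgers_creep(t, 0.004, 2e-7, 0.002, 600.0) \
      + 5e-5 * rng.standard_normal(t.size)

popt, _ = curve_fit(burgers_creep, t, eps, p0=(0.003, 1e-7, 0.001, 500.0))
print("e0, rate, a, tau =", popt)
```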
Zhou, Jingwen; Xu, Zhenghong; Chen, Shouwen
2013-04-01
The thuringiensin abiotic degradation processes in aqueous solution under different conditions, with a pH range of 5.0-9.0 and a temperature range of 10-40°C, were systematically investigated by an exponential decay model and a radial basis function (RBF) neural network model, respectively. The half-lives of thuringiensin calculated by the exponential decay model ranged from 2.72 d to 16.19 d under the different conditions mentioned above. Furthermore, an RBF model with an accuracy of 0.1 and a SPREAD value of 5 was employed to model the degradation processes. The results showed that the model could simulate and predict the degradation processes well. Both the half-lives and the prediction data showed that thuringiensin is an easily degradable antibiotic, which can be an important factor in the evaluation of its safety. Copyright © 2012 Elsevier Ltd. All rights reserved.
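An exponential decay fit and the resulting half-life t1/2 = ln 2/k can be obtained with a simple log-linear regression; the concentration series below is hypothetical.

```python
import numpy as np
from scipy.stats import linregress

# Exponential decay fit via log-linear regression: C(t) = C0 * exp(-k t),
# half-life t_1/2 = ln(2) / k. The concentrations below are hypothetical.
t = np.array([0, 1, 2, 4, 7, 10, 14])           # days
C = np.array([100, 84, 71, 50, 30, 18, 9.0])    # residual thuringiensin (%)

res = linregress(t, np.log(C))
k = -res.slope
print(f"k = {k:.3f} 1/d, half-life = {np.log(2)/k:.2f} d, R^2 = {res.rvalue**2:.3f}")
```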
Zuthi, Mst Fazana Rahman; Guo, Wenshan; Ngo, Huu Hao; Nghiem, Duc Long; Hai, Faisal I; Xia, Siqing; Li, Jianxin; Li, Jixiang; Liu, Yi
2017-08-01
This study aimed to develop a practical semi-empirical mathematical model of membrane fouling that accounts for cake formation on the membrane and its pore blocking as the major processes of membrane fouling. In the developed model, the concentration of mixed liquor suspended solids is used as a lumped parameter to describe the formation of the cake layer, including the biofilm. The new model considers the combined effect of aeration and backwash on the detachment of foulants from the membrane. New exponential coefficients are also included in the model to describe the exponential increase of transmembrane pressure that typically occurs after the initial stage of membrane bioreactor (MBR) operation. The model was validated using experimental data obtained from a lab-scale aerobic sponge-submerged MBR, and the simulation of the model agreed well with the experimental findings. Copyright © 2017 Elsevier Ltd. All rights reserved.
Mapping snow depth return levels: smooth spatial modeling versus station interpolation
NASA Astrophysics Data System (ADS)
Blanchet, J.; Lehning, M.
2010-12-01
For adequate risk management in mountainous countries, hazard maps for extreme snow events are needed. This requires the computation of spatial estimates of return levels. In this article we use recent developments in extreme value theory and compare two main approaches for mapping snow depth return levels from in situ measurements. The first is based on the spatial interpolation of pointwise extremal distributions (the so-called generalized extreme value distribution, GEV henceforth) computed at station locations. The second is new and based on the direct estimation of a spatially smooth GEV distribution with the joint use of all stations. We compare and validate the different approaches for modeling annual maximum snow depth measured at 100 sites in Switzerland during winters 1965-1966 to 2007-2008. The results show a better performance of the smooth GEV distribution fitting, in particular where the station network is sparser. Smooth return level maps can be computed from the fitted model without any further interpolation, and their regional variability can be revealed by removing the altitude-dependent covariates in the model. We show how return levels and their regional variability are linked to the main climatological patterns of Switzerland.
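A pointwise GEV fit of the first kind described, and a return level derived from it, might look as follows; the annual maxima are synthetic stand-ins for station data.

```python
import numpy as np
from scipy.stats import genextreme

# Pointwise GEV fit at one station, then a T-year return level.
# The annual maxima below are synthetic stand-ins for snow depth (cm).
rng = np.random.default_rng(11)
annual_max = genextreme.rvs(c=-0.1, loc=120, scale=35, size=43, random_state=rng)

c, loc, scale = genextreme.fit(annual_max)   # scipy's shape convention: c = -xi
T = 50                                       # return period in years
z_T = genextreme.ppf(1 - 1 / T, c, loc, scale)
print(f"shape={c:.3f}, loc={loc:.1f}, scale={scale:.1f}, "
      f"{T}-yr return level = {z_T:.0f} cm")
```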
Time prediction of failure of a type of lamp by using a general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to obtain the predicted average lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The random failure-time model used as the basis is the exponential distribution, which has a constant hazard function. In this case, we discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by fitting its parameters through construction of the survival function and the empirical cumulative distribution function. The model obtained is then used to predict the average failure time for the type of lamp. The data are grouped into several intervals with the average number of failures in each interval, and the average failure time of the model is calculated for each interval; the p-value obtained from the test is 0.3296.
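A minimal sketch of the exponential (constant-hazard) basis model: the maximum-likelihood hazard for complete data is λ = n/Σt, and the mean failure time is 1/λ. The failure times below are hypothetical.

```python
import numpy as np

# Exponential survival model: S(t) = exp(-lam * t), constant hazard lam.
# MLE for complete (uncensored) data: lam = n / sum(t).
t = np.array([120, 340, 560, 610, 770, 900, 1100, 1400, 1800, 2300.0])  # hours

lam = t.size / t.sum()
print(f"hazard = {lam:.5f} /h, mean failure time = {1/lam:.0f} h")

# Empirical vs fitted survival at the observed failure times
ts = np.sort(t)
S_emp = 1 - np.arange(1, ts.size + 1) / ts.size
S_fit = np.exp(-lam * ts)
print(np.round(np.c_[ts, S_emp, S_fit], 3))
```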
Point Set Denoising Using Bootstrap-Based Radial Basis Function.
Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad
2016-01-01
This paper examines the application of bootstrap test error estimation for radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue with point set models generated by 3D scanning devices, and hence point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest-neighbour search and then projects the point set onto the approximated thin-plate spline surface. The denoising process is thereby achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
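Bootstrap selection of a thin-plate spline smoothing parameter can be sketched as follows, using scipy's RBFInterpolator as a stand-in for the paper's implementation; the smoothing grid, resampling count, and test surface are illustrative choices.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
# Noisy samples of a synthetic surface (stand-in for scanned points)
xy = rng.uniform(-1, 1, size=(300, 2))
z = np.sin(3 * xy[:, 0]) * np.cos(3 * xy[:, 1]) + 0.1 * rng.standard_normal(300)

def boot_error(smoothing, B=20):
    """Bootstrap test error: fit on a resample, score on out-of-bag points."""
    errs = []
    for _ in range(B):
        idx = np.unique(rng.integers(0, len(xy), len(xy)))  # unique centers
        oob = np.setdiff1d(np.arange(len(xy)), idx)
        f = RBFInterpolator(xy[idx], z[idx],
                            kernel='thin_plate_spline', smoothing=smoothing)
        errs.append(np.mean((f(xy[oob]) - z[oob]) ** 2))
    return np.mean(errs)

grid = [1e-3, 1e-2, 1e-1, 1.0, 10.0]
best = min(grid, key=boot_error)
print("selected smoothing parameter:", best)
```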
Particle systems for adaptive, isotropic meshing of CAD models
Levine, Joshua A.; Whitaker, Ross T.
2012-01-01
We present a particle-based approach for generating adaptive triangular surface and tetrahedral volume meshes from computer-aided design models. Input shapes are treated as a collection of smooth, parametric surface patches that can meet non-smoothly on boundaries. Our approach uses a hierarchical sampling scheme that places particles on features in order of increasing dimensionality. These particles reach a good distribution by minimizing an energy computed in 3D world space, with movements occurring in the parametric space of each surface patch. Rather than using a pre-computed measure of feature size, our system automatically adapts to both curvature as well as a notion of topological separation. It also enforces a measure of smoothness on these constraints to construct a sizing field that acts as a proxy to piecewise-smooth feature size. We evaluate our technique with comparisons against other popular triangular meshing techniques for this domain. PMID:23162181
Modelling airway smooth muscle passive length adaptation via thick filament length distributions
Donovan, Graham M.
2013-01-01
We present a new model of airway smooth muscle (ASM), which surrounds and constricts every airway in the lung and thus plays a central role in the airway constriction associated with asthma. This new model of ASM is based on an extension of sliding filament/crossbridge theory, which explicitly incorporates the length distribution of thick sliding filaments to account for a phenomenon known as dynamic passive length adaptation; the model exhibits good agreement with experimental data for ASM force–length behaviour across multiple scales. Principally these are (nonlinear) force–length loops at short timescales (seconds), parabolic force–length curves at medium timescales (minutes) and length adaptation at longer timescales. This represents a significant improvement on the widely-used cross-bridge models which work so well in or near the isometric regime, and may have significant implications for studies which rely on crossbridge or other dynamic airway smooth muscle models, and thus both airway and lung dynamics. PMID:23721681
A piecewise smooth model of evolutionary game for residential mobility and segregation
NASA Astrophysics Data System (ADS)
Radi, D.; Gardini, L.
2018-05-01
The paper proposes an evolutionary version of a Schelling-type dynamic system to model the patterns of residential segregation when two groups of people are involved. The payoff functions of agents are the individual preferences for integration, which are empirically grounded. Unlike in Schelling's model, where limited levels of tolerance are the driving force of segregation, in the current setup agents benefit from integration. Despite the differences, the evolutionary model shows segregation dynamics qualitatively similar to those of the classical Schelling model: segregation is always a stable equilibrium, while equilibria of integration exist only for peculiar configurations of the payoff functions, and their asymptotic stability is highly sensitive to parameter variations. Moreover, a rich variety of integrated dynamic behaviors can be observed. In particular, the dynamics of the evolutionary game is regulated by a one-dimensional piecewise smooth map with two kink points that is rigorously analyzed using techniques recently developed for piecewise smooth dynamical systems. The investigation reveals that when a stable internal equilibrium exists, the bimodal shape of the map leads to several different kinds of bifurcations, smooth and border-collision, in a complicated interplay. Our global analysis can offer intuition that a social planner may use to maximize integration through social policies that manipulate people's preferences for integration.
NASA Astrophysics Data System (ADS)
Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah
2014-11-01
A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. Findings based on graphical assessment revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC, and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates the proposed model, using a mixed exponential distribution, is the better choice for generating synthetic data for ungauged sites or for sites with insufficient data within the limits of the fitted region.
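Sampling rain-cell intensities from a mixed exponential distribution is straightforward; the mixing weight and component means below are illustrative, not the fitted basin values.

```python
import numpy as np

# Mixed exponential intensity: f(x) = p * Exp(mu1) + (1 - p) * Exp(mu2).
# Parameter values are illustrative only.
rng = np.random.default_rng(4)
p, mu1, mu2 = 0.7, 1.5, 12.0   # mixing weight and component means (mm/h)

n = 100_000
comp = rng.random(n) < p
x = np.where(comp, rng.exponential(mu1, n), rng.exponential(mu2, n))

# Theoretical moments of the mixture for a sanity check
mean = p * mu1 + (1 - p) * mu2
var = p * 2 * mu1**2 + (1 - p) * 2 * mu2**2 - mean**2
print(f"sample mean={x.mean():.2f} (theory {mean:.2f}), "
      f"sample var={x.var():.1f} (theory {var:.1f})")
```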
Kinematics, structural mechanics, and design of origami structures with smooth folds
NASA Astrophysics Data System (ADS)
Peraza Hernandez, Edwin Alexander
Origami provides novel approaches to the fabrication, assembly, and functionality of engineering structures in various fields such as aerospace, robotics, etc. With the increase in complexity of the geometry and materials for origami structures that provide engineering utility, computational models and design methods for such structures have become essential. Currently available models and design methods for origami structures are generally limited to the idealization of the folds as creases of zeroth-order geometric continuity. Such an idealization is not proper for origami structures having non-negligible thickness or maximum curvature at the folds restricted by material limitations. Thus, for general structures, creased folds of merely zeroth-order geometric continuity are not appropriate representations of structural response and a new approach is needed. The first contribution of this dissertation is a model for the kinematics of origami structures having realistic folds of non-zero surface area and exhibiting higher-order geometric continuity, here termed smooth folds. The geometry of the smooth folds and the constraints on their associated kinematic variables are presented. A numerical implementation of the model allowing for kinematic simulation of structures having arbitrary fold patterns is also described. Examples illustrating the capability of the model to capture realistic structural folding response are provided. Subsequently, a method for solving the origami design problem of determining the geometry of a single planar sheet and its pattern of smooth folds that morphs into a given three-dimensional goal shape, discretized as a polygonal mesh, is presented. The design parameterization of the planar sheet and the constraints that allow for a valid pattern of smooth folds and approximation of the goal shape in a known folded configuration are presented. Various testing examples considering goal shapes of diverse geometries are provided. Afterwards, a model for the structural mechanics of origami continuum bodies with smooth folds is presented. Such a model entails the integration of the presented kinematic model and existing plate theories in order to obtain a structural representation for folds having non-zero thickness and comprised of arbitrary materials. The model is validated against finite element analysis. The last contribution addresses the design and analysis of active material-based self-folding structures that morph via simultaneous folding towards a given three-dimensional goal shape starting from a planar configuration. Implementation examples including shape memory alloy (SMA)-based self-folding structures are provided.
NASA Astrophysics Data System (ADS)
Małoszewski, P.; Zuber, A.
1982-06-01
Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which represents real systems more adequately than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fits than other simpler models. Thus, the obtained values of turnover times are more reliable, and the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fits as the multiparameter finite-state mixing-cell models. It has been shown that in the case of a constant tracer input, prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems, a serious mistake may arise from neglecting the different bicarbonate contents of the particular water components.
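The exponential-piston flow model (EPM) combines the two named components; a commonly quoted form of its transit-time distribution is g(τ) = (η/T) exp(−ητ/T + η − 1) for τ ≥ T(1 − 1/η) and 0 otherwise, where T is the turnover time and η the ratio of total to exponential volume. The convolution sketch below uses hypothetical input and parameter values.

```python
import numpy as np

def epm_ttd(tau, T, eta):
    """Exponential-piston flow transit-time distribution:
    g(tau) = (eta/T) * exp(-eta*tau/T + eta - 1) for tau >= T*(1 - 1/eta),
    else 0. eta = 1 recovers the pure exponential model."""
    g = (eta / T) * np.exp(-eta * tau / T + eta - 1.0)
    return np.where(tau >= T * (1.0 - 1.0 / eta), g, 0.0)

# Convolve a tracer input with the TTD, including radioactive decay
# (a tritium-like half-life is assumed purely for illustration).
dt = 0.1                            # years
tau = np.arange(0, 200, dt)
lam = np.log(2) / 12.32             # decay constant, 1/yr
g = epm_ttd(tau, T=25.0, eta=1.5)

t_in = np.arange(0, 60, dt)
c_in = np.where((t_in > 10) & (t_in < 12), 100.0, 5.0)  # hypothetical input (TU)

c_out = dt * np.convolve(c_in, g * np.exp(-lam * tau))[: t_in.size]
print("output concentration at t = 40 yr:", round(c_out[int(40 / dt)], 2), "TU")
```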
Vogl, Matthias; Leidl, Reiner
2016-05-01
The planning of health care management benefits from understanding future trends in demand and costs. For lung diseases in the national German hospital market, we therefore analyze the current structure of care and forecast future trends in key process indicators. We use standardized, patient-level, activity-based costing from a national cost calculation data set of respiratory cases, representing 11.9-14.1 % of all cases in the major diagnostic category "respiratory system" from 2006 to 2012. To forecast hospital admissions, length of stay (LOS), and costs, the best-adjusted models among candidate autoregressive integrated moving average (ARIMA) models and exponential smoothing models are used. The number of cases is predicted to increase substantially, from 1.1 million in 2006 to 1.5 million in 2018 (+2.7 % each year). LOS is expected to decrease from 7.9 to 6.1 days, and overall costs to increase from 2.7 to 4.5 billion euros (+4.3 % each year). Except for lung cancer (-2.3 % each year), costs increase for all respiratory disease areas: surgical interventions +9.2 % each year, COPD +3.9 %, bronchitis and asthma +1.7 %, infections +2.0 %, respiratory failure +2.6 %, and other diagnoses +8.5 % each year. The share of surgical interventions in all costs of respiratory cases increases from 17.8 % in 2006 to 30.8 % in 2018. Overall costs are expected to increase particularly because of an increasing share of expensive surgical interventions and rare diseases, and because of higher intensive care, operating room, and diagnostics and therapy costs.
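Forecasts of this kind can be produced with Holt-Winters exponential smoothing; the sketch below uses statsmodels on synthetic monthly admissions standing in for the national counts.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Holt-Winters (additive trend and seasonality) forecast of monthly
# admissions; the series below is synthetic, not the study's data.
rng = np.random.default_rng(6)
idx = pd.date_range("2006-01", periods=84, freq="MS")      # 2006-2012
trend = np.linspace(90_000, 105_000, idx.size)
season = 8_000 * np.sin(2 * np.pi * idx.month / 12)
y = pd.Series(trend + season + 2_000 * rng.standard_normal(idx.size), index=idx)

fit = ExponentialSmoothing(y, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
fc = fit.forecast(72)                                       # through 2018
print(fc.groupby(fc.index.year).sum().round(-3))            # annual totals
```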
Modelling wildland fire propagation by tracking random fronts
NASA Astrophysics Data System (ADS)
Pagnini, G.; Mentrelli, A.
2013-11-01
Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are considered alternatives to each other because the solution of the reaction-diffusion equation is generally a continuous smooth function with an exponential decay and infinite support, while the level-set method, which is a front-tracking technique, generates a sharp function with finite support. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random character that are extremely important in wildland fire propagation, and as a consequence the fire front acquires a random character, too. Hence a tracking method for random fronts is needed. In particular, the level-set contour is here randomized according to the probability density function of the interface-particle displacement. When the level-set method is developed for tracking a front interface with random motion, the resulting averaged process turns out to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key and characterizing role as in the level-set approach. The resulting model proves suitable for simulating effects due to turbulent convection, such as flank and backing fire, the faster fire spread caused by hot-air pre-heating and ember landing, and also the fire overcoming a firebreak zone, a case not resolved by models based on the level-set method alone. Moreover, the proposed formulation yields a correction to the rate-of-spread formula due to the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour.
Earthquake Potential Models for China
NASA Astrophysics Data System (ADS)
Rong, Y.; Jackson, D. D.
2002-12-01
We present three earthquake potential estimates for magnitude 5.4 and larger earthquakes in China. The potential is expressed as a rate density (probability per unit area, magnitude, and time). The three methods employ smoothed seismicity, geologic slip rate, and geodetic strain rate data. We tested all three estimates, and the published Global Seismic Hazard Assessment Project (GSHAP) model, against earthquake data. We constructed a special earthquake catalog that combines previous catalogs covering different times. We used the special catalog to construct our smoothed seismicity model and to evaluate all models retrospectively. All our models employ a modified Gutenberg-Richter magnitude distribution with three parameters: a multiplicative "a-value," the slope or "b-value," and a "corner magnitude" marking a strong decrease of earthquake rate with magnitude. We assumed the b-value to be constant for the whole study area and estimated the other parameters from regional or local geophysical data. The smoothed seismicity method assumes that the rate density is proportional to the magnitude of past earthquakes and decays approximately as the reciprocal of epicentral distance out to a few hundred kilometers. We derived the upper magnitude limit from the special catalog and estimated local a-values from smoothed seismicity. Earthquakes since January 1, 2000 are quite compatible with the model. For the geologic forecast we adopted the seismic source zones (based on geological, geodetic, and seismicity data) of the GSHAP model. For each zone, we estimated a corner magnitude by applying the Wells and Coppersmith [1994] relationship to the longest fault in the zone, and we determined the a-value from fault slip rates and an assumed locking depth. The geological model fits the earthquake data better than the GSHAP model. We also applied the Wells and Coppersmith relationship to individual faults, but the results conflicted with the earthquake record. For our geodetic model we derived the uniform upper magnitude limit from the special catalog and assumed local a-values proportional to the maximum horizontal strain rate. In prospective tests the geodetic model agrees well with earthquake occurrence. The smoothed seismicity model performs best of the four models.
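A smoothed-seismicity rate density with a roughly reciprocal-distance kernel can be sketched as follows; the regularization distance r0, the cutoff rmax, and the synthetic epicenters are illustrative choices, not the paper's values.

```python
import numpy as np

def rate_density(grid_xy, quakes_xy, r0=5.0, rmax=300.0):
    """Relative rate density: each past epicenter contributes a kernel
    decaying ~1/r out to rmax km; r0 regularizes the singularity at r=0."""
    d = np.hypot(grid_xy[:, None, 0] - quakes_xy[None, :, 0],
                 grid_xy[:, None, 1] - quakes_xy[None, :, 1])   # distances, km
    w = np.where(d < rmax, 1.0 / (d + r0), 0.0)
    dens = w.sum(axis=1)
    return dens / dens.sum()        # normalize to a relative probability map

rng = np.random.default_rng(8)
quakes = rng.uniform(0, 1000, size=(500, 2))   # synthetic epicenters (km)
gx, gy = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
grid = np.c_[gx.ravel(), gy.ravel()]
print("peak relative density:", rate_density(grid, quakes).max())
```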
Evidence for a scale-limited low-frequency earthquake source process
NASA Astrophysics Data System (ADS)
Chestler, S. R.; Creager, K. C.
2017-04-01
We calculate the seismic moments for 34,264 low-frequency earthquakes (LFEs) beneath the Olympic Peninsula, Washington. LFE moments range from 1.4 × 10^10 to 1.9 × 10^12 N m (Mw = 0.7-2.1). While regular earthquakes follow a power-law moment-frequency distribution with a b value near 1 (the number of events increases by a factor of 10 for each unit increase in Mw), we find that the b value is 6 for large LFEs and <1 for small LFEs. The magnitude-frequency distribution for all LFEs is best fit by an exponential distribution with a mean seismic moment (characteristic moment) of 2.0 × 10^11 N m. The moment-frequency distributions for each of the 43 LFE families, or spots on the plate interface where LFEs repeat, can also be fit by exponential distributions. An exponential moment-frequency distribution implies a scale-limited source process. We consider two end-member models in which LFE moment is limited by (1) the amount of slip or (2) the slip area. We favor the area-limited model. Based on the observed exponential distribution of LFE moments and the geodetically observed total slip, we estimate that the total area that slips within an LFE family has a diameter of 300 m. Assuming an area-limited model, we estimate the slips, subpatch diameters, stress drops, and slip rates for LFEs during episodic tremor and slip events. We allow for LFEs to rupture smaller subpatches within the LFE family patch. Models with 1-10 subpatches produce slips of 0.1-1 mm, subpatch diameters of 80-275 m, and stress drops of 30-1000 kPa. While one subpatch is often assumed, we believe 3-10 subpatches are more likely.
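Under an exponential moment-frequency distribution, the characteristic moment is simply the mean moment above the completeness threshold. A minimal sketch with synthetic moments:

```python
import numpy as np

# Exponential moment-frequency model: N(>M) ~ exp(-(M - M0) / Mc), so the
# characteristic moment Mc is the MLE mean excess above the threshold M0.
# The values below are synthetic stand-ins for an LFE catalog.
rng = np.random.default_rng(9)
M0 = 1.4e10                                  # N m, detection threshold
moments = M0 + rng.exponential(2.0e11, 30_000)

Mc = (moments - M0).mean()                   # MLE for the truncated exponential
print(f"characteristic moment = {Mc:.2e} N m")

# Survival-function check at a few moments
for M in (1e11, 5e11, 1e12):
    emp = (moments > M).mean()
    mod = np.exp(-(M - M0) / Mc)
    print(f"M={M:.0e}: empirical {emp:.4f}, model {mod:.4f}")
```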
Water diffusion in silicate glasses: the effect of glass structure
NASA Astrophysics Data System (ADS)
Kuroda, M.; Tachibana, S.
2016-12-01
Water diffusion in silicate melts (glasses) is one of the main controlling factors of magmatism in a volcanic system. Water diffusivity in silicate glasses depends on its own concentration, but the mechanism causing this dependence has not been fully understood yet. In order to construct a general model for water diffusion in various silicate glasses, we performed water diffusion experiments in silica glass and proposed a new water diffusion model [Kuroda et al., 2015]. In the model, water diffusivity is controlled by the concentration of both the main diffusing species (i.e., molecular water) and diffusion pathways, which are determined by the concentrations of hydroxyl groups and network-modifier cations. The model explains well the water diffusivity in various silicate glasses from silica glass to basalt glass. However, the pre-exponential factors of water diffusivity in the various glasses vary by five orders of magnitude, although the pre-exponential factor should ideally represent the jump frequency and jump distance of molecular water and show a much smaller variation. Here, we attribute the large variation of pre-exponential factors to a glass-structure dependence of the activation energy for molecular water diffusion. It has been known that the activation energy depends on the water concentration [Nowak and Behrens, 1997]. The concentration of hydroxyls, which cut the Si-O-Si network in the glass structure, increases with water concentration, lowering the activation energy for water diffusion, probably due to a more fragmented structure. Network-modifier cations are likely to play the same role as water. Taking the effect of glass structure into account, we find that the variation of the pre-exponential factors of water diffusivity in silicate glasses can be much smaller than five orders of magnitude, implying that the diffusion of molecular water in silicate glasses is controlled by the same atomic process.
Mathematical modeling of drying of pretreated and untreated pumpkin.
Tunde-Akintunde, T Y; Ogunlakin, G O
2013-08-01
In this study, the drying characteristics of pretreated and untreated pumpkin were examined in a hot-air dryer at air temperatures within a range of 40-80 °C and a constant air velocity of 1.5 m/s. The drying was observed to be in the falling-rate period, and thus liquid diffusion is the main mechanism of moisture movement from the internal regions to the product surface. The experimental drying data for the pumpkin fruits were used to fit the Exponential, General exponential, Logarithmic, Page, Midilli-Kucuk, and Parabolic models, and the statistical validity of the models tested was determined by non-linear regression analysis. The Parabolic model had the highest R² and the lowest χ² and RMSE values. This indicates that the Parabolic model is appropriate to describe the dehydration behavior of the pumpkin.
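Fitting and comparing the Page and Parabolic models by nonlinear regression, with R² and RMSE as in the study, can be sketched as follows; the moisture-ratio data are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

page = lambda t, k, n: np.exp(-k * t**n)          # Page: MR = exp(-k t^n)
parabolic = lambda t, a, b, c: a + b * t + c * t**2

# Hypothetical moisture-ratio series (t in hours)
t = np.linspace(0.5, 8, 16)
rng = np.random.default_rng(10)
MR = np.clip(parabolic(t, 1.0, -0.21, 0.011)
             + 0.01 * rng.standard_normal(t.size), 0, None)

def stats(f, p):
    r = MR - f(t, *p)
    ss_res, ss_tot = (r**2).sum(), ((MR - MR.mean())**2).sum()
    return 1 - ss_res / ss_tot, np.sqrt((r**2).mean())   # R^2, RMSE

p_page, _ = curve_fit(page, t, MR, p0=(0.2, 1.0))
p_par, _ = curve_fit(parabolic, t, MR, p0=(1.0, -0.1, 0.01))
print("Page      R2=%.4f RMSE=%.4f" % stats(page, p_page))
print("Parabolic R2=%.4f RMSE=%.4f" % stats(parabolic, p_par))
```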
Cosmological models constructed by van der Waals fluid approximation and volumetric expansion
NASA Astrophysics Data System (ADS)
Samanta, G. C.; Myrzakulov, R.
The universe is modeled with the van der Waals fluid approximation, where the van der Waals equation of state contains a single parameter ωv. Analytical solutions to Einstein's field equations are obtained by assuming that the mean scale factor of the metric follows volumetric exponential and power-law expansions. The model describes a rapid expansion in which the acceleration grows exponentially and the van der Waals fluid behaves like an inflaton for an initial epoch of the universe. The model also describes how, as time evolves, the acceleration remains positive but decreases to zero, and the van der Waals fluid approximation behaves like the present accelerated phase of the universe. Finally, it is observed that the model contains a type-III future singularity for volumetric power-law expansion.
Ghatage, Dhairyasheel; Chatterji, Apratim
2013-10-01
We introduce a method to obtain steady-state uniaxial exponential-stretching flow of a fluid (akin to extensional flow) in the incompressible limit, which enables us to study the response of suspended macromolecules to the flow by computer simulations. The flow field is defined by v(x) = εx, where v(x) is the velocity of the fluid and ε is the stretch flow gradient. To eliminate the effect of confining boundaries, we produce the flow in a channel of uniform square cross section with periodic boundary conditions in the directions perpendicular to the flow, while simultaneously maintaining uniform fluid density along the length of the tube. In experiments a perfect elongational flow is obtained only along the axis of symmetry in a four-roll geometry or a filament-stretching rheometer. We can reproduce flow conditions very similar to extensional flow near the axis of symmetry by exponential-stretching flow; we do this by adding the right amounts of fluid along the length of the flow in our simulations. The fluid particles added along the length of the tube are the same fluid particles that exit the channel due to the flow; thus mass conservation is maintained in our model by default. We also suggest a scheme for possible realization of exponential-stretching flow in experiments. To establish our method as a useful tool for studying various soft matter systems in extensional flow, we embed (i) spherical colloids with excluded volume interactions (modeled by the Weeks-Chandler potential) as well as (ii) a bead-spring model of star polymers in the fluid, study their responses to the exponential-stretching flow, and show that the responses of macromolecules in the two flows are very similar. We demonstrate that the variation of the number density of the suspended colloids along the direction of flow is in tune with our expectations. We also conclude from our study of the deformation of star polymers with different numbers of arms f that the critical flow gradient ε_c at which the star undergoes the coil-to-stretch transition is independent of f for f = 2, 5, 10, and 20.
Fast and accurate fitting and filtering of noisy exponentials in Legendre space.
Bao, Guobin; Schild, Detlev
2014-01-01
The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the sum of squared differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method in which the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on average, more precise than least-squares fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic of conventional lowpass filters.
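The representation-and-filtering step can be sketched with numpy's Legendre module; the paper's closed-form parameter retrieval from the Legendre coefficients is not reproduced here, and the signal parameters are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Represent a noisy exponential in Legendre space and filter by truncating
# to a low-dimensional coefficient vector (phase-free noise removal).
rng = np.random.default_rng(12)
t = np.linspace(0, 10, 2000)
y = 2.0 * np.exp(-t / 1.5) + 0.2 * rng.standard_normal(t.size)

x = 2 * t / t[-1] - 1             # map the time axis onto [-1, 1]
coef = L.legfit(x, y, deg=12)     # low-order Legendre representation
y_filt = L.legval(x, coef)        # reconstruction = filtered signal

clean = 2.0 * np.exp(-t / 1.5)
print(f"RMS error: raw {np.std(y - clean):.3f}, "
      f"filtered {np.sqrt(((y_filt - clean)**2).mean()):.3f}")
```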
Barba, Lida; Rodríguez, Nibaldo; Montt, Cecilia
2014-01-01
Two smoothing strategies combined with autoregressive integrated moving average (ARIMA) and autoregressive neural network (ANN) models to improve the forecasting of time series are presented. The forecasting strategy is implemented in two stages. In the first stage, the time series is smoothed using either 3-point moving average smoothing or singular value decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs for one-step-ahead time series forecasting are used. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents of Valparaíso, a Chilean region, from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO, with a MAPE of 15.51%.
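The MA-ARIMA strategy (3-point moving-average smoothing in stage one, one-step-ahead ARIMA in stage two) can be sketched as follows; the series, ARIMA order, and train/test split are illustrative, not the Valparaíso data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Stage 1: 3-point moving-average smoothing of a synthetic weekly series
rng = np.random.default_rng(13)
y = pd.Series(100 + np.cumsum(rng.standard_normal(520)))   # ~10 years weekly
smooth = y.rolling(3, center=True).mean().dropna()

# Stage 2: rolling one-step-ahead ARIMA forecasts on a held-out tail
train, test = smooth[:-12], smooth[-12:]
history = train.copy()
preds = []
for actual in test:
    fit = ARIMA(history.to_numpy(), order=(1, 1, 1)).fit()
    preds.append(fit.forecast(1)[0])
    history = pd.concat([history, pd.Series([actual])], ignore_index=True)

mape = np.mean(np.abs((test.to_numpy() - np.array(preds)) / test.to_numpy())) * 100
print(f"one-step MAPE on the held-out weeks: {mape:.2f}%")
```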
NASA Technical Reports Server (NTRS)
Hawkins, Lawrence Allen
1988-01-01
Experimental results for the rotordynamic stiffness and damping coefficients of a labyrinth-rotor honeycomb-stator seal are presented. The coefficients are compared to the coefficients of a labyrinth-rotor smooth-stator seal having the same geometry. The coefficients are compared to analytical results from a two-control-volume compressible flow model. The experimental results show that the honeycomb stator configuration is more stable than the smooth stator configuration at low rotor speeds. At high rotor speeds and low clearance, the smooth stator seal is more stable. The theoretical model predicts the cross-coupled stiffness of the honeycomb stator seal correctly within 25 percent of measured values. The model provides accurate predictions of direct damping for large clearance seals. Overall, the model does not perform as well for low clearance seals as for high clearance seals.
Campbell, D A; Chkrebtii, O
2013-12-01
Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.