Sample records for exponential smoothing method

  1. Smoothing Forecasting Methods for Academic Library Circulations: An Evaluation and Recommendation.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.; Forys, John W., Jr.

    1986-01-01

    Circulation time-series data from 50 midwest academic libraries were used to test 110 variants of 8 smoothing forecasting methods. Data, methodologies, and illustrations of two recommended methods--the single exponential smoothing method and Brown's one-parameter linear exponential smoothing method--are given. Eight references are cited. (EJS)
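    For readers unfamiliar with the two recommended methods, here is a minimal Python sketch of both; the circulation series and the smoothing constant alpha are illustrative assumptions, not values from the study.

    ```python
    # Minimal sketch of the two recommended methods; sample data and alpha
    # are illustrative assumptions, not taken from the study.

    def single_exponential_smoothing(y, alpha):
        """Single exponential smoothing: s_t = alpha*y_t + (1 - alpha)*s_{t-1}."""
        s = [y[0]]  # initialize with the first observation
        for t in range(1, len(y)):
            s.append(alpha * y[t] + (1 - alpha) * s[-1])
        return s

    def brown_linear_smoothing(y, alpha, horizon=1):
        """Brown's one-parameter linear exponential smoothing: smooth the series
        twice with the same alpha, then combine the two smoothed values into
        level and trend estimates for an h-step-ahead forecast."""
        s1 = single_exponential_smoothing(y, alpha)   # first smoothing
        s2 = single_exponential_smoothing(s1, alpha)  # second smoothing
        level = 2 * s1[-1] - s2[-1]
        trend = alpha / (1 - alpha) * (s1[-1] - s2[-1])
        return level + horizon * trend

    circulation = [1200, 1260, 1310, 1280, 1350, 1400]  # hypothetical monthly loans
    print(single_exponential_smoothing(circulation, alpha=0.3)[-1])
    print(brown_linear_smoothing(circulation, alpha=0.3, horizon=3))
    ```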

  2. [Application of exponential smoothing method in prediction and warning of epidemic mumps].

    PubMed

    Shi, Yun-ping; Ma, Jia-qi

    2010-06-01

    To analyze the daily data on epidemic mumps in a province from 2004 to 2008 and to set up an exponential smoothing model for prediction. The mumps epidemic in 2008 was predicted and warned against by calculating a 7-day moving summation, which removes the weekend effect from the daily reported mumps cases during 2005-2008, and applying exponential smoothing to the data from 2005 to 2007. The performance of Holt-Winters exponential smoothing was good: the warning sensitivity was 76.92%, the specificity was 83.33%, and the timeliness rate was 80%. It is practicable to use the exponential smoothing method to provide early warning of epidemic mumps.
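    The 7-day moving summation described above is simple to reproduce: summing the last 7 days of counts cancels the day-of-week (weekend) effect. A brief sketch with made-up case counts:

    ```python
    # 7-day moving summation, as used above to remove the weekend effect;
    # the daily case counts are made up for demonstration.

    def moving_sum(cases, window=7):
        return [sum(cases[t - window + 1:t + 1]) for t in range(window - 1, len(cases))]

    daily_cases = [12, 15, 14, 13, 11, 4, 3, 14, 16, 15, 12, 10, 5, 2]  # hypothetical
    print(moving_sum(daily_cases))  # weekend dips no longer dominate the series
    ```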

  3. Performance of time-series methods in forecasting the demand for red blood cell transfusion.

    PubMed

    Pereira, Arturo

    2004-05-01

    Planning of future blood collection efforts must be based on adequate forecasts of transfusion demand. In this study, univariate time-series methods were investigated for their performance in forecasting the monthly demand for RBCs at one tertiary-care, university hospital. Three time-series methods were investigated: autoregressive integrated moving average (ARIMA), the Holt-Winters family of exponential smoothing models, and one neural-network-based method. The time series consisted of the monthly demand for RBCs from January 1988 to December 2002 and was divided into two segments: the older one was used to fit or train the models, and the more recent one to test the accuracy of predictions. Performance was compared across forecasting methods by calculating goodness-of-fit statistics, the percentage of months in which forecast-based supply would have met the RBC demand (coverage rate), and the outdate rate. The RBC transfusion series was best fitted by a seasonal ARIMA(0,1,1)(0,1,1)12 model. Over 1-year time horizons, forecasts generated by ARIMA or exponential smoothing lay within the ±10 percent interval of the real RBC demand in 79 percent of months (62% in the case of neural networks). The coverage rates for the three methods were 89, 91, and 86 percent, respectively. Over 2-year time horizons, exponential smoothing largely outperformed the other methods: its predictions lay within the ±10 percent interval of real values in 75 percent of the 24 forecasted months, and the coverage rate was 87 percent. Over 1-year time horizons, predictions of RBC demand generated by ARIMA or exponential smoothing are accurate enough to be of help in planning blood collection efforts. For longer time horizons, exponential smoothing outperforms the other forecasting methods.

  4. How bootstrap can help in forecasting time series with more than one seasonal pattern

    NASA Astrophysics Data System (ADS)

    Cordeiro, Clara; Neves, M. Manuela

    2012-09-01

    The search for the future is an appealing challenge in time series analysis, and the diversity of forecasting methodologies is inevitable and still expanding. Exponential smoothing methods are the launch platform for modelling and forecasting in time series analysis. Recently this methodology has been combined with bootstrapping, revealing good performance: the Boot.EXPOS algorithm, which combines exponential smoothing and bootstrap methodologies, has shown promising results for forecasting time series with one seasonal pattern. For the case of more than one seasonal pattern, the double seasonal Holt-Winters methods and the corresponding exponential smoothing methods were developed. A new challenge was to combine these seasonal methods with the bootstrap and to carry over a resampling scheme similar to that used in the Boot.EXPOS procedure. The performance of this partnership is illustrated for some well-known data sets available in software.

  5. Proceedings of the Third International Workshop on Multistrategy Learning, May 23-25 Harpers Ferry, WV.

    DTIC Science & Technology

    1996-09-16

    approaches are: • Adaptive filtering • Single exponential smoothing (Brown, 1963) • Linear exponential smoothing: Holt's two parameter approach (Holt et al., 1960) • Winters' three parameter method (Winters, 1960) • The Box-Jenkins methodology (ARIMA modeling) (Box and Jenkins, 1976). However, there are two...crucial disadvantages: the most important point in ARIMA modeling is model identification. As shown in...

  6. Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information

    NASA Astrophysics Data System (ADS)

    Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.

    2018-04-01

    The aims of this research are to model hotspots and to forecast hotspots for 2017 in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt's exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performed better than Holt's exponential smoothing. The hotspot models obtained using the Box-Jenkins method were ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods used in this research by Root Mean Squared Error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the lowest RMSE; the Loess decomposition model was therefore used to forecast the number of hotspots. The forecasting results indicate that hotspots tend to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remain stationary in East Kutai.

  7. Smooth centile curves for skew and kurtotic data modelled using the Box-Cox power exponential distribution.

    PubMed

    Rigby, Robert A; Stasinopoulos, D Mikis

    2004-10-15

    The Box-Cox power exponential (BCPE) distribution, developed in this paper, provides a model for a dependent variable Y exhibiting both skewness and kurtosis (leptokurtosis or platykurtosis). The distribution is defined by a power transformation Y^ν having a shifted and scaled (truncated) standard power exponential distribution with parameter τ. The distribution has four parameters and is denoted BCPE(μ, σ, ν, τ). The parameters μ, σ, ν and τ may be interpreted as relating to location (median), scale (approximate coefficient of variation), skewness (transformation to symmetry) and kurtosis (power exponential parameter), respectively. Smooth centile curves are obtained by modelling each of the four parameters of the distribution as a smooth non-parametric function of an explanatory variable. A Fisher scoring algorithm is used to fit the non-parametric model by maximizing a penalized likelihood. The first and expected second and cross derivatives of the likelihood with respect to μ, σ, ν and τ, required for the algorithm, are provided. The centiles of the BCPE distribution are easy to calculate, so it is highly suited to centile estimation. This application of the BCPE distribution to smooth centile estimation generalizes the LMS method of centile estimation to data exhibiting kurtosis (as well as skewness) different from that of a normal distribution and is named here the LMSP method of centile estimation. The LMSP method of centile estimation is applied to modelling the body mass index of Dutch males against age. 2004 John Wiley & Sons, Ltd.

  8. Demand Forecasting: DLA’S Aviation Supply Chain High Value Products

    DTIC Science & Technology

    2015-04-09

    ...program at USS CONSTELLATION (CV 64), San Diego, CA. LCDR Carlos Lopez. Education: MBA in Supply Chain Management, Naval Postgraduate School; BS in... [Excerpt from the list of figures:] ...Exponential Smoothing Forecasts; Figure 80. NIIN 01-463-4340 Seasonal Exponential Smoothing Forecast; Figure ... NIIN 01-507-5310 Seasonal Exponential Smoothing; Figure 102. NIIN 01-507-5310 12-Month Forecast Simulation

  9. ARIMA model and exponential smoothing method: A comparison

    NASA Astrophysics Data System (ADS)

    Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri

    2013-04-01

    This study shows the comparison between the Autoregressive Integrated Moving Average (ARIMA) model and the exponential smoothing method in making predictions. The comparison focuses on the ability of both methods to produce forecasts with different numbers of data sources and different lengths of forecasting period. For this purpose, three different time series are used in the comparison process: the price of crude palm oil (RM/tonne), the exchange rate of the Ringgit Malaysia (RM) against the Great Britain Pound (GBP), and the price of SMR 20 rubber (cents/kg). The forecasting accuracy of each model is then measured by examining the prediction errors using the Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute Deviation (MAD). The study shows that the ARIMA model can produce a better prediction for long-term forecasting with limited data sources, but cannot produce a better prediction for a time series with a narrow range from one point to another, as in the exchange rate series. On the contrary, the exponential smoothing method can produce a better forecast for the exchange rate series, which has a narrow range from one point to another, while it cannot produce a better prediction for a longer forecasting period.
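    The three accuracy measures used in this comparison are straightforward to compute; a minimal sketch follows, with hypothetical data rather than the study's series:

    ```python
    # The three accuracy measures used in the comparison above; the data
    # are hypothetical, not the study's series.

    def mse(actual, forecast):
        return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

    def mape(actual, forecast):  # in percent; assumes no zero actuals
        return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

    def mad(actual, forecast):
        return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

    actual = [102.0, 98.5, 101.2, 105.0]    # hypothetical observations
    forecast = [100.0, 99.0, 102.5, 103.8]  # hypothetical forecasts
    print(mse(actual, forecast), mape(actual, forecast), mad(actual, forecast))
    ```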

  10. Determining the Optimal Values of Exponential Smoothing Constants--Does Solver Really Work?

    ERIC Educational Resources Information Center

    Ravinder, Handanhal V.

    2013-01-01

    A key issue in exponential smoothing is the choice of the values of the smoothing constants used. One approach that is becoming increasingly popular in introductory management science and operations management textbooks is the use of Solver, an Excel-based non-linear optimizer, to identify values of the smoothing constants that minimize a measure…
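    Solver is specific to Excel, but the underlying idea, searching for the smoothing constant that minimizes an error measure, can be sketched with any bounded optimizer. Below is a rough open-source analogue using SciPy; the demand series is a hypothetical example:

    ```python
    # Rough open-source analogue of the Solver approach described above:
    # search for the alpha that minimizes in-sample one-step-ahead MSE.
    from scipy.optimize import minimize_scalar

    def ses_mse(alpha, y):
        """One-step-ahead MSE of single exponential smoothing with this alpha."""
        s, err = y[0], 0.0
        for t in range(1, len(y)):
            err += (y[t] - s) ** 2  # the forecast for period t is s_{t-1}
            s = alpha * y[t] + (1 - alpha) * s
        return err / (len(y) - 1)

    y = [42, 45, 49, 47, 52, 50, 55, 53]  # hypothetical demand series
    res = minimize_scalar(ses_mse, bounds=(0.0, 1.0), method="bounded", args=(y,))
    print(f"alpha* = {res.x:.3f}, MSE = {res.fun:.2f}")
    ```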

  11. Predictability of monthly temperature and precipitation using automatic time series forecasting methods

    NASA Astrophysics Data System (ADS)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2018-02-01

    We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.

  12. A study on industrial accident rate forecasting and program development of estimated zero accident time in Korea.

    PubMed

    Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won

    2011-01-01

    To begin a zero accident campaign for industry, the first task is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical changes in the business environment after the beginning of the zero accident campaign, using quantitative time series analysis methods. These methods include the sum of squared errors (SSE), the regression analysis method (RAM), the exponential smoothing method (ESM), the double exponential smoothing method (DESM), the auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). A program was developed to estimate the accident rate, the zero accident time, and the achievement probability for an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop the zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.

  13. An optimized Nash nonlinear grey Bernoulli model based on particle swarm optimization and its application in prediction for the incidence of Hepatitis B in Xinjiang, China.

    PubMed

    Zhang, Liping; Zheng, Yanling; Wang, Kai; Zhang, Xueliang; Zheng, Yujian

    2014-06-01

    In this paper, by using a particle swarm optimization algorithm to solve the optimal parameter estimation problem, an improved Nash nonlinear grey Bernoulli model termed PSO-NNGBM(1,1) is proposed. To test the forecasting performance, the optimized model is applied for forecasting the incidence of hepatitis B in Xinjiang, China. Four models, traditional GM(1,1), grey Verhulst model (GVM), original nonlinear grey Bernoulli model (NGBM(1,1)) and Holt-Winters exponential smoothing method, are also established for comparison with the proposed model under the criteria of mean absolute percentage error and root mean square percent error. The prediction results show that the optimized NNGBM(1,1) model is more accurate and performs better than the traditional GM(1,1), GVM, NGBM(1,1) and Holt-Winters exponential smoothing method. Copyright © 2014. Published by Elsevier Ltd.

  14. Using Exponential Smoothing to Specify Intervention Models for Interrupted Time Series.

    ERIC Educational Resources Information Center

    Mandell, Marvin B.; Bretschneider, Stuart I.

    1984-01-01

    The authors demonstrate how exponential smoothing can play a role in the identification of the intervention component of an interrupted time-series design model that is analogous to the role that the sample autocorrelation and partial autocorrelation functions serve in the identification of the noise portion of such a model. (Author/BW)

  15. Systematic strategies for the third industrial accident prevention plan in Korea.

    PubMed

    Kang, Young-sig; Yang, Sung-hwan; Kim, Tae-gu; Kim, Day-sung

    2012-01-01

    To minimize industrial accidents, it is critical to evaluate a firm's priorities for prevention factors and strategies, since such evaluation provides decisive information for preventing industrial accidents and maintaining safety management. Therefore, this paper proposes the evaluation of priorities through statistical testing of prevention factors with a cause analysis in a cause-and-effect model. A priority matrix criterion is proposed to apply the ranking and to ensure the objectivity of the questionnaire results. This paper used the regression method (RA), the exponential smoothing method (ESM), the double exponential smoothing method (DESM), the autoregressive integrated moving average (ARIMA) model and the proposed analytical function method (PAFM) to analyze trends in accident data, leading to accurate predictions. The questionnaire results were standardized for workers and managers in manufacturing and construction companies with fewer than 300 employees, located in the central Korean metropolitan areas where fatal accidents have occurred. Finally, a strategy was provided for constructing safety management for the third industrial accident prevention plan, together with a forecasting method for occupational accident rates and fatality rates for occupational accidents per 10,000 people.

  16. Inventory control of raw material using silver meal heuristic method in PR. Trubus Alami Malang

    NASA Astrophysics Data System (ADS)

    Ikasari, D. M.; Lestari, E. R.; Prastya, E.

    2018-03-01

    The purpose of this study was to compare the total inventory cost calculated using the method applied by PR. Trubus Alami with that of the Silver Meal Heuristic (SMH) method. The study started by forecasting the cigarette demand from July 2016 to June 2017 (48 weeks) using the additive decomposition forecasting method. Additive decomposition was used because it has the lowest Mean Absolute Deviation (MAD) and Mean Squared Deviation (MSD) compared to other methods such as multiplicative decomposition, moving average, single exponential smoothing, and double exponential smoothing. The forecasting results were then converted into raw material needs and further processed using the SMH method to obtain the inventory cost. As expected, the results show that the order frequency using the SMH method was smaller than that using the method applied by PR. Trubus Alami, which affected the total inventory cost. The results suggest that using the SMH method gave a 29.41% lower inventory cost, a cost difference of IDR 21,290,622. The findings therefore indicate that PR. Trubus Alami should apply the SMH method if the company wants to reduce its total inventory cost.

  17. Lightweight filter architecture for energy efficient mobile vehicle localization based on a distributed acoustic sensor network.

    PubMed

    Kim, Keonwook

    2013-08-23

    The generic properties of an acoustic signal provide numerous benefits for localization by applying energy-based methods over a deployed wireless sensor network (WSN). However, the signal generated by a stationary target utilizes a significant amount of bandwidth and power in the system without providing further position information. For vehicle localization, this paper proposes a novel proximity velocity vector estimator (PVVE) node architecture in order to capture the energy from a moving vehicle and reject the signal from motionless automobiles around the WSN node. A cascade structure between analog envelope detector and digital exponential smoothing filter presents the velocity vector-sensitive output with low analog circuit and digital computation complexity. The optimal parameters in the exponential smoothing filter are obtained by analytical and mathematical methods for maximum variation over the vehicle speed. For stationary targets, the derived simulation based on the acoustic field parameters demonstrates that the system significantly reduces the communication requirements with low complexity and can be expected to extend the operation time considerably.
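    The digital stage of the cascade described above is, in essence, a one-pole exponential smoothing filter, which is what keeps the computation cheap on a sensor node. A minimal streaming sketch (the alpha value and samples are assumptions):

    ```python
    # One-pole exponential smoothing filter, the digital stage of the cascade
    # described above; alpha and the envelope samples are assumptions.

    class ExponentialSmoothingFilter:
        """y_t = alpha*x_t + (1 - alpha)*y_{t-1}: one multiply-add per sample."""
        def __init__(self, alpha):
            self.alpha = alpha
            self.state = None
        def update(self, x):
            if self.state is None:
                self.state = x  # seed with the first sample
            else:
                self.state = self.alpha * x + (1 - self.alpha) * self.state
            return self.state

    f = ExponentialSmoothingFilter(alpha=0.2)
    for sample in [0.10, 0.90, 0.80, 0.85, 0.20, 0.15]:  # hypothetical envelope
        print(round(f.update(sample), 3))
    ```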

  18. Pointwise convergence of derivatives of Lagrange interpolation polynomials for exponential weights

    NASA Astrophysics Data System (ADS)

    Damelin, S. B.; Jung, H. S.

    2005-01-01

    For a general class of exponential weights on the line and on (-1,1), we study pointwise convergence of the derivatives of Lagrange interpolation. Our weights include even weights of smooth polynomial decay near ±∞ (Freud weights), even weights of faster than smooth polynomial decay near ±∞ (Erdos weights) and even weights which vanish strongly near ±1, for example Pollaczek type weights.

  19. Nuclear counting filter based on a centered Skellam test and a double exponential smoothing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coulon, Romain; Kondrasovs, Vladimir; Dumazert, Jonathan

    2015-07-01

    Online nuclear counting represents a challenge due to the stochastic nature of radioactivity. The count data have to be filtered in order to provide a precise and accurate estimation of the count rate, with a response time compatible with the application in view. An innovative filter addressing this issue is presented in this paper. It is a nonlinear filter based on a Centered Skellam Test (CST), giving a local maximum likelihood estimation of the signal under a Poisson distribution assumption. This nonlinear approach makes it possible to smooth the counting signal while maintaining a fast response when abrupt changes in activity occur. The filter has been improved by the implementation of Brown's double exponential smoothing (BES). The filter has been validated and compared to other state-of-the-art smoothing filters; the CST-BES filter shows a significant improvement over all tested smoothing filters. (authors)
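    A loose sketch of the filter's idea follows; it is not the authors' implementation. Under a constant count rate, the difference of two successive Poisson counts follows a centered Skellam distribution with variance n1 + n2, so |n1 - n2| / sqrt(n1 + n2) serves as a change detector (here via a normal approximation); between detected changes, Brown's double exponential smoothing refines the rate estimate.

    ```python
    # Loose sketch of a CST-BES-style counting filter (normal approximation
    # to the Skellam test; not the authors' implementation). Counts, alpha,
    # and the test threshold are illustrative assumptions.
    from math import sqrt

    def smooth_counts(counts, alpha=0.3, z_crit=3.0):
        s1 = s2 = float(counts[0])
        estimates = [float(counts[0])]
        for prev, cur in zip(counts, counts[1:]):
            if prev + cur > 0 and abs(cur - prev) / sqrt(prev + cur) > z_crit:
                s1 = s2 = float(cur)  # abrupt activity change: reset for fast response
            else:
                s1 = alpha * cur + (1 - alpha) * s1  # first smoothing
                s2 = alpha * s1 + (1 - alpha) * s2   # second smoothing
            estimates.append(2 * s1 - s2)  # Brown's level estimate
        return estimates

    counts = [50, 48, 53, 49, 51, 120, 118, 122]  # hypothetical counts per interval
    print([round(e, 1) for e in smooth_counts(counts)])
    ```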

  20. Exponential Family Functional data analysis via a low-rank model.

    PubMed

    Li, Gen; Huang, Jianhua Z; Shen, Haipeng

    2018-05-08

    In many applications, non-Gaussian data such as binary or count data are observed over a continuous domain, and there exists a smooth underlying structure describing such data. We develop a new functional data method to deal with this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution and that the matrix of the canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only standard one-way functional data, but also two-way (or bivariate) functional data. In addition, we introduce a new cross validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to a real application of the UK mortality study, where data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern. © 2018, The International Biometric Society.

  1. The Prediction of Teacher Turnover Employing Time Series Analysis.

    ERIC Educational Resources Information Center

    Costa, Crist H.

    The purpose of this study was to combine knowledge of teacher demographic data with time-series forecasting methods to predict teacher turnover. Moving averages and exponential smoothing were used to forecast discrete time series. The study used data collected from the 22 largest school districts in Iowa, designated as FACT schools. Predictions…

  2. Lightweight Filter Architecture for Energy Efficient Mobile Vehicle Localization Based on a Distributed Acoustic Sensor Network

    PubMed Central

    Kim, Keonwook

    2013-01-01

    The generic properties of an acoustic signal provide numerous benefits for localization by applying energy-based methods over a deployed wireless sensor network (WSN). However, the signal generated by a stationary target utilizes a significant amount of bandwidth and power in the system without providing further position information. For vehicle localization, this paper proposes a novel proximity velocity vector estimator (PVVE) node architecture in order to capture the energy from a moving vehicle and reject the signal from motionless automobiles around the WSN node. A cascade structure between analog envelope detector and digital exponential smoothing filter presents the velocity vector-sensitive output with low analog circuit and digital computation complexity. The optimal parameters in the exponential smoothing filter are obtained by analytical and mathematical methods for maximum variation over the vehicle speed. For stationary targets, the derived simulation based on the acoustic field parameters demonstrates that the system significantly reduces the communication requirements with low complexity and can be expected to extend the operation time considerably. PMID:23979482

  3. An impact analysis of forecasting methods and forecasting parameters on bullwhip effect

    NASA Astrophysics Data System (ADS)

    Silitonga, R. Y. H.; Jelly, N.

    2018-04-01

    The bullwhip effect is an increase in the variance of demand fluctuations from downstream to upstream of a supply chain. Forecasting methods and forecasting parameters are among the factors recognized as affecting the bullwhip phenomenon. To study these factors, we can develop simulations. Previous studies have simulated the bullwhip effect in several ways, such as mathematical equation modelling, information control modelling, and computer programs. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show the change in the bullwhip effect ratio resulting from differences in forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving average period, the smoothing parameter, the signalling factor, and the safety stock factor. The simulations showed that decreasing the moving average period, increasing the smoothing parameter, and increasing the signalling factor can create a bigger bullwhip effect ratio, while the safety stock factor had no impact on the bullwhip effect.
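    The bullwhip effect ratio such a tool reports is the variance of the orders placed upstream divided by the variance of customer demand. As a hedged illustration of the mechanism (not the Bullwhip Explorer implementation), a single-echelon order-up-to policy with a moving-average forecast can be simulated in a few lines; all parameters below are assumptions:

    ```python
    # Minimal single-echelon bullwhip simulation (order-up-to policy with a
    # moving-average forecast); illustrative only, not Bullwhip Explorer.
    import random
    import statistics

    def bullwhip_ratio(ma_period=4, lead_time=2, n=2000, seed=1):
        random.seed(seed)
        demand = [random.gauss(100, 10) for _ in range(n)]
        orders, history, prev_target = [], [], None
        for d in demand:
            history.append(d)
            if len(history) < ma_period:
                continue
            forecast = sum(history[-ma_period:]) / ma_period  # moving average
            target = forecast * (lead_time + 1)               # order-up-to level
            if prev_target is not None:
                orders.append(max(0.0, target - prev_target + d))  # O_t = S_t - S_{t-1} + D_t
            prev_target = target
        return statistics.variance(orders) / statistics.variance(demand)

    # Shorter moving-average periods react harder to noise -> larger ratio:
    for p in (2, 4, 8):
        print(p, round(bullwhip_ratio(ma_period=p), 2))
    ```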

  4. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    NASA Astrophysics Data System (ADS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) was found to be effective for moisture-time curves. The idea of this method consists of an approximation of the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem, permitting the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, using the coefficient of determination (R²) and the root mean square error (RMSE). The results showed that the Two Term model best describes the drying behavior. Besides that, smoothing the drying rate using CS proved to be an effective method for moisture-time curves, providing good estimates as well as filling in missing moisture content data of seaweed Kappaphycus Striatum Variety Durian in the solar dryer under the conditions tested.

  5. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.

    2014-06-19

    The solar drying experiment on seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. Generally the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) was found to be effective for moisture-time curves. The idea of this method consists of an approximation of the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem, permitting the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, using the coefficient of determination (R²) and the root mean square error (RMSE). The results showed that the Two Term model best describes the drying behavior. Besides that, smoothing the drying rate using CS proved to be an effective method for moisture-time curves, providing good estimates as well as filling in missing moisture content data of seaweed Kappaphycus Striatum Variety Durian in the solar dryer under the conditions tested.

  6. Knowing what to expect, forecasting monthly emergency department visits: A time-series analysis.

    PubMed

    Bergs, Jochen; Heerinckx, Philipe; Verelst, Sandra

    2014-04-01

    To evaluate an automatic forecasting algorithm in order to predict the number of monthly emergency department (ED) visits one year ahead. We collected retrospective data of the number of monthly visiting patients for a 6-year period (2005-2011) from 4 Belgian Hospitals. We used an automated exponential smoothing approach to predict monthly visits during the year 2011 based on the first 5 years of the dataset. Several in- and post-sample forecasting accuracy measures were calculated. The automatic forecasting algorithm was able to predict monthly visits with a mean absolute percentage error ranging from 2.64% to 4.8%, indicating an accurate prediction. The mean absolute scaled error ranged from 0.53 to 0.68 indicating that, on average, the forecast was better compared with in-sample one-step forecast from the naïve method. The applied automated exponential smoothing approach provided useful predictions of the number of monthly visits a year in advance. Copyright © 2013 Elsevier Ltd. All rights reserved.
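    The mean absolute scaled error (MASE) mentioned above scales the out-of-sample error by the in-sample error of a naive one-step forecast, so values below 1 mean the forecast beats the naive method. A minimal sketch with hypothetical monthly visit counts:

    ```python
    # Mean absolute scaled error, as reported above; data are hypothetical.

    def mase(actual, forecast, train, m=1):
        """m = 1 scales by the naive one-step in-sample error;
        use m = 12 for a seasonal-naive scaling of monthly data."""
        scale = sum(abs(train[t] - train[t - m]) for t in range(m, len(train))) / (len(train) - m)
        return sum(abs(a - f) for a, f in zip(actual, forecast)) / (len(actual) * scale)

    train = [310, 295, 330, 342, 305, 318, 361, 349, 322, 336, 358, 344]  # hypothetical
    actual, forecast = [350, 362], [341, 355]
    print(round(mase(actual, forecast, train), 2))  # < 1 beats the naive forecast
    ```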

  7. Enhancement of Markov chain model by integrating exponential smoothing: A case study on Muslims marriage and divorce

    NASA Astrophysics Data System (ADS)

    Jamaluddin, Fadhilah; Rahim, Rahela Abdul

    2015-12-01

    The Markov chain has been in use since 1913 for studying the flow of data over consecutive years and for forecasting. An important requirement in Markov chain modeling is an accurate transition probability matrix (TPM); however, obtaining a suitable TPM is difficult, especially in long-term modeling, owing to the unavailability of data. This paper aims to enhance the classical Markov chain by introducing the exponential smoothing technique into the development of an appropriate TPM.
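    One plausible reading of this enhancement, sketched below under stated assumptions: the TPM is re-estimated each period, and successive empirical TPMs are blended by exponential smoothing so that recent transition behaviour is weighted more heavily. The states, data, and alpha are illustrative, not from the paper:

    ```python
    # Hedged sketch of blending successive empirical TPMs by exponential
    # smoothing; states, data, and alpha are illustrative assumptions.

    def smooth_tpm(yearly_tpms, alpha=0.4):
        """Exponentially smooth a sequence of empirical TPMs (lists of rows)."""
        smoothed = [row[:] for row in yearly_tpms[0]]
        for tpm in yearly_tpms[1:]:
            for i, row in enumerate(tpm):
                for j, p in enumerate(row):
                    smoothed[i][j] = alpha * p + (1 - alpha) * smoothed[i][j]
        # guard against rounding drift: renormalize each row to sum to 1
        return [[p / sum(row) for p in row] for row in smoothed]

    # Two states, e.g. "married" and "divorced", observed over three years:
    tpms = [[[0.90, 0.10], [0.30, 0.70]],
            [[0.88, 0.12], [0.25, 0.75]],
            [[0.85, 0.15], [0.20, 0.80]]]
    print(smooth_tpm(tpms))
    ```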

  8. Spectrum of Lyapunov exponents of non-smooth dynamical systems of integrate-and-fire type.

    PubMed

    Zhou, Douglas; Sun, Yi; Rangan, Aaditya V; Cai, David

    2010-04-01

    We discuss how to characterize the long-time dynamics of non-smooth dynamical systems, such as integrate-and-fire (I&F)-like neuronal networks, using Lyapunov exponents, and present a stable numerical method for the accurate evaluation of the spectrum of Lyapunov exponents for this large class of dynamics. These dynamics contain (i) jump conditions, as in the firing-reset dynamics, and (ii) degeneracy, such as in the refractory period, in which voltage-like variables of the network collapse to a single constant value. Using networks of linear I&F neurons, exponential I&F neurons, and I&F neurons with adaptive threshold, we illustrate our method and discuss the rich dynamics of these networks.

  9. Exponential smoothing weighted correlations

    NASA Astrophysics Data System (ADS)

    Pozzi, F.; Di Matteo, T.; Aste, T.

    2012-06-01

    In many practical applications, correlation matrices might be affected by the "curse of dimensionality" and by an excessive sensitiveness to outliers and remote observations. These shortcomings can cause problems of statistical robustness especially accentuated when a system of dynamic correlations over a running window is concerned. These drawbacks can be partially mitigated by assigning a structure of weights to observational events. In this paper, we discuss Pearson's ρ and Kendall's τ correlation matrices, weighted with an exponential smoothing, computed on moving windows using a data-set of daily returns for 300 NYSE highly capitalized companies in the period between 2001 and 2003. Criteria for jointly determining optimal weights together with the optimal length of the running window are proposed. We find that the exponential smoothing can provide more robust and reliable dynamic measures and we discuss that a careful choice of the parameters can reduce the autocorrelation of dynamic correlations whilst keeping significance and robustness of the measure. Weighted correlations are found to be smoother and recovering faster from market turbulence than their unweighted counterparts, helping also to discriminate more effectively genuine from spurious correlations.
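    In the spirit of the weighting scheme above, an exponentially weighted Pearson correlation over a window of n observations can be sketched as follows; the decay constant theta and the returns are illustrative assumptions:

    ```python
    # Exponentially weighted Pearson correlation over a running window,
    # sketched after the scheme above; theta and the returns are assumptions.
    import math

    def exp_weights(n, theta):
        """Weights w_t proportional to exp((t - n)/theta), oldest first, summing to 1."""
        w = [math.exp((t - n) / theta) for t in range(1, n + 1)]
        s = sum(w)
        return [x / s for x in w]

    def weighted_corr(x, y, w):
        mx = sum(wi * xi for wi, xi in zip(w, x))
        my = sum(wi * yi for wi, yi in zip(w, y))
        cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
        vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y))
        return cov / math.sqrt(vx * vy)

    x = [0.010, -0.020, 0.015, 0.007, -0.010]  # hypothetical daily returns, oldest first
    y = [0.008, -0.015, 0.020, 0.004, -0.012]
    w = exp_weights(len(x), theta=3.0)
    print(round(weighted_corr(x, y, w), 3))
    ```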

  10. Smoothing Polymer Surfaces by Solvent-Vapor Exposure

    NASA Astrophysics Data System (ADS)

    Anthamatten, Mitchell

    2003-03-01

    Ultra-smooth polymer surfaces are of great importance in a large body of technical applications such as optical coatings, supermirrors, waveguides, paints, and fusion targets. We are investigating a simple approach to controlling surface roughness: by temporarily swelling the polymer with solvent molecules. As the solvent penetrates into the polymer, its viscosity is lowered, and surface tension forces drive surface flattening. To investigate sorption kinetics and surface-smoothing phenomena, a series of vapor-deposited poly(amic acid) films were exposed to dimethyl sulfoxide vapors. During solvent exposure, the surface topology was continuously monitored using light interference microscopy. The resulting power spectra indicate that high-frequency defects smooth faster than low-frequency defects. This frequency dependence was studied by depositing polymer films onto a series of 2D sinusoidal surfaces and performing smoothing experiments. Results show that the amplitudes of the sinusoidal surfaces decay exponentially with solvent exposure time, and the exponential decay constants are proportional to surface frequency. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

  11. Exponentially accurate approximations to piece-wise smooth periodic functions

    NASA Technical Reports Server (NTRS)

    Greer, James; Banerjee, Saheb

    1995-01-01

    A family of simple, periodic basis functions with 'built-in' discontinuities are introduced, and their properties are analyzed and discussed. Some of their potential usefulness is illustrated in conjunction with the Fourier series representations of functions with discontinuities. In particular, it is demonstrated how they can be used to construct a sequence of approximations which converges exponentially in the maximum norm to a piece-wise smooth function. The theory is illustrated with several examples and the results are discussed in the context of other sequences of functions which can be used to approximate discontinuous functions.

  12. Multivariate Time Series Forecasting of Crude Palm Oil Price Using Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Kanchymalay, Kasturi; Salim, N.; Sukprasert, Anupong; Krishnan, Ramesh; Raba'ah Hashim, Ummi

    2017-08-01

    The aim of this paper was to study the correlation between the crude palm oil (CPO) price, selected vegetable oil prices (soybean oil, coconut oil, olive oil, rapeseed oil and sunflower oil), the crude oil price, and the monthly exchange rate. Comparative analysis was then performed on CPO price forecasting results using machine learning techniques. Monthly CPO prices, selected vegetable oil prices, crude oil prices and monthly exchange rate data from January 1987 to February 2017 were utilized. Preliminary analysis showed a positive and high correlation between the CPO price and the soybean oil price, and also between the CPO price and the crude oil price. Experiments were conducted using multi-layer perceptron, support vector regression and Holt-Winters exponential smoothing techniques. The results were assessed using the criteria of root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE) and direction accuracy (DA). Among these three techniques, support vector regression (SVR) with the sequential minimal optimization (SMO) algorithm showed relatively better results than the multi-layer perceptron and the Holt-Winters exponential smoothing method.

  13. Application of Holt exponential smoothing and ARIMA method for data population in West Java

    NASA Astrophysics Data System (ADS)

    Supriatna, A.; Susanti, D.; Hertini, E.

    2017-01-01

    One time series method that is often used to predict data containing a trend is Holt's method. Holt's method applies separate smoothing parameters to the original data, with the aim of smoothing the trend value. In addition to Holt's method, the ARIMA method can be used on a wide variety of data, including data containing a trend pattern. The actual population data from 1998-2015 contain a trend, so both the Holt and ARIMA methods can be applied to obtain predicted values for several periods. The best method is identified by the smallest MAPE and MAE errors. The result using the Holt method is a population of 47,205,749 in 2016, 47,535,324 in 2017, and 48,041,672 in 2018, with a MAPE of 0.469744 and an MAE of 189,731. The result using the ARIMA method is 46,964,682 in 2016, 47,342,189 in 2017, and 47,899,696 in 2018, with a MAPE of 0.4380 and an MAE of 176,626.
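    Holt's recursions are short enough to state directly; a minimal sketch follows, with an illustrative series and smoothing constants rather than the study's values:

    ```python
    # Minimal sketch of Holt's linear-trend smoothing; the series (in millions)
    # and the smoothing constants are illustrative, not the study's values.

    def holt_forecast(y, alpha, beta, horizon):
        level, trend = y[0], y[1] - y[0]  # simple initialization
        for t in range(1, len(y)):
            prev_level = level
            level = alpha * y[t] + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
        return [level + h * trend for h in range(1, horizon + 1)]

    population = [43.0, 43.5, 44.1, 44.8, 45.3, 45.9, 46.5]  # hypothetical, millions
    print(holt_forecast(population, alpha=0.8, beta=0.2, horizon=3))
    ```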

  14. [Selection of a SHF-plasma device for carbon dioxide and hydrogen recycling in a physical-chemical life support system].

    PubMed

    Klimarev, S I

    2003-01-01

    A waveguide SHF plasmotron was chosen for carbon dioxide and hydrogen recycling in a low-temperature plasma in the Bosch reactor. To increase the electric intensity within the discharge capacitor, the thickness of the thin waveguide wall was changed to 10 mm. A method was proposed for calculating the compensated exponential smooth transition aligning two similar lines (waveguides), with sections of 72 x 34 mm and 72 x 10 mm, to transfer SHF energy from the generator to the plasma. The calculation of the smooth transition has been used in the final refinement of the SHF plasmotron design as a component of a physical-chemical LSS.

  15. Intermittent Demand Forecasting in a Tertiary Pediatric Intensive Care Unit.

    PubMed

    Cheng, Chen-Yang; Chiang, Kuo-Liang; Chen, Meng-Yin

    2016-10-01

    Forecasts of the demand for medical supplies both directly and indirectly affect the operating costs and the quality of the care provided by health care institutions. Specifically, overestimating demand induces an inventory surplus, whereas underestimating demand possibly compromises patient safety. Uncertainty in forecasting the consumption of medical supplies generates intermittent demand events. The intermittent demand patterns for medical supplies are generally classified as lumpy, erratic, smooth, and slow-moving demand. This study was conducted with the purpose of advancing a tertiary pediatric intensive care unit's efforts to achieve a high level of accuracy in its forecasting of the demand for medical supplies. On this point, several demand forecasting methods were compared in terms of the forecast accuracy of each. The results confirm that applying Croston's method combined with a single exponential smoothing method yields the most accurate results for forecasting lumpy, erratic, and slow-moving demand, whereas the Simple Moving Average (SMA) method is the most suitable for forecasting smooth demand. In addition, when the classification of demand consumption patterns were combined with the demand forecasting models, the forecasting errors were minimized, indicating that this classification framework can play a role in improving patient safety and reducing inventory management costs in health care institutions.
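    Croston's method, referenced above, smooths the non-zero demand sizes and the intervals between demands separately with single exponential smoothing and forecasts their ratio. A hedged sketch with hypothetical usage data:

    ```python
    # Hedged sketch of Croston's method for intermittent demand; the usage
    # series and alpha are hypothetical.

    def croston(demand, alpha=0.1):
        size = interval = None
        q = 1  # periods since the last non-zero demand
        for d in demand:
            if d > 0:
                if size is None:  # first non-zero demand initializes both series
                    size, interval = float(d), float(q)
                else:
                    size = alpha * d + (1 - alpha) * size
                    interval = alpha * q + (1 - alpha) * interval
                q = 1
            else:
                q += 1
        return size / interval  # forecast demand rate per period

    usage = [0, 0, 3, 0, 0, 0, 2, 0, 4, 0, 0, 1]  # hypothetical item usage
    print(round(croston(usage), 3))
    ```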

  16. Nonholonomic stability aspects of piecewise holonomic systems

    NASA Astrophysics Data System (ADS)

    Ruina, Andy

    1998-10-01

    We consider mechanical systems with intermittent contact that are smooth and holonomic except at the instants of transition. Overall such systems can be nonholonomic in that the accessible configuration space can have larger dimension than the instantaneous motions allowed by the constraints. The known examples of such mechanical systems are also dissipative. By virtue of their nonholonomy and of their dissipation such systems are not Hamiltonian. Thus there is no reason to expect them to adhere to the Hamiltonian property that exponential stability of steady motions is impossible. Since nonholonomy and energy dissipation are simultaneously present in these systems, it is usually not clear whether their sometimes-observed exponential stability should be attributed solely to dissipation, to nonholonomy, or to both. However, it is shown here on the basis of one simple example, that the observed exponential stability of such systems can follow solely from the nonholonomic nature of intermittent contact and not from dissipation. In particular, it is shown that a discrete sister model of the Chaplygin sleigh, a rigid body on the plane constrained by one skate, inherits the stability eigenvalues of the smooth system even as the dissipation tends to zero. Thus it seems that discrete nonholonomy can contribute to exponential stability of mechanical systems.

  17. Trend and forecasting rate of cancer deaths at a public university hospital using univariate modeling

    NASA Astrophysics Data System (ADS)

    Ismail, A.; Hassan, Noor I.

    2013-09-01

    Cancer is one of the principal causes of death in Malaysia. This study was performed to determine the pattern of the rate of cancer deaths at a public hospital in Malaysia over an 11-year period from 2001 to 2011, to determine the best-fitted model for forecasting the rate of cancer deaths using univariate modeling, and to forecast the rates for the next two years (2012 to 2013). The medical records of patients with cancer who died at this hospital over the 11-year period were reviewed, a total of 663 cases. The cancers were classified according to the 10th Revision of the International Classification of Diseases (ICD-10). Data collected included the socio-demographic background of patients, such as registration number, age, gender, ethnicity, ward and diagnosis. Data entry and analysis were accomplished using SPSS 19.0 and Minitab 16.0. The five univariate models used were the Naïve with Trend Model, the Average Percent Change Model (APCM), Single Exponential Smoothing, Double Exponential Smoothing and Holt's Method. Over the 11 years, Malay patients accounted for the highest percentage (88.10%) of cancer deaths at this hospital compared to other ethnic groups, with males (51.30%) higher than females. Lung and breast cancers caused the most cancer deaths in each gender. About 29.60% of the patients who died of cancer were aged 61 years and above. The best univariate model for forecasting the rate of cancer deaths is the Single Exponential Smoothing technique with an alpha of 0.10. The forecast for the rate of cancer deaths is horizontal, or flat: the forecasted mortality trend remains at 6.84% from January 2012 to December 2013. Government and private sectors and non-governmental organizations need to highlight cancer issues, especially lung and breast cancers, to the public through campaigns using mass media, electronic media, posters and pamphlets in an attempt to decrease the rate of cancer deaths in Malaysia.

  18. Existence and Stability of Traveling Waves for Degenerate Reaction-Diffusion Equation with Time Delay

    NASA Astrophysics Data System (ADS)

    Huang, Rui; Jin, Chunhua; Mei, Ming; Yin, Jingxue

    2018-01-01

    This paper deals with the existence and stability of traveling wave solutions for a degenerate reaction-diffusion equation with time delay. The degeneracy of spatial diffusion together with the effect of time delay causes essential difficulties for the existence of the traveling waves and their stability. In order to treat this case, we first show the existence of smooth- and sharp-type traveling wave solutions in the case of c ≥ c^* for the degenerate reaction-diffusion equation without delay, where c^* > 0 is the critical wave speed of smooth traveling waves. Then, as a small perturbation, we obtain the existence of the smooth non-critical traveling waves for the degenerate diffusion equation with small time delay τ > 0. Furthermore, we prove the global existence and uniqueness of the C^{α,β}-solution to the time-delayed degenerate reaction-diffusion equation via compactness analysis. Finally, by the weighted energy method, we prove that the smooth non-critical traveling wave is globally stable in the weighted L^1-space. The exponential convergence rate is also derived.

  19. Existence and Stability of Traveling Waves for Degenerate Reaction-Diffusion Equation with Time Delay

    NASA Astrophysics Data System (ADS)

    Huang, Rui; Jin, Chunhua; Mei, Ming; Yin, Jingxue

    2018-06-01

    This paper deals with the existence and stability of traveling wave solutions for a degenerate reaction-diffusion equation with time delay. The degeneracy of spatial diffusion together with the effect of time delay causes essential difficulties for the existence of the traveling waves and their stability. In order to treat this case, we first show the existence of smooth- and sharp-type traveling wave solutions in the case of c ≥ c^* for the degenerate reaction-diffusion equation without delay, where c^* > 0 is the critical wave speed of smooth traveling waves. Then, as a small perturbation, we obtain the existence of the smooth non-critical traveling waves for the degenerate diffusion equation with small time delay τ > 0. Furthermore, we prove the global existence and uniqueness of the C^{α,β}-solution to the time-delayed degenerate reaction-diffusion equation via compactness analysis. Finally, by the weighted energy method, we prove that the smooth non-critical traveling wave is globally stable in the weighted L^1-space. The exponential convergence rate is also derived.

  20. Non-extensive quantum statistics with particle-hole symmetry

    NASA Astrophysics Data System (ADS)

    Biró, T. S.; Shen, K. M.; Zhang, B. W.

    2015-06-01

    Based on the Tsallis entropy (1988) and the corresponding deformed exponential function, generalized distribution functions for bosons and fermions have been in use for some time (Teweldeberhan et al., 2003; Silva et al., 2010). However, aiming at a non-extensive quantum statistics, further requirements arise from the symmetric handling of particles and holes (excitations above and below the Fermi level). Naive replacements of the exponential function or "cut and paste" solutions fail to satisfy this symmetry and to be smooth at the Fermi level at the same time. We solve this problem by a general ansatz dividing the deformed exponential into odd and even terms, and demonstrate how earlier suggestions, like the κ- and q-exponential, behave in this respect.

  1. Upper arm circumference development in Chinese children and adolescents: a pooled analysis.

    PubMed

    Tong, Fang; Fu, Tong

    2015-05-30

    Upper arm development in children differs among ethnic groups, and there have been few reports on upper arm circumference (UAC) at different stages of development in children and adolescents in China. The purpose of this study was to provide a growth reference, with a weighted assessment of the overall level of development. Using a pooled analysis, a search of an authoritative journal database, and reports of UAC, we created a new database of developmental measures in children. In a weighted analysis, we compared reference values for 0-60 months of development against the World Health Organization (WHO) statistics, considering gender and nationality, and used Z values as interval values for second sampling to obtain an exponentially smoothed curve for analyzing the mean, standard deviation, and sites of attachment. Ten articles were included in the pooled analysis, covering participants from different areas of China. The point of intersection with the WHO curve was at 3.5 years, with higher values at earlier ages and lower values at older ages. The boys' curve was steeper after puberty. The curves from the included studies merged into a compatible line. The Z values after exponential smoothing showed that the curves were similar to those for body weight and had a right normal distribution. The integrated index of UAC in Chinese children and adolescents showed slight regional variation. Exponential curve smoothing was suitable for assessment at different developmental stages.

  2. Forecasting Inflow and Outflow of Money Currency in East Java Using a Hybrid Exponential Smoothing and Calendar Variation Model

    NASA Astrophysics Data System (ADS)

    Susanti, Ana; Suhartono; Jati Setyadi, Hario; Taruk, Medi; Haviluddin; Pamilih Widagdo, Putut

    2018-03-01

    Money currency availability in Bank Indonesia can be examined through the inflow and outflow of money currency. The objective of this research is to forecast the inflow and outflow of money currency in each Representative Office (RO) of BI in East Java by using a hybrid of exponential smoothing, based on the state space approach, and a calendar variation model. The hybrid model is expected to generate more accurate forecasts. Two studies are discussed in this research: the first examines the hybrid model using simulated data containing trend, seasonal and calendar variation patterns; the second applies the hybrid model to forecasting the inflow and outflow of money currency in each RO of BI in East Java. The first set of results indicates that the exponential smoothing model cannot capture the calendar variation pattern, yielding RMSE values around 10 times the standard deviation of the error. The second set of results indicates that the hybrid model can capture the trend, seasonal and calendar variation patterns, yielding RMSE values approaching the standard deviation of the error. In the applied study, the hybrid model gives more accurate forecasts for five variables: the inflow of money currency in Surabaya, Malang and Jember, and the outflow of money currency in Surabaya and Kediri. Otherwise, the time series regression model performs better for three variables: the outflow of money currency in Malang and Jember, and the inflow of money currency in Kediri.

  3. Exponential Approximations Using Fourier Series Partial Sums

    NASA Technical Reports Server (NTRS)

    Banerjee, Nana S.; Geer, James F.

    1997-01-01

    The problem of accurately reconstructing a piece-wise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^{-M-2}), and the associated jump of the k-th derivative of f is approximated to within O(N^{-M-1+k}), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.

  4. State-space forecasting of Schistosoma haematobium time-series in Niono, Mali.

    PubMed

    Medina, Daniel C; Findley, Sally E; Doumbia, Seydou

    2008-08-13

    Much of the developing world, particularly sub-Saharan Africa, exhibits high levels of morbidity and mortality associated with infectious diseases. The incidence of Schistosoma sp.-which are neglected tropical diseases exposing and infecting more than 500 and 200 million individuals in 77 countries, respectively-is rising because of 1) numerous irrigation and hydro-electric projects, 2) steady shifts from nomadic to sedentary existence, and 3) ineffective control programs. Notwithstanding the colossal scope of these parasitic infections, less than 0.5% of Schistosoma sp. investigations have attempted to predict their spatial and or temporal distributions. Undoubtedly, public health programs in developing countries could benefit from parsimonious forecasting and early warning systems to enhance management of these parasitic diseases. In this longitudinal retrospective (01/1996-06/2004) investigation, the Schistosoma haematobium time-series for the district of Niono, Mali, was fitted with general-purpose exponential smoothing methods to generate contemporaneous on-line forecasts. These methods, which are encapsulated within a state-space framework, accommodate seasonal and inter-annual time-series fluctuations. Mean absolute percentage error values were circa 25% for 1- to 5-month horizon forecasts. The exponential smoothing state-space framework employed herein produced reasonably accurate forecasts for this time-series, which reflects the incidence of S. haematobium-induced terminal hematuria. It obliquely captured prior non-linear interactions between disease dynamics and exogenous covariates (e.g., climate, irrigation, and public health interventions), thus obviating the need for more complex forecasting methods in the district of Niono, Mali. Therefore, this framework could assist with managing and assessing S. haematobium transmission and intervention impact, respectively, in this district and potentially elsewhere in the Sahel.

  5. State–Space Forecasting of Schistosoma haematobium Time-Series in Niono, Mali

    PubMed Central

    Medina, Daniel C.; Findley, Sally E.; Doumbia, Seydou

    2008-01-01

    Background Much of the developing world, particularly sub-Saharan Africa, exhibits high levels of morbidity and mortality associated with infectious diseases. The incidence of Schistosoma sp.—which are neglected tropical diseases exposing and infecting more than 500 and 200 million individuals in 77 countries, respectively—is rising because of 1) numerous irrigation and hydro-electric projects, 2) steady shifts from nomadic to sedentary existence, and 3) ineffective control programs. Notwithstanding the colossal scope of these parasitic infections, less than 0.5% of Schistosoma sp. investigations have attempted to predict their spatial and/or temporal distributions. Undoubtedly, public health programs in developing countries could benefit from parsimonious forecasting and early warning systems to enhance management of these parasitic diseases. Methodology/Principal Findings In this longitudinal retrospective (01/1996–06/2004) investigation, the Schistosoma haematobium time-series for the district of Niono, Mali, was fitted with general-purpose exponential smoothing methods to generate contemporaneous on-line forecasts. These methods, which are encapsulated within a state–space framework, accommodate seasonal and inter-annual time-series fluctuations. Mean absolute percentage error values were circa 25% for 1- to 5-month horizon forecasts. Conclusions/Significance The exponential smoothing state–space framework employed herein produced reasonably accurate forecasts for this time-series, which reflects the incidence of S. haematobium–induced terminal hematuria. It obliquely captured prior non-linear interactions between disease dynamics and exogenous covariates (e.g., climate, irrigation, and public health interventions), thus obviating the need for more complex forecasting methods in the district of Niono, Mali. Therefore, this framework could assist with managing and assessing S. haematobium transmission and intervention impact, respectively, in this district and potentially elsewhere in the Sahel. PMID:18698361
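
    The forecasting setup described above is easy to replicate in outline. Below is a minimal sketch, assuming statsmodels' Holt-Winters implementation as a stand-in for the exponential smoothing state-space framework the authors used; the synthetic monthly series, parameter choices, and names such as cases are illustrative, not the paper's data.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.holtwinters import ExponentialSmoothing

      rng = np.random.default_rng(0)
      months = pd.date_range("1996-01", periods=102, freq="MS")  # 01/1996-06/2004
      season = 40 * np.sin(2 * np.pi * np.arange(102) / 12)      # seasonal swing
      cases = pd.Series(100 + 0.3 * np.arange(102) + season
                        + rng.normal(0, 8, 102), index=months)

      train, test = cases[:-5], cases[-5:]           # hold out a 5-month horizon
      fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                                 seasonal_periods=12).fit()
      forecast = fit.forecast(5)
      mape = float((abs(forecast - test) / test).mean() * 100)
      print(f"1- to 5-month horizon MAPE: {mape:.1f}%")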

  6. A Numerical, Literal, and Converged Perturbation Algorithm

    NASA Astrophysics Data System (ADS)

    Wiesel, William E.

    2017-09-01

    The KAM theorem and von Zeipel's method are applied to a perturbed harmonic oscillator, and it is noted that the KAM methodology does not allow for necessary frequency or angle corrections, while von Zeipel does. The KAM methodology can be carried out with purely numerical methods, since its generating function does not contain momentum dependence. The KAM iteration is extended to allow for frequency and angle changes, and in the process apparently can be successfully applied to degenerate systems normally ruled out by the classical KAM theorem. Convergence is observed to be geometric, not exponential, but it does proceed smoothly to machine precision. The algorithm produces a converged perturbation solution by numerical methods, while still retaining literal variable dependence, at least in the vicinity of a given trajectory.

  7. Efficient spectral-Galerkin algorithms for direct solution of second-order differential equations using Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E.; Bhrawy, A.

    2006-06-01

    It is well known that spectral methods (tau, Galerkin, collocation) have a condition number of O(N^4) (N is the number of retained modes of polynomial approximations). This paper presents some efficient spectral algorithms, which have a condition number of O(N^2), based on the Jacobi-Galerkin methods for second-order elliptic equations in one and two space variables. The key to the efficiency of these algorithms is to construct appropriate base functions, which lead to systems with specially structured matrices that can be efficiently inverted. The complexities of the algorithms are a small multiple of N^(d+1) operations for a d-dimensional domain with (N-1)^d unknowns, while the convergence rates of the algorithms are exponential for smooth solutions.

  8. Radiofrequency in cosmetic dermatology.

    PubMed

    Beasley, Karen L; Weiss, Robert A

    2014-01-01

    The demand for noninvasive methods of facial and body rejuvenation has experienced exponential growth over the last decade. There is a particular interest in safe and effective ways to decrease skin laxity and smooth irregular body contours and texture without downtime. These noninvasive treatments are being sought after because less time for recovery means less time lost from work and social endeavors. Radiofrequency (RF) treatments are traditionally titrated to be nonablative and are optimal for those wishing to avoid recovery time. Not only is there minimal recovery but also a high level of safety with aesthetic RF treatments.

  9. How exponential are FREDs?

    NASA Astrophysics Data System (ADS)

    Schaefer, Bradley E.; Dyson, Samuel E.

    1996-08-01

    A common Gamma-Ray Burst-light curve shape is the "FRED" or "fast-rise exponential-decay." But how exponential is the tail? Are they merely decaying with some smoothly decreasing decline rate, or is the functional form an exponential to within the uncertainties? If the shape really is an exponential, then it would be reasonable to assign some physically significant time scale to the burst. That is, there would have to be some specific mechanism that produces the characteristic decay profile. So if an exponential is found, then we will know that the decay light curve profile is governed by one mechanism (at least for simple FREDs) instead of by complex/multiple mechanisms. As such, a specific number amenable to theory can be derived for each FRED. We report on the fitting of exponentials (and two other shapes) to the tails of ten bright BATSE bursts. The BATSE trigger numbers are 105, 257, 451, 907, 1406, 1578, 1883, 1885, 1989, and 2193. Our technique was to perform a least square fit to the tail from some time after peak until the light curve approaches background. We find that most FREDs are not exponentials, although a few come close. But since the other candidate shapes come close just as often, we conclude that the FREDs are misnamed.

  10. Failure prediction using machine learning and time series in optical network.

    PubMed

    Wang, Zhilong; Zhang, Min; Wang, Danshi; Song, Chuang; Liu, Min; Li, Jin; Lou, Liqi; Liu, Zhuo

    2017-08-07

    In this paper, we propose a performance monitoring and failure prediction method in optical networks based on machine learning. The primary algorithms of this method are the support vector machine (SVM) and double exponential smoothing (DES). With a focus on risk-aware models in optical networks, the proposed protection plan primarily investigates how to predict the risk of an equipment failure. To the best of our knowledge, this important problem has not yet been fully considered. Experimental results showed that the average prediction accuracy of our method was 95% when predicting the optical equipment failure state. This finding means that our method can forecast an equipment failure risk with high accuracy. Therefore, our proposed DES-SVM method can effectively improve traditional risk-aware models to protect services from possible failures and enhance the optical network stability.
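
    A hedged sketch of the DES-SVM idea above: Brown's double exponential smoothing extrapolates a monitored equipment parameter, and a support vector machine classifies the trajectory as healthy or at-risk. The feature construction, toy data, and names here are assumptions for illustration, not the paper's pipeline.

      import numpy as np
      from sklearn.svm import SVC

      def des_forecast(x, alpha=0.4, horizon=1):
          # Brown's double exponential smoothing, one-step-ahead forecast
          s1 = s2 = x[0]
          for v in x[1:]:
              s1 = alpha * v + (1 - alpha) * s1
              s2 = alpha * s1 + (1 - alpha) * s2
          a = 2 * s1 - s2
          b = alpha / (1 - alpha) * (s1 - s2)
          return a + b * horizon

      rng = np.random.default_rng(1)
      # Toy training set: feature = [current level, DES-predicted next level]
      healthy = [10 + np.cumsum(rng.normal(0, 0.1, 50)) for _ in range(20)]
      failing = [10 - 0.2 * np.arange(50) + rng.normal(0, 0.1, 50) for _ in range(20)]
      X = [[s[-1], des_forecast(s)] for s in healthy + failing]
      y = [0] * 20 + [1] * 20
      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.predict([[healthy[0][-1], des_forecast(healthy[0])]]))  # expect [0]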

  11. Amplitude, Latency, and Peak Velocity in Accommodation and Disaccommodation Dynamics

    PubMed Central

    Papadatou, Eleni; Ferrer-Blasco, Teresa; Montés-Micó, Robert

    2017-01-01

    The aim of this work was to ascertain whether there are differences in amplitude, latency, and peak velocity of accommodation and disaccommodation responses when different analysis strategies are used to compute them, such as fitting different functions to the responses or for smoothing them prior to computing the parameters. Accommodation and disaccommodation responses from four subjects to pulse changes in demand were recorded by means of aberrometry. Three different strategies were followed to analyze such responses: fitting an exponential function to the experimental data; fitting a Boltzmann sigmoid function to the data; and smoothing the data. Amplitude, latency, and peak velocity of the responses were extracted. Significant differences were found between the peak velocity in accommodation computed by fitting an exponential function and smoothing the experimental data (mean difference 2.36 D/s). Regarding disaccommodation, significant differences were found between latency and peak velocity, calculated with the two same strategies (mean difference of 0.15 s and −3.56 D/s, resp.). The strategy used to analyze accommodation and disaccommodation responses seems to affect the parameters that describe accommodation and disaccommodation dynamics. These results highlight the importance of choosing the most adequate analysis strategy in each individual to obtain the parameters that characterize accommodation and disaccommodation dynamics. PMID:29226128

  12. Forecasting electricity usage using univariate time series models

    NASA Astrophysics Data System (ADS)

    Hock-Eam, Lim; Chee-Yin, Yip

    2014-12-01

    Electricity is one of the most important energy sources. A sufficient supply of electricity is vital to support a country's development and growth. Due to changing socio-economic characteristics, increasing competition, and deregulation of the electricity supply industry, electricity demand forecasting is even more important than before. It is imperative to evaluate and compare the predictive performance of various forecasting methods, as this provides further insight into the weaknesses and strengths of each method. In the literature, there is mixed evidence on the best forecasting method for electricity demand. This paper aims to compare the predictive performance of univariate time series models for forecasting electricity demand, using monthly data on maximum electricity load in Malaysia from January 2003 to December 2013. Results reveal that the Box-Jenkins method produces the best out-of-sample predictive performance, while the Holt-Winters exponential smoothing method is a good forecasting method for in-sample predictive performance.
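
    The comparison just described can be sketched as follows, assuming a SARIMA fit as the Box-Jenkins method and statsmodels' Holt-Winters for exponential smoothing; the synthetic load series and the model orders are placeholder assumptions, not the study's choices.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX
      from statsmodels.tsa.holtwinters import ExponentialSmoothing

      idx = pd.date_range("2003-01", periods=132, freq="MS")  # Jan 2003-Dec 2013
      rng = np.random.default_rng(2)
      load = pd.Series(1000 + 5 * np.arange(132)
                       + 80 * np.sin(2 * np.pi * np.arange(132) / 12)
                       + rng.normal(0, 20, 132), index=idx)
      train, test = load[:-12], load[-12:]          # out-of-sample: last year

      sarima = SARIMAX(train, order=(1, 1, 1),
                       seasonal_order=(0, 1, 1, 12)).fit(disp=False)
      hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                                seasonal_periods=12).fit()
      for name, fc in (("SARIMA", sarima.forecast(12)),
                       ("Holt-Winters", hw.forecast(12))):
          mape = float((abs(fc - test) / test).mean() * 100)
          print(f"{name} out-of-sample MAPE: {mape:.2f}%")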

  13. Evidence of the Exponential Decay Emission in the Swift Gamma-ray Bursts

    NASA Technical Reports Server (NTRS)

    Sakamoto, T.; Sato, G.; Hill, J.E.; Krimm, H.A.; Yamazaki, R.; Takami, K.; Swindell, S.; Osborne, J.P.

    2007-01-01

    We present a systematic study of the steep decay emission of gamma-ray bursts (GRBs) observed by the Swift X-Ray Telescope (XRT). In contrast to the analysis in recent literature, instead of extrapolating the data of Burst Alert Telescope (BAT) down into the XRT energy range, we extrapolated the XRT data up to the BAT energy range, 15-25 keV, to produce the BAT and XRT composite light curve. Based on our composite light curve fitting, we have confirmed the existence of an exponential decay component which smoothly connects the BAT prompt data to the XRT steep decay for several GRBs. We also find that the XRT steep decay for some of the bursts can be well fitted by a combination of a power-law with an exponential decay model. We discuss that this exponential component may be the emission from an external shock and a sign of the deceleration of the outflow during the prompt phase.

  14. Inventory Management for Irregular Shipment of Goods in Distribution Centre

    NASA Astrophysics Data System (ADS)

    Takeda, Hitoshi; Kitaoka, Masatoshi; Usuki, Jun

    2016-01-01

    The shipping amount of commodity goods (foods, confectionery, dairy products, and public cosmetic and pharmaceutical products) changes irregularly at a distribution center dealing with general consumer goods. Because the shipment times and shipment amounts are irregular, demand forecasting becomes very difficult, and so does inventory control. Conventional inventory control methods cannot be applied to the shipment of such commodities. This paper proposes an inventory control method based on the cumulative flow curve, in which the order quantity is decided from the curve. Three forecasting methods are proposed: 1) the power method, 2) the polynomial method, and 3) the revised Holt's linear method, a kind of exponential smoothing that forecasts data with trends. This paper compares the economics of the conventional method, which is managed by experienced staff, with the three newly proposed methods, and the effectiveness of the proposed methods is verified through numerical calculations.
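
    Method 3 above, Holt's linear (trend) exponential smoothing, follows standard recursions; here is a from-scratch sketch with arbitrary smoothing constants and toy shipment data, not the paper's revised variant or its parameters.

      def holt_linear(y, alpha=0.5, beta=0.3, horizon=3):
          # Standard Holt recursions: smoothed level plus smoothed trend
          level, trend = y[0], y[1] - y[0]      # a common initialization choice
          for v in y[1:]:
              prev_level = level
              level = alpha * v + (1 - alpha) * (level + trend)
              trend = beta * (level - prev_level) + (1 - beta) * trend
          return [level + h * trend for h in range(1, horizon + 1)]

      weekly_shipments = [120, 135, 128, 150, 160, 155, 170]   # toy data
      print(holt_linear(weekly_shipments))   # forecasts for the next 3 periods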

  15. Decay of random correlation functions for unimodal maps

    NASA Astrophysics Data System (ADS)

    Baladi, Viviane; Benedicks, Michael; Maume-Deschamps, Véronique

    2000-10-01

    Since the pioneering results of Jakobson and subsequent work by Benedicks-Carleson and others, it is known that quadratic maps f_a(x) = a - x² admit a unique absolutely continuous invariant measure for a positive measure set of parameters a. For topologically mixing f_a, Young and Keller-Nowicki independently proved exponential decay of correlation functions for this a.c.i.m. and smooth observables. We consider random compositions of small perturbations f + ω_t, with f = f_a or another unimodal map satisfying certain nonuniform hyperbolicity axioms, and ω_t chosen independently and identically in [-ε, ε]. Baladi-Viana showed exponential mixing of the associated Markov chain, i.e., averaging over all random itineraries. We obtain stretched exponential bounds for the random correlation functions of Lipschitz observables for the sample measure μ_ω of almost every itinerary.

  16. Adaptive regularization of the NL-means: application to image and video denoising.

    PubMed

    Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François

    2014-08-01

    Image denoising is a central problem in image processing and it is often a necessary step prior to higher level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image; they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This reduces significantly the noise while preserving most of the image content. While it performs well on flat areas and textures, it suffers from two opposite drawbacks: it might over-smooth low-contrasted areas or leave a residual noise around edges and singular structures. Denoising can also be performed by total variation minimization-the Rudin, Osher and Fatemi model-which leads to restore regular images, but it is prone to over-smooth textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defaults by minimizing an adaptive total variation with a nonlocal data fidelity term. Besides, this model adapts to different noise statistics and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and we adapt it to video denoising with 3D patches.

  17. Economic growth and CO2 emissions: an investigation with smooth transition autoregressive distributed lag models for the 1800-2014 period in the USA.

    PubMed

    Bildirici, Melike; Ersin, Özgür Ömer

    2018-01-01

    The study aims to combine the autoregressive distributed lag (ARDL) cointegration framework with smooth transition autoregressive (STAR)-type nonlinear econometric models for causal inference. Further, the proposed STAR distributed lag (STARDL) models offer new insights in terms of modeling nonlinearity in the long- and short-run relations between the analyzed variables. The STARDL method allows modeling and testing nonlinearity in the short-run parameters, in the long-run parameters, or in both. To this aim, the relation between CO2 emissions and economic growth rates in the USA is investigated for the 1800-2014 period, one of the largest data sets available. The proposed hybrid models, the logistic, exponential, and second-order logistic smooth transition autoregressive distributed lag (LSTARDL, ESTARDL, and LSTAR2DL) models, combine the STAR framework with nonlinear ARDL-type cointegration to augment the linear ARDL approach with smooth transitional nonlinearity. The proposed models provide a new approach to the relevant econometrics and environmental economics literature. Our results indicated the presence of asymmetric long-run and short-run relations between the analyzed variables, running from GDP towards CO2 emissions. With the newly proposed STARDL models, the results are in favor of important differences in the response of CO2 emissions in regimes 1 and 2 for the estimated LSTAR2DL and LSTARDL models.

  18. Crime prediction modeling

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasicausal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.

  19. Forecasting daily patient volumes in the emergency department.

    PubMed

    Jones, Spencer S; Thomas, Alun; Evans, R Scott; Welch, Shari J; Haug, Peter J; Snow, Gregory L

    2008-02-01

    Shifts in the supply of and demand for emergency department (ED) resources make the efficient allocation of ED resources increasingly important. Forecasting is a vital activity that guides decision-making in many areas of economic, industrial, and scientific planning, but has gained little traction in the health care industry. There are few studies that explore the use of forecasting methods to predict patient volumes in the ED. The goals of this study are to explore and evaluate the use of several statistical forecasting methods to predict daily ED patient volumes at three diverse hospital EDs and to compare the accuracy of these methods to the accuracy of a previously proposed forecasting method. Daily patient arrivals at three hospital EDs were collected for the period January 1, 2005, through March 31, 2007. The authors evaluated the use of seasonal autoregressive integrated moving average, time series regression, exponential smoothing, and artificial neural network models to forecast daily patient volumes at each facility. Forecasts were made for horizons ranging from 1 to 30 days in advance. The forecast accuracy achieved by the various forecasting methods was compared to the forecast accuracy achieved when using a benchmark forecasting method already available in the emergency medicine literature. All time series methods considered in this analysis provided improved in-sample model goodness of fit. However, post-sample analysis revealed that time series regression models that augment linear regression models by accounting for serial autocorrelation offered only small improvements in terms of post-sample forecast accuracy, relative to multiple linear regression models, while seasonal autoregressive integrated moving average, exponential smoothing, and artificial neural network forecasting models did not provide consistently accurate forecasts of daily ED volumes. This study confirms the widely held belief that daily demand for ED services is characterized by seasonal and weekly patterns. The authors compared several time series forecasting methods to a benchmark multiple linear regression model. The results suggest that the existing methodology proposed in the literature, multiple linear regression based on calendar variables, is a reasonable approach to forecasting daily patient volumes in the ED. However, the authors conclude that regression-based models that incorporate calendar variables, account for site-specific special-day effects, and allow for residual autocorrelation provide a more appropriate, informative, and consistently accurate approach to forecasting daily ED patient volumes.
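
    The benchmark the authors end up endorsing, multiple linear regression on calendar variables, is straightforward to sketch. In the sketch below, day-of-week and month dummies stand in for the full set of site-specific calendar effects, and the daily-volume data are invented.

      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LinearRegression

      days = pd.date_range("2005-01-01", "2007-03-31", freq="D")
      rng = np.random.default_rng(3)
      volume = (180 + 15 * (days.dayofweek == 0)              # busier Mondays
                + 10 * np.sin(2 * np.pi * days.dayofyear / 365)
                + rng.normal(0, 8, len(days)))

      X = pd.get_dummies(pd.DataFrame({"dow": days.dayofweek,
                                       "month": days.month}),
                         columns=["dow", "month"], drop_first=True)
      model = LinearRegression().fit(X[:-30], volume[:-30])   # hold out 30 days
      pred = model.predict(X[-30:])
      print("30-day MAE:", round(float(np.mean(np.abs(pred - volume[-30:]))), 1))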

  20. Syndromic surveillance using veterinary laboratory data: data pre-processing and algorithm performance evaluation

    PubMed Central

    Dórea, Fernanda C.; McEwen, Beverly J.; McNab, W. Bruce; Revie, Crawford W.; Sanchez, Javier

    2013-01-01

    Diagnostic test orders to an animal laboratory were explored as a data source for monitoring trends in the incidence of clinical syndromes in cattle. Four years of real data and over 200 simulated outbreak signals were used to compare pre-processing methods that could remove temporal effects in the data, as well as temporal aberration detection algorithms that provided high sensitivity and specificity. Weekly differencing demonstrated solid performance in removing day-of-week effects, even in series with low daily counts. For aberration detection, the results indicated that no single algorithm showed performance superior to all others across the range of outbreak scenarios simulated. Exponentially weighted moving average charts and Holt–Winters exponential smoothing demonstrated complementary performance, with the latter offering an automated method to adjust to changes in the time series that will likely occur in the future. Shewhart charts provided lower sensitivity but earlier detection in some scenarios. Cumulative sum charts did not appear to add value to the system; however, the poor performance of this algorithm was attributed to characteristics of the data monitored. These findings indicate that automated monitoring aimed at early detection of temporal aberrations will likely be most effective when a range of algorithms are implemented in parallel. PMID:23576782

  1. Syndromic surveillance using veterinary laboratory data: data pre-processing and algorithm performance evaluation.

    PubMed

    Dórea, Fernanda C; McEwen, Beverly J; McNab, W Bruce; Revie, Crawford W; Sanchez, Javier

    2013-06-06

    Diagnostic test orders to an animal laboratory were explored as a data source for monitoring trends in the incidence of clinical syndromes in cattle. Four years of real data and over 200 simulated outbreak signals were used to compare pre-processing methods that could remove temporal effects in the data, as well as temporal aberration detection algorithms that provided high sensitivity and specificity. Weekly differencing demonstrated solid performance in removing day-of-week effects, even in series with low daily counts. For aberration detection, the results indicated that no single algorithm showed performance superior to all others across the range of outbreak scenarios simulated. Exponentially weighted moving average charts and Holt-Winters exponential smoothing demonstrated complementary performance, with the latter offering an automated method to adjust to changes in the time series that will likely occur in the future. Shewhart charts provided lower sensitivity but earlier detection in some scenarios. Cumulative sum charts did not appear to add value to the system; however, the poor performance of this algorithm was attributed to characteristics of the data monitored. These findings indicate that automated monitoring aimed at early detection of temporal aberrations will likely be most effective when a range of algorithms are implemented in parallel.
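
    Two pieces of the pipeline described above, weekly differencing and an exponentially weighted moving average (EWMA) chart, can be sketched as follows; the smoothing constant, the 3-sigma limit, and the injected outbreak are conventional illustrative choices rather than the study's tuned values.

      import numpy as np

      rng = np.random.default_rng(4)
      base = np.tile([30, 32, 31, 33, 30, 12, 10], 60)   # day-of-week pattern
      counts = rng.poisson(base).astype(float)
      counts[-5:] += 25                                  # injected outbreak signal

      diffed = counts[7:] - counts[:-7]                  # weekly differencing
      lam = 0.3
      mu, sigma = diffed[:-30].mean(), diffed[:-30].std()
      limit = 3 * sigma * np.sqrt(lam / (2 - lam))       # steady-state EWMA limit
      ewma, alarms = mu, []
      for t, x in enumerate(diffed):
          ewma = lam * x + (1 - lam) * ewma
          if ewma - mu > limit:
              alarms.append(t)
      print("first alarm at index:", alarms[0] if alarms else None)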

  2. Forecasting Performance of Grey Prediction for Education Expenditure and School Enrollment

    ERIC Educational Resources Information Center

    Tang, Hui-Wen Vivian; Yin, Mu-Shang

    2012-01-01

    GM(1,1) and GM(1,1) rolling models derived from grey system theory were estimated using time-series data from projection studies by the National Center for Education Statistics (NCES). An out-of-sample forecasting competition between the two grey prediction models and the exponential smoothing method used by NCES was conducted for education expenditure and…

  3. Penalized nonparametric scalar-on-function regression via principal coordinates

    PubMed Central

    Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu

    2016-01-01

    A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963

  4. [Shock shape representation of sinus heart rate based on cloud model].

    PubMed

    Yin, Wenfeng; Zhao, Jie; Chen, Tiantian; Zhang, Junjian; Zhang, Chunyou; Li, Dapeng; An, Baijing

    2014-04-01

    The aim of this paper is to analyze the trend of the sinus heart rate RR-interval sequence after a single ventricular premature beat, and to compare it with the two turbulence parameters, turbulence onset (TO) and turbulence slope (TS). After acquiring sinus rhythm shock samples, we use a piecewise linearization method to extract their linear characteristics, and then describe the shock shape in natural language through a cloud model. During acquisition, we use the exponential smoothing method to forecast the position where the next QRS wave may appear, to assist QRS wave detection, and use a template to judge whether the current beat is sinus rhythm. We chose signals from the MIT-BIH Arrhythmia Database to test in Matlab whether the algorithm is effective. The results show that our method can correctly detect the changing trend of sinus heart rate. The proposed method achieves real-time detection of sinus rhythm shocks, is simple and easily implemented, and is effective as a supplementary method.
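
    The exponential smoothing step described here, predicting where the next QRS complex should appear from recent RR intervals, reduces to the basic simple-exponential-smoothing recursion; the toy sketch below (invented RR values, arbitrary alpha) shows the idea.

      def ses(values, alpha=0.3):
          # Simple exponential smoothing; the final state is the next-step forecast
          s = values[0]
          for v in values[1:]:
              s = alpha * v + (1 - alpha) * s
          return s

      rr_ms = [810, 805, 820, 815, 800, 812]   # toy RR intervals in milliseconds
      print("search for the next QRS about", round(ses(rr_ms)),
            "ms after the last detected beat")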

  5. An extension of the finite cell method using boolean operations

    NASA Astrophysics Data System (ADS)

    Abedian, Alireza; Düster, Alexander

    2017-05-01

    In the finite cell method, the fictitious domain approach is combined with high-order finite elements. The geometry of the problem is taken into account by integrating the finite cell formulation over the physical domain to obtain the corresponding stiffness matrix and load vector. In this contribution, an extension of the FCM is presented wherein both the physical and fictitious domain of an element are simultaneously evaluated during the integration. In the proposed extension of the finite cell method, the contribution of the stiffness matrix over the fictitious domain is subtracted from the cell, resulting in the desired stiffness matrix which reflects the contribution of the physical domain only. This method results in an exponential rate of convergence for porous domain problems with a smooth solution and accurate integration. In addition, it reduces the computational cost, especially when applying adaptive integration schemes based on the quadtree/octree. Based on 2D and 3D problems of linear elastostatics, numerical examples serve to demonstrate the efficiency and accuracy of the proposed method.

  6. A spectral boundary integral equation method for the 2-D Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    In this paper, we present a new numerical formulation of solving the boundary integral equations reformulated from the Helmholtz equation. The boundaries of the problems are assumed to be smooth closed contours. The solution on the boundary is treated as a periodic function, which is in turn approximated by a truncated Fourier series. A Fourier collocation method is followed in which the boundary integral equation is transformed into a system of algebraic equations. It is shown that in order to achieve spectral accuracy for the numerical formulation, the nonsmoothness of the integral kernels, associated with the Helmholtz equation, must be carefully removed. The emphasis of the paper is on investigating the essential elements of removing the nonsmoothness of the integral kernels in the spectral implementation. The present method is robust for a general boundary contour. Aspects of efficient implementation of the method using FFT are also discussed. A numerical example of wave scattering is given in which the exponential accuracy of the present numerical method is demonstrated.

  7. Galaxy Zoo: evidence for diverse star formation histories through the green valley

    NASA Astrophysics Data System (ADS)

    Smethurst, R. J.; Lintott, C. J.; Simmons, B. D.; Schawinski, K.; Marshall, P. J.; Bamford, S.; Fortson, L.; Kaviraj, S.; Masters, K. L.; Melvin, T.; Nichol, R. C.; Skibba, R. A.; Willett, K. W.

    2015-06-01

    Does galaxy evolution proceed through the green valley via multiple pathways or as a single population? Motivated by recent results highlighting radically different evolutionary pathways between early- and late-type galaxies, we present results from a simple Bayesian approach to this problem wherein we model the star formation history (SFH) of a galaxy with two parameters, [t, τ] and compare the predicted and observed optical and near-ultraviolet colours. We use a novel method to investigate the morphological differences between the most probable SFHs for both disc-like and smooth-like populations of galaxies, by using a sample of 126 316 galaxies (0.01 < z < 0.25) with probabilistic estimates of morphology from Galaxy Zoo. We find a clear difference between the quenching time-scales preferred by smooth- and disc-like galaxies, with three possible routes through the green valley dominated by smooth- (rapid time-scales, attributed to major mergers), intermediate- (intermediate time-scales, attributed to minor mergers and galaxy interactions) and disc-like (slow time-scales, attributed to secular evolution) galaxies. We hypothesize that morphological changes occur in systems which have undergone quenching with an exponential time-scale τ < 1.5 Gyr, in order for the evolution of galaxies in the green valley to match the ratio of smooth to disc galaxies observed in the red sequence. These rapid time-scales are instrumental in the formation of the red sequence at earlier times; however, we find that galaxies currently passing through the green valley typically do so at intermediate time-scales.

  8. Prediction of a service demand using combined forecasting approach

    NASA Astrophysics Data System (ADS)

    Zhou, Ling

    2017-08-01

    Forecasting facilitates cutting down operational and management costs while ensuring the service level for a logistics service provider. Our case study investigates how to forecast short-term logistics demand for an LTL carrier. A combined approach depends on several forecasting methods simultaneously instead of a single method; it can offset the weakness of one forecasting method with the strength of another, which can improve the precision of the prediction. The main issues in combined forecast modeling are how to select the methods to combine and how to determine the weight coefficients among them. The principles of method selection are that each method should be applicable to the forecasting problem itself, and that the methods should differ in character as much as possible. Based on these principles, exponential smoothing, ARIMA and a neural network are chosen to form the combined approach. Besides, the least squares technique is employed to determine the optimal weight coefficients among the forecasting methods. Simulation results show the advantage of the combined approach over the three single methods. The work helps managers select a prediction method in practice.
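
    The least-squares weighting step can be sketched directly: stack the in-sample forecasts of the component methods as columns and regress the actuals on them. Renormalizing the weights to sum to one is an assumption of this sketch; the paper's exact constraint set may differ, and the component forecasts below are simulated rather than fitted.

      import numpy as np

      rng = np.random.default_rng(5)
      y = 100 + np.cumsum(rng.normal(0, 2, 80))        # actual demand series
      F = np.column_stack([y + rng.normal(0, 4, 80),   # e.g. exponential smoothing
                           y + rng.normal(0, 6, 80),   # e.g. ARIMA
                           y + rng.normal(0, 5, 80)])  # e.g. neural network

      w, *_ = np.linalg.lstsq(F, y, rcond=None)        # unconstrained LS weights
      w = w / w.sum()                                  # renormalize to sum to one
      combined = F @ w
      rmse = float(np.sqrt(np.mean((combined - y) ** 2)))
      print("weights:", np.round(w, 3), "| combined RMSE:", round(rmse, 2))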

  9. Galileon bounce after ekpyrotic contraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osipov, M.; Rubakov, V., E-mail: osipov@ms2.inr.ac.ru, E-mail: rubakov@ms2.inr.ac.ru

    We consider a simple cosmological model that includes a long ekpyrotic contraction stage and smooth bounce after it. Ekpyrotic behavior is due to a scalar field with a negative exponential potential, whereas the Galileon field produces bounce. We give an analytical picture of how the bounce occurs within the weak gravity regime, and then perform numerical analysis to extend our results to a non-perturbative regime.

  10. An Optimization of Inventory Demand Forecasting in University Healthcare Centre

    NASA Astrophysics Data System (ADS)

    Bon, A. T.; Ng, T. K.

    2017-01-01

    The healthcare industry has become an important field nowadays, as it concerns people's health. Accordingly, forecasting demand for health services is an important step in managerial decision making for all healthcare organizations. Hence, a case study was conducted in the University Health Centre to collect historical demand data for Panadol 650mg over 68 months, from January 2009 until August 2014. The aim of the research is to optimize the overall inventory demand through forecasting techniques. Quantitative, time series forecasting models were used in the case study to forecast future data as a function of past data. The data pattern needs to be identified before applying the forecasting techniques; here the pattern is a trend. Ten forecasting techniques were then applied using Risk Simulator software: single moving average, single exponential smoothing, double moving average, double exponential smoothing, regression, Holt-Winter's additive, seasonal additive, Holt-Winter's multiplicative, seasonal multiplicative, and Autoregressive Integrated Moving Average (ARIMA). Lastly, the best forecasting technique was identified as the one with the least forecasting error; according to the forecasting accuracy measurement, it is regression analysis.

  11. Finite-temperature effects in helical quantum turbulence

    NASA Astrophysics Data System (ADS)

    Clark Di Leoni, Patricio; Mininni, Pablo D.; Brachet, Marc E.

    2018-04-01

    We perform a study of the evolution of helical quantum turbulence at different temperatures by solving numerically the Gross-Pitaevskii and the stochastic Ginzburg-Landau equations, using up to 4096^3 grid points with a pseudospectral method. We show that for temperatures close to the critical one, the fluid described by these equations can act as a classical viscous flow, with the decay of the incompressible kinetic energy and the helicity becoming exponential. The transition from this behavior to the one observed at zero temperature is smooth as a function of temperature. Moreover, the presence of strong thermal effects can inhibit the development of a proper turbulent cascade. We provide Ansätze for the effective viscosity and friction as a function of the temperature.

  12. A simulation-based approach for solving assembly line balancing problem

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoyu

    2017-09-01

    The assembly line balancing problem is directly related to production efficiency; it has been discussed since the last century, and many people are still studying this topic. In this paper, the problem is studied by establishing a mathematical model and a simulation. Firstly, the model for determining the smallest production beat for a given number of work stations is analyzed. Based on this model, the exponential smoothing approach is applied to improve the algorithm's efficiency. After this groundwork, the gas Stirling engine assembly line balancing problem is discussed as a case study. Both algorithms are implemented in the Lingo programming environment, and the simulation results demonstrate the validity of the new methods.

  13. Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data

    PubMed Central

    Young, Alistair A.; Li, Xiaosong

    2014-01-01

    Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely, two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVM outperforms the ARIMA model and decomposition methods in most cases. PMID:24505382
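
    For reference, the three evaluation metrics used in this comparison, written out explicitly; y_true and y_pred below are placeholder arrays, not the study's data.

      import numpy as np

      def mae(y_true, y_pred):                  # mean absolute error
          return np.mean(np.abs(y_true - y_pred))

      def mape(y_true, y_pred):                 # mean absolute percentage error
          return np.mean(np.abs((y_true - y_pred) / y_true)) * 100  # y_true nonzero

      def mse(y_true, y_pred):                  # mean square error
          return np.mean((y_true - y_pred) ** 2)

      y_true = np.array([120.0, 130.0, 125.0, 140.0])
      y_pred = np.array([118.0, 134.0, 121.0, 138.0])
      print(mae(y_true, y_pred), mape(y_true, y_pred), mse(y_true, y_pred))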

  14. MIND Demons: Symmetric Diffeomorphic Deformable Registration of MR and CT for Image-Guided Spine Surgery.

    PubMed

    Reaungamornrat, Sureerat; De Silva, Tharindu; Uneri, Ali; Vogt, Sebastian; Kleinszig, Gerhard; Khanna, Akhil J; Wolinsky, Jean-Paul; Prince, Jerry L; Siewerdsen, Jeffrey H

    2016-11-01

    Intraoperative localization of target anatomy and critical structures defined in preoperative MR/CT images can be achieved through the use of multimodality deformable registration. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality-independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. The method, called MIND Demons, finds a deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the integrated velocity fields, a modality-insensitive similarity function suitable to multimodality images, and smoothness on the diffeomorphisms themselves. Direct optimization without relying on the exponential map and stationary velocity field approximation used in conventional diffeomorphic Demons is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, normalized MI (NMI) Demons, and MIND with a diffusion-based registration method (MIND-elastic). The method yielded sub-voxel invertibility (0.008 mm) and nonzero-positive Jacobian determinants. It also showed improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.7 mm compared to 11.3, 3.1, 5.6, and 2.4 mm for MI FFD, LMI FFD, NMI Demons, and MIND-elastic methods, respectively. Validation in clinical studies demonstrated realistic deformations with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine.

  15. MIND Demons: Symmetric Diffeomorphic Deformable Registration of MR and CT for Image-Guided Spine Surgery

    PubMed Central

    Reaungamornrat, Sureerat; De Silva, Tharindu; Uneri, Ali; Vogt, Sebastian; Kleinszig, Gerhard; Khanna, Akhil J; Wolinsky, Jean-Paul; Prince, Jerry L.

    2016-01-01

    Intraoperative localization of target anatomy and critical structures defined in preoperative MR/CT images can be achieved through the use of multimodality deformable registration. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality-independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. The method, called MIND Demons, finds a deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the integrated velocity fields, a modality-insensitive similarity function suitable to multimodality images, and smoothness on the diffeomorphisms themselves. Direct optimization without relying on the exponential map and stationary velocity field approximation used in conventional diffeomorphic Demons is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, normalized MI (NMI) Demons, and MIND with a diffusion-based registration method (MIND-elastic). The method yielded sub-voxel invertibility (0.008 mm) and nonzero-positive Jacobian determinants. It also showed improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.7 mm compared to 11.3, 3.1, 5.6, and 2.4 mm for MI FFD, LMI FFD, NMI Demons, and MIND-elastic methods, respectively. Validation in clinical studies demonstrated realistic deformations with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. PMID:27295656

  16. Nonlinear analogue of the May−Wigner instability transition

    PubMed Central

    Fyodorov, Yan V.; Khoruzhenko, Boris A.

    2016-01-01

    We study a system of N≫1 degrees of freedom coupled via a smooth homogeneous Gaussian vector field with both gradient and divergence-free components. In the absence of coupling, the system is exponentially relaxing to an equilibrium with rate μ. We show that, while increasing the ratio of the coupling strength to the relaxation rate, the system experiences an abrupt transition from a topologically trivial phase portrait with a single equilibrium into a topologically nontrivial regime characterized by an exponential number of equilibria, the vast majority of which are expected to be unstable. It is suggested that this picture provides a global view on the nature of the May−Wigner instability transition originally discovered by local linear stability analysis. PMID:27274077

  17. Vortex equations: Singularities, numerical solution, and axisymmetric vortex breakdown

    NASA Technical Reports Server (NTRS)

    Bossel, H. H.

    1972-01-01

    A method of weighted residuals for the computation of rotationally symmetric quasi-cylindrical viscous incompressible vortex flow is presented and used to compute a wide variety of vortex flows. The method approximates the axial velocity and circulation profiles by series of exponentials having (N + 1) and N free parameters, respectively. Formal integration results in a set of (2N + 1) ordinary differential equations for the free parameters. The governing equations are shown to have an infinite number of discrete singularities corresponding to critical values of the swirl parameters. The computations point to the controlling influence of the inner core flow on vortex behavior. They also confirm the existence of two particular critical swirl parameter values: one separates vortex flow which decays smoothly from vortex flow which eventually breaks down, and the second is the first singularity of the quasi-cylindrical system, at which point physical vortex breakdown is thought to occur.

  18. Forecasting in foodservice: model development, testing, and evaluation.

    PubMed

    Miller, J L; Thompson, P A; Orabella, M M

    1991-05-01

    This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spread-sheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
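
    The two-stage forecast described, a deseasonalized simple exponential smoothing of customer count multiplied by a predicted preference statistic, is compact enough to sketch; the counts and the preference fraction below are invented for illustration.

      counts = [420, 455, 430, 470, 460]     # deseasonalized daily customer counts
      alpha, s = 0.4, counts[0]
      for c in counts[1:]:
          s = alpha * c + (1 - alpha) * s    # SES forecast of the next day's count

      preference = 0.18   # fraction of customers historically choosing the item
      print("expected portions of the menu item:", round(s * preference))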

  19. The Feasibility of Using Computer and Internet in Teaching Family Education for the 8th Grade Class

    ERIC Educational Resources Information Center

    Alluhaydan, Nuwayyir Saleh F.

    2016-01-01

    Over the decades, the concepts of holons and holonic systems have been adopted in many research fields, but they are scarcely attempted on labour planning. A literature gap exists, thus motivating the author to come up with a holonic model that uses exponential smoothing to…

  20. Computation of aerodynamic interference effects on oscillating airfoils with controls in ventilated subsonic wind tunnels

    NASA Technical Reports Server (NTRS)

    Fromme, J. A.; Golberg, M. A.

    1979-01-01

    Lift interference effects are discussed based on Bland's (1968) integral equation. A mathematical existence theory is utilized for which convergence of the numerical method has been proved for general (square-integrable) downwashes. Airloads are computed using orthogonal airfoil polynomial pairs in conjunction with a collocation method which is numerically equivalent to Galerkin's method and complex least squares. Convergence exhibits exponentially decreasing error with the number n of collocation points for smooth downwashes, whereas errors are proportional to 1/n for discontinuous downwashes. The latter can be reduced to 1/n^(m+1) with mth-order Richardson extrapolation (by using m = 2, hundredfold error reductions were obtained with only a 13% increase of computer time). Numerical results are presented showing acoustic resonance, as well as the effect of Mach number, ventilation, height-to-chord ratio, and mode shape on wind-tunnel interference. Excellent agreement with experiment is obtained in steady flow, and good agreement is obtained for unsteady flow.

  1. Traveling wavefront solutions to nonlinear reaction-diffusion-convection equations

    NASA Astrophysics Data System (ADS)

    Indekeu, Joseph O.; Smets, Ruben

    2017-08-01

    Physically motivated modified Fisher equations are studied in which nonlinear convection and nonlinear diffusion are allowed for besides the usual growth and spread of a population. It is pointed out that in a large variety of cases separable functions in the form of exponentially decaying sharp wavefronts solve the differential equation exactly, provided a co-moving point source or sink is active at the wavefront. The velocity dispersion and front steepness may differ from those of some previously studied exact smooth traveling wave solutions. For an extension of the reaction-diffusion-convection equation, featuring a memory effect in the form of a maturity delay for growth and spread, smooth exact wavefront solutions are also obtained. The stability of the solutions is verified analytically and numerically.

  2. Large and small-scale structures and the dust energy balance problem in spiral galaxies

    NASA Astrophysics Data System (ADS)

    Saftly, W.; Baes, M.; De Geyter, G.; Camps, P.; Renaud, F.; Guedes, J.; De Looze, I.

    2015-04-01

    The interstellar dust content in galaxies can be traced in extinction at optical wavelengths, or in emission in the far-infrared. Several studies have found that radiative transfer models that successfully explain the optical extinction in edge-on spiral galaxies generally underestimate the observed FIR/submm fluxes by a factor of about three. In order to investigate this so-called dust energy balance problem, we use two Milky Way-like galaxies produced by high-resolution hydrodynamical simulations. We create mock optical edge-on views of these simulated galaxies (using the radiative transfer code SKIRT), and we then fit the parameters of a basic spiral galaxy model to these images (using the fitting code FitSKIRT). The basic model includes smooth axisymmetric distributions, with a Sérsic bulge and an exponential disc for the stars, and a second exponential disc for the dust. We find that the dust mass recovered by the fitted models is about three times smaller than the known dust mass of the hydrodynamical input models. This factor is in agreement with previous energy balance studies of real edge-on spiral galaxies. On the other hand, fitting the same basic model to less complex input models (e.g. a smooth exponential disc with a spiral perturbation or with random clumps), does recover the dust mass of the input model almost perfectly. Thus it seems that the complex asymmetries and the inhomogeneous structure of real and hydrodynamically simulated galaxies are a lot more efficient at hiding dust than the rather contrived geometries in typical quasi-analytical models. This effect may help explain the discrepancy between the dust emission predicted by radiative transfer models and the observed emission in energy balance studies for edge-on spiral galaxies.

  3. A 20-year period of orthotopic liver transplantation activity in a single center: a time series analysis performed using the R Statistical Software.

    PubMed

    Santori, G; Andorno, E; Morelli, N; Casaccia, M; Bottino, G; Di Domenico, S; Valente, U

    2009-05-01

    In many Western countries a "minimum volume rule" policy has been adopted as a quality measure for complex surgical procedures. In Italy, the National Transplant Centre set the minimum number of orthotopic liver transplantation (OLT) procedures/y at 25/center. OLT procedures performed in a single center for a reasonably large period may be treated as a time series to evaluate trend, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1987 and December 31, 2006, we performed 563 cadaveric donor OLTs to adult recipients. During 2007, there were another 28 procedures. The greatest numbers of OLTs/y were performed in 2001 (n = 51), 2005 (n = 50), and 2004 (n = 49). A time series analysis performed using R Statistical Software (Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an incremental trend after exponential smoothing as well as after seasonal decomposition. The predicted OLT/mo for 2007 calculated with the Holt-Winters exponential smoothing applied to the previous period 1987-2006 helped to identify the months where there was a major difference between predicted and performed procedures. The time series approach may be helpful to establish a minimum volume/y at a single-center level.
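
    The paper ran its seasonal decomposition and Holt-Winters forecasts in the R environment; the sketch below is a rough Python analogue using statsmodels, with a synthetic monthly OLT-count series standing in for the center's data.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.seasonal import seasonal_decompose
      from statsmodels.tsa.holtwinters import ExponentialSmoothing

      idx = pd.date_range("1987-01", periods=240, freq="MS")   # 1987-2006
      rng = np.random.default_rng(6)
      lam = 2 + 0.01 * np.arange(240) + np.tile([0, 0, 1, 1, 1, 0,
                                                 0, 0, 1, 1, 0, 0], 20)
      olt = pd.Series(rng.poisson(lam), index=idx).astype(float)

      trend = seasonal_decompose(olt, model="additive").trend  # incremental trend
      print(trend.dropna().tail(3))
      fit = ExponentialSmoothing(olt, trend="add", seasonal="add",
                                 seasonal_periods=12).fit()
      print(fit.forecast(12).round(1))      # predicted OLT per month for 2007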

  4. Elastic wave generated by granular impact on rough and erodible surfaces

    NASA Astrophysics Data System (ADS)

    Bachelet, Vincent; Mangeney, Anne; de Rosny, Julien; Toussaint, Renaud; Farin, Maxime

    2018-01-01

    The elastic waves generated by impactors hitting rough and erodible surfaces are studied. For this purpose, beads of variable materials, diameters, and velocities are dropped on (i) a smooth PMMA plate, (ii) glass beads stuck on the PMMA plate to create roughness, and (iii) the rough plate covered with layers of free particles to investigate erodible beds. The validity of the Hertz model for describing impacts on a smooth surface is confirmed. For rough and erodible surfaces, an empirical scaling law that relates the elastic energy to the radius Rb and normal velocity Vz of the impactor is deduced from experimental data. In addition, the radiated elastic energy is found to decrease exponentially with respect to the bed thickness. Lastly, we show that the variability of the elastic energy among shocks increases from a few percent to 70% between smooth and erodible surfaces. This work is a first step towards better quantifying seismic emissions of rock impacts in natural environments, in particular on unconsolidated soils.

  5. Investigation of advanced UQ for CRUD prediction with VIPRE.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldred, Michael Scott

    2011-09-01

    This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.

  6. Pursuit Latency for Chromatic Targets

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.; Ellis, Stephen R. (Technical Monitor)

    1998-01-01

    The temporal dynamics of eye movement response to a change in direction of stimulus motion has been used to compare the processing speeds of different types of stimuli (Mulligan, ARVO '97). In this study, the pursuit response to colored targets was measured to test the hypothesis that the slow response of the chromatic system (as measured using traditional temporal sensitivity measures such as contrast sensitivity) results in increased eye movement latencies. Subjects viewed a small (0.4 deg) Gaussian spot which moved downward at a speed of 6.6 deg/sec. At a variable time during the trajectory, the dot's direction of motion changed by 30 degrees, either to the right or left. Subjects were instructed to pursue the spot. Eye movements were measured using a video ophthalmoscope with an angular resolution of approximately 1 arc min and a temporal sampling rate of 60 Hz. Stimuli were modulated in chrominance for a variety of hue directions, combined with a range of small luminance increments and decrements, to ensure that some of the stimuli fell in the subjects' equiluminance planes. The smooth portions of the resulting eye movement traces were fit by convolving the stimulus velocity with an exponential having variable onset latency, time constant and amplitude. Smooth eye movements with few saccades were observed for all stimuli. Pursuit responses to stimuli having a significant luminance component are well-fit by exponentials having latencies and time constants on the order of 100 msec. Increases in pursuit response latency on the order of 100-200 msec are observed in response to certain stimuli, which occur in pairs of complementary hues, corresponding to the intersection of the stimulus section with the subjects' equiluminant plane. Smooth eye movements can be made in response to purely chromatic stimuli, but are slower than responses to stimuli with a luminance component.

  7. Enhanced Response Time of Electrowetting Lenses with Shaped Input Voltage Functions.

    PubMed

    Supekar, Omkar D; Zohrabi, Mo; Gopinath, Juliet T; Bright, Victor M

    2017-05-16

    Adaptive optical lenses based on the electrowetting principle are being rapidly implemented in many applications, such as microscopy, remote sensing, displays, and optical communication. To characterize the response of these electrowetting lenses, the dependence upon direct current (DC) driving voltage functions was investigated in a low-viscosity liquid system. Cylindrical lenses with inner diameters of 2.45 and 3.95 mm were used to characterize the dynamic behavior of the liquids under DC voltage electrowetting actuation. Increasing the rise time of the exponential input driving voltage damps the originally underdamped system response, enabling a smooth response from the lens. We experimentally determined the optimal rise times for the fastest response from the lenses. We also performed numerical simulations of the lens actuation with an input exponential driving voltage to understand the variation in the dynamics of the liquid-liquid interface with various input rise times. We further enhanced the response time of the devices by shaping the input voltage function with multiple exponential rise times. For the 3.95 mm inner diameter lens, we achieved a response time improvement of 29% compared to the fastest response obtained using a single-exponential driving voltage. The technique shows great promise for applications that require fast response times.
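
    As a rough illustration of the voltage-shaping idea, a driving signal can be composed of several exponential rise terms with different time constants. The functional form and parameters below are assumptions for illustration; the paper's optimized rise times are not reproduced here.

      import numpy as np

      def shaped_voltage(t, v_final, taus, weights):
          # V(t) = v_final * sum_i w_i * (1 - exp(-t / tau_i)) with sum_i w_i = 1,
          # so V(0) = 0 and V(t) -> v_final as t grows.
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()
          return v_final * sum(wi * (1.0 - np.exp(-t / ti)) for wi, ti in zip(w, taus))

      t = np.linspace(0.0, 50e-3, 500)           # 50 ms window
      v = shaped_voltage(t, v_final=40.0, taus=[2e-3, 10e-3], weights=[0.6, 0.4])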

  8. Predicting hepatitis B monthly incidence rates using weighted Markov chains and time series methods.

    PubMed

    Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam

    2015-01-01

    Hepatitis B (HB) is a major cause of global mortality. Accurately predicting the trend of the disease can inform health policy for disease prevention. This paper aimed to apply three different methods to predict monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. The Weighted Markov Chain (WMC) method, based on Markov chain theory, and two time series models, Holt Exponential Smoothing (HES) and SARIMA, were applied to the data. The methods were compared by the percentage of correctly predicted incidence rates. The monthly incidence rates were clustered into two clusters serving as the states of the Markov chain. The percentages of correct predictions for the first and second clusters were (100, 0) for WMC, (84, 67) for HES, and (79, 47) for SARIMA. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the three models indicated that, given the existing seasonal trend and non-stationarity, HES produced the most accurate predictions of the incidence rates.

  9. Methods to enhance seismic faults and construct fault surfaces

    NASA Astrophysics Data System (ADS)

    Wu, Xinming; Zhu, Zhihui

    2017-10-01

    Faults are often apparent as reflector discontinuities in a seismic volume. Numerous types of fault attributes have been proposed to highlight fault positions from a seismic volume by measuring reflection discontinuities. These attribute volumes, however, can be sensitive to noise and stratigraphic features that are also apparent as discontinuities in a seismic volume. We propose a matched filtering method to enhance a precomputed fault attribute volume, and simultaneously estimate fault strikes and dips. In this method, a set of efficient 2D exponential filters, oriented by all possible combinations of strike and dip angles, are applied to the input attribute volume to find the maximum filtering responses at all samples in the volume. These maximum filtering responses are recorded to obtain the enhanced fault attribute volume, while the corresponding strike and dip angles that yield the maximum filtering responses are recorded to obtain volumes of fault strikes and dips. By doing this, we assume that a fault surface is locally planar, and a 2D smoothing filter will yield a maximum response if the smoothing plane coincides with a local fault plane. With the enhanced fault attribute volume and the estimated fault strike and dip volumes, we then compute oriented fault samples on the ridges of the enhanced fault attribute volume, and each sample is oriented by the estimated fault strike and dip. Fault surfaces can be constructed by directly linking the oriented fault samples with consistent fault strikes and dips. For complicated cases with missing fault samples and noisy samples, we further propose to use a perceptual grouping method to infer fault surfaces that reasonably fit the positions and orientations of the fault samples. We apply these methods to 3D synthetic and real examples and successfully extract multiple intersecting fault surfaces and complete fault surfaces without holes.

  10. Statistical Optimality in Multipartite Ranking and Ordinal Regression.

    PubMed

    Uematsu, Kazuki; Lee, Yoonkyung

    2015-05-01

    Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk, which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that, for a certain class of convex loss functions including exponential loss, the optimal ranking function can be represented as a ratio of the weighted conditional probability of upper categories to that of lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods such as the proportional odds model in statistics with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth, list-wise ranking measures such as the discounted cumulative gain, and on preference learning. We illustrate our findings with a simulation study and real data analysis.

  11. Practical remarks on the heart rate and saturation measurement methodology

    NASA Astrophysics Data System (ADS)

    Kowal, M.; Kubal, S.; Piotrowski, P.; Staniec, K.

    2017-05-01

    A surface reflection-based method for measuring heart rate and saturation has been introduced as one having a significant advantage over legacy methods in that it lends itself to use in special applications such as those where a person's mobility is of prime importance (e.g. during a miner's work), excluding the use of traditional clips. Then, a complete ATmega1281-based microcontroller platform is described for performing the computational tasks of signal processing and wireless transmission. The next section provides remarks on the basic signal processing rules, beginning with raw voltage samples of the converted optical signals and their acquisition, storage, and smoothing. This section ends with practical remarks demonstrating an exponential dependence between the minimum measurable heart rate and the readout resolution at different sampling frequencies for different averaging depths (in bits). The following section is devoted strictly to heart rate and hemoglobin oxygenation (saturation) measurement with the presented platform, referenced against measurements obtained with a stationary certified pulse oximeter.

  12. First-order transitions and thermodynamic properties in the 2D Blume-Capel model: the transfer-matrix method revisited

    NASA Astrophysics Data System (ADS)

    Jung, Moonjung; Kim, Dong-Hee

    2017-12-01

    We investigate the first-order transition in the spin-1 two-dimensional Blume-Capel model on square lattices by revisiting the transfer-matrix method. With strip widths increased up to 18 sites, we construct a detailed phase coexistence curve which shows excellent quantitative agreement with recent advanced Monte Carlo results. Deep in the first-order area, we observe exponential system-size scaling of the spectral gap of the transfer matrix, from which a linearly increasing interfacial tension is deduced with decreasing temperature. We find that the first-order signature at low temperatures is strongly pronounced, with much suppressed finite-size influence, in the examined thermodynamic properties of entropy, non-zero spin population, and specific heat. It turns out that the jump at the transition becomes increasingly sharp deeper in the first-order area, in contrast to the Wang-Landau results, where finite-size smoothing gets more severe at lower temperatures.

  13. A time series analysis performed on a 25-year period of kidney transplantation activity in a single center.

    PubMed

    Santori, G; Fontana, I; Bertocchi, M; Gasloli, G; Valente, U

    2010-05-01

    Following the example of many Western countries, where a "minimum volume rule" policy has been adopted as a quality parameter for complex surgical procedures, the Italian National Transplant Centre set the minimum number of kidney transplantation procedures per year at 30 per center. The number of procedures performed in a single center over a long period may be treated as a time series to evaluate trends, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1983, and December 31, 2007, we performed 1376 procedures in adult or pediatric recipients from living or cadaveric donors. The greatest numbers of cases per year were performed in 1998 (n = 86), followed by 2004 (n = 82), 1996 (n = 75), and 2003 (n = 73). A time series analysis performed using R Statistical Software (Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an overall increasing trend after exponential smoothing as well as after seasonal decomposition. However, starting from 2005, we observed a decreasing trend in the series. Holt-Winters exponential smoothing applied to the period 1983 to 2007 suggested 58 procedures for 2008, while in that year there were 52. The time series approach may be helpful in establishing a minimum volume per year at the single-center level. Copyright (c) 2010 Elsevier Inc. All rights reserved.
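
    A minimal sketch of the forecasting step described above: fit Holt-Winters exponential smoothing to the monthly series, predict the next twelve months, and sum them to a yearly volume. The series below is simulated because the center's 1983-2007 data are not reproduced here, and this sketch uses Python's statsmodels rather than the R environment named in the abstract.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.holtwinters import ExponentialSmoothing

      rng = np.random.default_rng(1)             # hypothetical monthly counts
      idx = pd.date_range("1983-01", periods=300, freq="MS")  # Jan 1983 - Dec 2007
      y = pd.Series(4.0 + 0.005 * np.arange(300) + rng.poisson(2, 300), index=idx)

      fit = ExponentialSmoothing(y, trend="add", seasonal="add",
                                 seasonal_periods=12).fit()
      forecast_2008 = fit.forecast(12)           # twelve monthly predictions
      print(round(forecast_2008.sum()))          # expected procedures for the year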

  14. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China

    PubMed Central

    Liu, Dong-jun; Li, Li

    2015-01-01

    For the issue of haze-fog, PM2.5 is the main influencing factor of haze-fog pollution in China. In this study, the trend of PM2.5 concentration was analyzed based on mathematical models and simulation. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average model (ARIMA), Artificial Neural Networks (ANNs) model, and Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods with weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted with the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values in the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the single prediction methods and had better applicability. It offers a new prediction method for the air quality forecasting field. PMID:26110332

  15. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China.

    PubMed

    Liu, Dong-jun; Li, Li

    2015-06-23

    For the issue of haze-fog, PM2.5 is the main influencing factor of haze-fog pollution in China. In this study, the trend of PM2.5 concentration was analyzed based on mathematical models and simulation. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average model (ARIMA), Artificial Neural Networks (ANNs) model, and Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods with weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted with the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values in the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the single prediction methods and had better applicability. It offers a new prediction method for the air quality forecasting field.
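
    The combination step can be sketched as follows: derive a weight for each single model from the entropy of its in-sample error sequence, then combine the three forecasts with those weights. This implements one common entropy-weighting variant; the paper's exact scheme may differ in detail, and the error and forecast numbers below are hypothetical.

      import numpy as np

      def entropy_combination_weights(abs_errors):
          # abs_errors: (n_times, n_models) in-sample absolute errors. Models
          # whose error share is large and erratic receive less weight.
          n, m = abs_errors.shape
          p = abs_errors / abs_errors.sum(axis=0)                 # error shares
          entropy = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(n)
          divergence = 1.0 - entropy
          return (1.0 - divergence / divergence.sum()) / (m - 1)  # sums to 1

      rng = np.random.default_rng(2)             # hypothetical ARIMA/ANN/ESM errors
      err = np.abs(rng.normal(0.0, [1.0, 1.5, 1.2], (50, 3)))
      w = entropy_combination_weights(err)
      combined = w @ np.array([41.2, 43.5, 40.8])  # combine three PM2.5 forecasts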

  16. Construction of exponentially fitted symplectic Runge-Kutta-Nyström methods from partitioned Runge-Kutta methods

    NASA Astrophysics Data System (ADS)

    Monovasilis, Theodore; Kalogiratou, Zacharoula; Simos, T. E.

    2014-10-01

    In this work we derive exponentially fitted symplectic Runge-Kutta-Nyström (RKN) methods from symplectic exponentially fitted partitioned Runge-Kutta (PRK) methods (for the approximate solution of general problems of this category see [18] - [40] and references therein). We construct RKN methods from PRK methods with up to five stages and fourth algebraic order.

  17. Convergence and stability of the exponential Euler method for semi-linear stochastic delay differential equations.

    PubMed

    Zhang, Ling

    2017-01-01

    The main purpose of this paper is to investigate the strong convergence and the exponential mean square stability of the exponential Euler method for semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximate solution converges to the analytic solution of SLSDDEs with strong order [Formula: see text]. On the one hand, the classical stability theorem for SLSDDEs is given by Lyapunov functions; in this paper, however, we study the exponential mean square stability of the exact solution to SLSDDEs using the definition of the logarithmic norm. On the other hand, the implicit Euler scheme for SLSDDEs is known to be exponentially stable in mean square for any step size; in this article we show, by the property of the logarithmic norm, that the explicit exponential Euler method for SLSDDEs shares the same stability for any step size.
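
    One standard form of the exponential Euler scheme for a scalar semi-linear SDDE treats the linear drift exactly and the delayed terms explicitly. The sketch below illustrates that construction; the test equation and coefficients are assumptions, not the paper's setting verbatim.

      import numpy as np

      def exp_euler_sdde(a, f, g, x0, tau, T, h, rng):
          # dX(t) = [a*X(t) + f(X(t - tau))] dt + g(X(t - tau)) dW(t), with
          # constant initial history X(t) = x0 for t <= 0, advanced by
          #   X_{n+1} = exp(a*h) * (X_n + h*f(X_{n-m}) + g(X_{n-m})*dW_n).
          m = int(round(tau / h))                # delay in steps (assumes tau = m*h)
          n_steps = int(round(T / h))
          x = np.empty(n_steps + 1)
          x[0] = x0
          E = np.exp(a * h)
          for n in range(n_steps):
              x_delay = x0 if n - m < 0 else x[n - m]
              dW = rng.normal(0.0, np.sqrt(h))
              x[n + 1] = E * (x[n] + h * f(x_delay) + g(x_delay) * dW)
          return x

      path = exp_euler_sdde(a=-2.0, f=np.sin, g=lambda y: 0.3 * y,
                            x0=1.0, tau=0.5, T=10.0, h=0.01,
                            rng=np.random.default_rng(3))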

  18. GF-3 SAR Image Despeckling Based on the Improved Non-Local Means Using Non-Subsampled Shearlet Transform

    NASA Astrophysics Data System (ADS)

    Shi, R.; Sun, Z.

    2018-04-01

    GF-3 synthetic aperture radar (SAR) images are rich in information and have obvious sparse features. However, speckle appears in GF-3 SAR images due to the coherent imaging system, and it seriously hinders the interpretation of the images. Recently, the Shearlet transform has been applied to image processing for its excellent sparse representation. A new Shearlet-transform-based method built on an improved non-local means is proposed in this paper. Firstly, the logarithmic operation and the non-subsampled Shearlet transform are applied to the GF-3 SAR image. Secondly, to avoid over-smoothing the image details and to keep the weight distribution from being corrupted by speckle, a new non-local means is applied to the transformed high-frequency coefficients. Thirdly, the Shearlet reconstruction is carried out. Finally, the filtered image is obtained by an exponential operation. Experimental results demonstrate that, compared with other despeckling methods, the proposed method suppresses speckle effectively in homogeneous regions and has better capability of edge preservation.

  19. A review of the matrix-exponential formalism in radiative transfer

    NASA Astrophysics Data System (ADS)

    Efremenko, Dmitry S.; Molina García, Víctor; Gimeno García, Sebastián; Doicu, Adrian

    2017-07-01

    This paper outlines the matrix exponential description of radiative transfer. The eigendecomposition method which serves as a basis for computing the matrix exponential and for representing the solution in a discrete ordinate setting is considered. The mathematical equivalence of the discrete ordinate method, the matrix operator method, and the matrix Riccati equations method is proved rigorously by means of the matrix exponential formalism. For optically thin layers, approximate solution methods relying on the Padé and Taylor series approximations to the matrix exponential, as well as on the matrix Riccati equations, are presented. For optically thick layers, the asymptotic theory with higher-order corrections is derived, and parameterizations of the asymptotic functions and constants for a water-cloud model with a Gamma size distribution are obtained.
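
    For the optically thin regime discussed above, a truncated Taylor series already approximates the matrix exponential well, while library routines use Padé approximation with scaling and squaring. A small Python check, using an arbitrary illustrative matrix rather than a radiative transfer layer matrix:

      import numpy as np
      from scipy.linalg import expm             # Pade with scaling and squaring

      def expm_taylor(A, order=10):
          # Truncated Taylor series for exp(A); adequate only when ||A|| is
          # small, i.e. the optically thin case.
          result = np.eye(A.shape[0])
          term = np.eye(A.shape[0])
          for k in range(1, order + 1):
              term = term @ A / k
              result = result + term
          return result

      A = 0.05 * np.array([[-1.0, 0.3], [0.3, -1.0]])      # thin-layer-like matrix
      print(np.max(np.abs(expm(A) - expm_taylor(A, 6))))   # agreement check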

  20. Exponentially damped Lévy flights, multiscaling, and exchange rates

    NASA Astrophysics Data System (ADS)

    Matsushita, Raul; Gleria, Iram; Figueiredo, Annibal; Rathie, Pushpa; Da Silva, Sergio

    2004-02-01

    We employ our previously suggested exponentially damped Lévy flight (Physica A 326 (2003) 544) to study the multiscaling properties of 30 daily exchange rates against the US dollar together with a fictitious euro-dollar rate (Physica A 286 (2000) 353). Though multiscaling is theoretically absent in both stable Lévy processes and abruptly truncated Lévy flights, it is nevertheless characteristic of smoothly truncated Lévy flights (Phys. Lett. A 266 (2000) 282; Eur. Phys. J. B 4 (1998) 143). We have already defined a class of “quasi-stable” processes in connection with the finding that single scaling is pervasive among the dollar price of foreign currencies (Physica A 323 (2003) 601). Here we show that the same goes as far as multiscaling is concerned. Our novel findings incidentally reinforce the case for real-world relevance of the Lévy flights for modeling financial prices.

  1. Optical coherence tomography assessment of vessel wall degradation in aneurysmatic thoracic aortas

    NASA Astrophysics Data System (ADS)

    Real, Eusebio; Eguizabal, Alma; Pontón, Alejandro; Val-Bernal, J. Fernando; Mayorga, Marta; Revuelta, José M.; López-Higuera, José; Conde, Olga M.

    2013-06-01

    Optical coherence tomographic images of ascending thoracic human aortas from aneurysms exhibit disorders in the smooth muscle cell structure of the media layer of the aortic vessel as well as elastin degradation. Ex-vivo measurements of human samples provide results that correlate with pathologist diagnosis in aneurysmatic and control aortas. The observed disorders are studied as possible hallmarks for aneurysm diagnosis. To this end, the backscattering profile along the vessel thickness has been evaluated by fitting its decay against two different models, a third-order polynomial fitting and an exponential fitting. The discontinuities present in the vessel wall of aneurysmatic aortas are slightly better identified with the exponential approach. Aneurysmatic aortic walls present uneven reflectivity decay when compared with healthy vessels. The fitting error has emerged as the most favorable indicator for aneurysm diagnosis, as it provides a measure of how uniform the decay is along the vessel thickness.
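
    The two decay models can be compared in a few lines: fit an exponential and a cubic polynomial to a depth profile and use the RMS residual as the uniformity indicator described above. The profile below is synthetic and all parameter values are illustrative assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def exp_decay(z, a, mu, c):
          return a * np.exp(-mu * z) + c

      def decay_fit_errors(z, profile):
          # RMS residuals of exponential and cubic fits to an A-scan intensity
          # profile; an uneven (degraded) wall should fit both models poorly.
          popt, _ = curve_fit(exp_decay, z, profile, p0=(profile[0], 1.0, 0.0),
                              maxfev=5000)
          rms_exp = np.sqrt(np.mean((profile - exp_decay(z, *popt)) ** 2))
          poly = np.polyval(np.polyfit(z, profile, 3), z)
          rms_poly = np.sqrt(np.mean((profile - poly) ** 2))
          return rms_exp, rms_poly

      z = np.linspace(0.0, 1.5, 200)             # depth (mm), hypothetical
      profile = 0.8 * np.exp(-2.0 * z) + 0.02 * np.random.default_rng(4).random(200)
      print(decay_fit_errors(z, profile))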

  2. Experimental tests of truncated diffusion in fault damage zones

    NASA Astrophysics Data System (ADS)

    Suzuki, Anna; Hashida, Toshiyuki; Li, Kewen; Horne, Roland N.

    2016-11-01

    Fault zones affect the flow paths of fluids in groundwater aquifers and geological reservoirs. Fault-related fracture damage decreases to background levels with increasing distance from the fault core according to a power law. This study investigated mass transport in such a fault-related structure using nonlocal models. A column flow experiment is conducted to create a permeability distribution that varies with distance from a main conduit. The experimental tracer response curve is preasymptotic and implies subdiffusive transport, which is slower than the normal Fickian diffusion. If the surrounding area is a finite domain, an upper truncated behavior in tracer response (i.e., exponential decline at late times) is observed. The tempered anomalous diffusion (TAD) model captures the transition from subdiffusive to Fickian transport, which is characterized by a smooth transition from power-law to an exponential decline in the late-time breakthrough curves.

  3. The Use of Shrinkage Techniques in the Estimation of Attrition Rates for Large Scale Manpower Models

    DTIC Science & Technology

    1988-07-27

    autoregressive model combined with a linear program that solves for the coefficients using MAD. But this success has diminished with time (Rowe...) 'Harrison-Stevens Forecasting and the Multiprocess Dynamic Linear Model', The American Statistician, v. 40, pp. 129-135, 1986. 8. Box, G. E. P. and...1950. 40. McCullagh, P. and Nelder, J., Generalized Linear Models, Chapman and Hall, 1983. 41. McKenzie, E., General Exponential Smoothing and the

  4. Exponential operations and aggregation operators of interval neutrosophic sets and their decision making methods.

    PubMed

    Ye, Jun

    2016-01-01

    An interval neutrosophic set (INS) is a subclass of a neutrosophic set and a generalization of an interval-valued intuitionistic fuzzy set, in which the characteristics of the INS are independently described by the interval numbers of its truth-membership, indeterminacy-membership, and falsity-membership degrees. However, the exponential parameters (weights) of all the existing exponential operational laws of INSs and the corresponding exponential aggregation operators are crisp values in interval neutrosophic decision making problems. As a supplement, this paper first introduces new exponential operational laws of INSs, where the bases are crisp values or interval numbers and the exponents are interval neutrosophic numbers (INNs), the basic elements of INSs. Then, we propose an interval neutrosophic weighted exponential aggregation (INWEA) operator and a dual interval neutrosophic weighted exponential aggregation (DINWEA) operator based on these exponential operational laws, and introduce comparative methods based on cosine measure functions for INNs and dual INNs. Further, we develop decision-making methods based on the INWEA and DINWEA operators. Finally, a practical example on the selection of global suppliers is provided to illustrate the applicability and rationality of the proposed methods.

  5. Shock wave refraction enhancing conditions on an extended interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markhotok, A.; Popovic, S.

    2013-04-15

    We determined the law of shock wave refraction for a class of extended interfaces with continuously variable gradients. When the interface is extended or when the gas parameters vary fast enough, the interface cannot be considered as sharp or smooth and the existing calculation methods cannot be applied. The expressions we derived are general enough to cover all three types of the interface and are valid for any law of continuously varying parameters. We apply the equations to the case of exponentially increasing temperature on the boundary and compare the results for all three types of interfaces. We have demonstrated that the type of interface can increase or inhibit the shock wave refraction. Our findings can be helpful in understanding the results obtained in energy deposition experiments as well as for controlling the shock-plasma interaction in other settings.

  6. Short-term leprosy forecasting from an expert opinion survey.

    PubMed

    Deiner, Michael S; Worden, Lee; Rittel, Alex; Ackley, Sarah F; Liu, Fengchen; Blum, Laura; Scott, James C; Lietman, Thomas M; Porco, Travis C

    2017-01-01

    We conducted an expert survey of leprosy (Hansen's Disease) and neglected tropical disease experts in February 2016. Experts were asked to forecast the next year of reported cases for the world, for the top three countries, and for selected states and territories of India. A total of 103 respondents answered at least one forecasting question. We elicited lower and upper confidence bounds. Comparing these results to regression and exponential smoothing, we found no evidence that any forecasting method outperformed the others. We found evidence that experts who believed it was more likely to achieve global interruption of transmission goals and disability reduction goals had higher error scores for India and Indonesia, but lower for Brazil. Even for a disease whose epidemiology changes on a slow time scale, forecasting exercises such as we conducted are simple and practical. We believe they can be used on a routine basis in public health.

  7. Short-term leprosy forecasting from an expert opinion survey

    PubMed Central

    Deiner, Michael S.; Worden, Lee; Rittel, Alex; Ackley, Sarah F.; Liu, Fengchen; Blum, Laura; Scott, James C.; Lietman, Thomas M.

    2017-01-01

    We conducted an expert survey of leprosy (Hansen’s Disease) and neglected tropical disease experts in February 2016. Experts were asked to forecast the next year of reported cases for the world, for the top three countries, and for selected states and territories of India. A total of 103 respondents answered at least one forecasting question. We elicited lower and upper confidence bounds. Comparing these results to regression and exponential smoothing, we found no evidence that any forecasting method outperformed the others. We found evidence that experts who believed it was more likely to achieve global interruption of transmission goals and disability reduction goals had higher error scores for India and Indonesia, but lower for Brazil. Even for a disease whose epidemiology changes on a slow time scale, forecasting exercises such as we conducted are simple and practical. We believe they can be used on a routine basis in public health. PMID:28813531

  8. Fluid particles only separate exponentially in the dissipation range of turbulence after extremely long times

    NASA Astrophysics Data System (ADS)

    Dhariwal, Rohit; Bragg, Andrew D.

    2018-03-01

    In this paper, we consider how the statistical moments of the separation between two fluid particles grow with time when their separation lies in the dissipation range of turbulence. In this range, the fluid velocity field varies smoothly and the relative velocity of two fluid particles depends linearly upon their separation. While this may suggest that the rate at which fluid particles separate is exponential in time, this is not guaranteed because the strain rate governing their separation is a strongly fluctuating quantity in turbulence. Indeed, Afik and Steinberg [Nat. Commun. 8, 468 (2017), 10.1038/s41467-017-00389-8] argue that there is no convincing evidence that the moments of the separation between fluid particles grow exponentially with time in the dissipation range of turbulence. Motivated by this, we use direct numerical simulations (DNS) to compute the moments of particle separation over very long periods of time in a statistically stationary, isotropic turbulent flow to see if we ever observe evidence for exponential separation. Our results show that if the initial separation between the particles is infinitesimal, the moments of the particle separation first grow as power laws in time, but we then observe convincing evidence that at sufficiently long times the moments do grow exponentially. However, this exponential growth is only observed after extremely long times ≳200 τη, where τη is the Kolmogorov time scale. This is due to fluctuations in the strain rate about its mean value measured along the particle trajectories, the effect of which on the moments of the particle separation persists for very long times. We also consider the backward-in-time (BIT) moments of the particle separation, and observe that they too grow exponentially in the long-time regime. However, a dramatic consequence of the exponential separation is that at long times the difference between the rate of the particle separation forward in time (FIT) and BIT grows exponentially in time, leading to incredibly strong irreversibility in the dispersion. This is in striking contrast to the irreversibility of their relative dispersion in the inertial range, where the difference between FIT and BIT is constant in time according to Richardson's phenomenology.

  9. Exponential integrators in time-dependent density-functional calculations

    NASA Astrophysics Data System (ADS)

    Kidd, Daniel; Covington, Cody; Varga, Kálmán

    2017-12-01

    The integrating factor and exponential time differencing methods are implemented and tested for solving the time-dependent Kohn-Sham equations. Popular time propagation methods used in physics, as well as other robust numerical approaches, are compared to these exponential integrator methods in order to judge the relative merit of the computational schemes. We determine an improvement in accuracy of multiple orders of magnitude when describing dynamics driven primarily by a nonlinear potential. For cases of dynamics driven by a time-dependent external potential, the accuracy of the exponential integrator methods is less enhanced but still matches or outperforms the best of the conventional methods tested.
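
    The two schemes compared above can be illustrated on a scalar stiff test equation u' = c*u + N(u): the integrating-factor method applies Euler after factoring out the linear part, while first-order exponential time differencing (ETD1) integrates the nonlinear term against the exponential kernel. This is a toy analogue, not the time-dependent Kohn-Sham implementation.

      import numpy as np

      def step_if_euler(u, c, N, h):
          # Integrating-factor Euler: exact on the linear part, Euler on N.
          return np.exp(c * h) * (u + h * N(u))

      def step_etd1(u, c, N, h):
          # First-order exponential time differencing (ETD1).
          return np.exp(c * h) * u + (np.exp(c * h) - 1.0) / c * N(u)

      # Test problem u' = c*u + u^2 with a strong (stiff) linear part.
      c, h, u = -10.0, 0.01, 1.0
      N = lambda v: v ** 2
      for _ in range(200):
          u = step_etd1(u, c, N, h)
      print(u)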

  10. Grand canonical electronic density-functional theory: Algorithms and applications to electrochemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundararaman, Ravishankar; Goddard, III, William A.; Arias, Tomas A.

    First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Lastly, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.

  11. Grand canonical electronic density-functional theory: Algorithms and applications to electrochemistry.

    PubMed

    Sundararaman, Ravishankar; Goddard, William A; Arias, Tomas A

    2017-03-21

    First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Finally, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.

  12. Grand canonical electronic density-functional theory: Algorithms and applications to electrochemistry

    DOE PAGES

    Sundararaman, Ravishankar; Goddard, III, William A.; Arias, Tomas A.

    2017-03-16

    First-principles calculations combining density-functional theory and continuum solvation models enable realistic theoretical modeling and design of electrochemical systems. When a reaction proceeds in such systems, the number of electrons in the portion of the system treated quantum mechanically changes continuously, with a balancing charge appearing in the continuum electrolyte. A grand-canonical ensemble of electrons at a chemical potential set by the electrode potential is therefore the ideal description of such systems that directly mimics the experimental condition. We present two distinct algorithms: a self-consistent field method and a direct variational free energy minimization method using auxiliary Hamiltonians (GC-AuxH), to solve the Kohn-Sham equations of electronic density-functional theory directly in the grand canonical ensemble at fixed potential. Both methods substantially improve performance compared to a sequence of conventional fixed-number calculations targeting the desired potential, with the GC-AuxH method additionally exhibiting reliable and smooth exponential convergence of the grand free energy. Lastly, we apply grand-canonical density-functional theory to the under-potential deposition of copper on platinum from chloride-containing electrolytes and show that chloride desorption, not partial copper monolayer formation, is responsible for the second voltammetric peak.

  13. Sodium 22+ washout from cultured rat cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kino, M.; Nakamura, A.; Hopp, L.

    1986-10-01

    The washout of Na⁺ isotopes from tissues and cells is quite complex and not well defined. To further gain insight into this process, we have studied ²²Na⁺ washout from cultured Wistar rat skin fibroblasts and vascular smooth muscle cells (VSMCs). In these preparations, ²²Na⁺ washout is described by a general three-exponential function. The exponential factor of the fastest component (k1) and the initial exchange rate constant (kie) of cultured fibroblasts decrease in magnitude in response to incubation in K⁺-deficient medium or in the presence of ouabain and increase in magnitude when the cells are incubated in a Ca²⁺-deficient medium. As the magnitude of the kie declines (in the presence of ouabain) to the level of the exponential factor of the middle component (k2), ²²Na⁺ washout is adequately described by a two-exponential function. When the kie is further diminished (in the presence of both ouabain and phloretin) to the range of the exponential factor of the slowest component (k3), the washout of ²²Na⁺ is apparently monoexponential. Calculations of the cellular Na⁺ concentrations, based on the ²²Na⁺ activity in the cells at the initiation of the washout experiments and the medium specific activity, agree with atomic absorption spectrometry measurements of the cellular concentration of this ion. Thus, all three components of ²²Na⁺ washout from cultured rat cells are of cellular origin. Using the exponential parameters, compartmental analyses of two models (in parallel and in series) with three cellular Na⁺ pools were performed. The results indicate that, independent of the model chosen, the relative size of the largest Na⁺ pool is 92-93% in fibroblasts and approximately 96% in VSMCs. This pool is most likely to represent the cytosol.

  14. On High-Order Radiation Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas

    1995-01-01

    In this paper we develop the theory of high-order radiation boundary conditions for wave propagation problems. In particular, we study the convergence of sequences of time-local approximate conditions to the exact boundary condition, and subsequently estimate the error in the solutions obtained using these approximations. We show that for finite times the Padé approximants proposed by Engquist and Majda lead to exponential convergence if the solution is smooth, but that good long-time error estimates cannot hold for spatially local conditions. Applications in fluid dynamics are also discussed.

  15. Demand forecasting of electricity in Indonesia with limited historical data

    NASA Astrophysics Data System (ADS)

    Dwi Kartikasari, Mujiati; Rohmad Prayogi, Arif

    2018-03-01

    Demand forecasting of electricity is an important activity for electricity providers, giving a picture of future electricity demand. Electricity demand can be predicted using time series models. In this paper, the double moving average model, Holt's exponential smoothing model, and the grey model GM(1,1) are used to predict electricity demand in Indonesia under the condition of limited historical data. The results show that the grey model GM(1,1) has the smallest values of MAE (mean absolute error), MSE (mean squared error), and MAPE (mean absolute percentage error).
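
    The grey model GM(1,1) singled out above is compact enough to state in full: accumulate the series, fit the development coefficient a and grey input b by least squares, predict on the accumulated scale, and difference back. The sketch below follows the standard construction; the demand figures are hypothetical.

      import numpy as np

      def gm11_forecast(x0, horizon):
          # Grey model GM(1,1) for short series with limited history.
          x1 = np.cumsum(x0)                             # accumulated series
          z1 = 0.5 * (x1[1:] + x1[:-1])                  # mean generating sequence
          B = np.column_stack([-z1, np.ones_like(z1)])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(len(x0) + horizon)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          x0_hat = np.diff(x1_hat, prepend=x1_hat[0])    # back to original scale
          x0_hat[0] = x0[0]
          return x0_hat[-horizon:]

      demand = np.array([155.0, 162.1, 171.9, 183.4, 195.3])  # hypothetical TWh
      print(gm11_forecast(demand, horizon=3))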

  16. An accurate method for evaluating the kernel of the integral equation relating lift to downwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and the exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation currently most often used for this purpose. The method can be used to generate approximations attaining any desired trade-off between accuracy and computing cost.
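
    The core idea, fitting a sum of exponentials whose exponents form a geometric sequence by linear least squares, can be sketched as follows. The target function and the spacing parameters here are illustrative assumptions; the paper additionally automates the choice of the exponent multiplier.

      import numpy as np

      def exp_approx_coeffs(f, x, n_terms, b0=0.1, ratio=1.6):
          # Fit f(x) ~ sum_i c_i * exp(-b_i * x) on the samples x, where the
          # exponents b_i = b0 * ratio**i form a geometric sequence.
          exps = b0 * ratio ** np.arange(n_terms)
          A = np.exp(-np.outer(x, exps))                 # design matrix
          c, *_ = np.linalg.lstsq(A, f(x), rcond=None)
          return exps, c

      f = lambda x: 1.0 / np.sqrt(1.0 + x ** 2)          # algebraic-type kernel part
      x = np.linspace(0.0, 20.0, 400)
      exps, c = exp_approx_coeffs(f, x, n_terms=8)
      approx = np.exp(-np.outer(x, exps)) @ c
      print(np.max(np.abs(approx - f(x))))               # max absolute error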

  17. Cross-bridge elasticity in single smooth muscle cells

    PubMed Central

    1983-01-01

    In smooth muscle, a cross-bridge mechanism is believed to be responsible for active force generation and fiber shortening. In the present studies, the viscoelastic and kinetic properties of the cross- bridge were probed by eliciting tension transients in response to small, rapid, step length changes (delta L = 0.3-1.0% Lcell in 2 ms). Tension transients were obtained in a single smooth muscle cell isolated from the toad (Bufo marinus) stomach muscularis, which was tied between a force transducer and a displacement device. To record the transients, which were of extremely small magnitude (0.1 microN), a high-frequency (400 Hz), ultrasensitive force transducer (18 mV/microN) was designed and built. The transients obtained during maximal force generation (Fmax = 2.26 microN) were characterized by a linear elastic response (Emax = 1.26 X 10(4) mN/mm2) coincident with the length step, which was followed by a biphasic tension recovery made up of two exponentials (tau fast = 5-20 ms, tau slow = 50-300 ms). During the development of force upon activation, transients were elicited. The relationship between stiffness and force was linear, which suggests that the transients originate within the cross-bridge and reflect the cross-bridge's viscoelastic and kinetic properties. The observed fiber elasticity suggests that the smooth muscle cross-bridge is considerably more compliant than in fast striated muscle. A thermodynamic model is presented that allows for an analysis of the factors contributing to the increased compliance of the smooth muscle cross-bridge. PMID:6413640

  18. GENIE(++): A Multi-Block Structured Grid System

    NASA Technical Reports Server (NTRS)

    Williams, Tonya; Nadenthiran, Naren; Thornburg, Hugh; Soni, Bharat K.

    1996-01-01

    The computer code GENIE++ is a continuously evolving grid system containing a multitude of proven geometry/grid techniques. The generation process in GENIE++ is based on an earlier version of the code. The process uses several techniques either separately or in combination to quickly and economically generate sculptured geometry descriptions and grids for arbitrary geometries. The computational mesh is formed by using an appropriate algebraic method. Grid clustering is accomplished with either exponential or hyperbolic tangent routines which allow the user to specify a desired point distribution. Grid smoothing can be accomplished by using an elliptic solver with proper forcing functions. B-spline and Non-Uniform Rational B-spline (NURBS) algorithms are used for surface definition and redistribution. The built-in sculptured geometry definition with desired distribution of points, automatic Bezier curve/surface generation for interior boundaries/surfaces, and surface redistribution is based on NURBS. Weighted Lagrange/Hermite transfinite interpolation methods, interactive geometry/grid manipulation modules, and on-line graphical visualization of the generation process are salient features of this system which result in a significant time savings for a given geometry/grid application.
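
    As an illustration of the clustering routines mentioned above, a one-sided hyperbolic tangent distribution concentrates grid points near one boundary; this is a generic sketch of the technique, not GENIE++ source code.

      import numpy as np

      def tanh_cluster(n, beta):
          # One-sided tanh point distribution on [0, 1], clustered near x = 0;
          # larger beta gives stronger clustering.
          eta = np.linspace(0.0, 1.0, n)
          return 1.0 + np.tanh(beta * (eta - 1.0)) / np.tanh(beta)

      x = tanh_cluster(11, beta=2.5)
      print(np.diff(x))    # spacing grows away from the clustered end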

  19. The exponential behavior and stabilizability of the stochastic magnetohydrodynamic equations

    NASA Astrophysics Data System (ADS)

    Wang, Huaqiao

    2018-06-01

    This paper studies the two-dimensional stochastic magnetohydrodynamic equations, which are used to describe turbulent flows in magnetohydrodynamics. The exponential behavior and the exponential mean square stability of the weak solutions are proved by the energy method. Furthermore, we establish pathwise exponential stability by using the exponential mean square stability. When the stochastic perturbations satisfy certain additional hypotheses, we can also obtain pathwise exponential stability results without using mean square stability.

  20. Location priority for non-formal early childhood education school based on promethee method and map visualization

    NASA Astrophysics Data System (ADS)

    Ayu Nurul Handayani, Hemas; Waspada, Indra

    2018-05-01

    Non-formal Early Childhood Education (non-formal ECE) is education provided for children under 4 years old. In the District of Banyumas, non-formal ECE is monitored by the District Government of Banyumas with the help of Sanggar Kegiatan Belajar (SKB) Purwokerto, one of the organizers of non-formal education. The government has a program for distributing ECE to all villages in Indonesia; however, the locations for constructing ECE schools in the years ahead have not yet been arranged. Therefore, to support that program, a decision support system was built to recommend villages for constructing ECE buildings. The data are projected using Brown's Double Exponential Smoothing method, and the Preference Ranking Organization Method for Enrichment Evaluation (Promethee) is used to generate the priority order. As its recommendation output, the system generates a map visualization colored according to the priority level of each sub-district and village area. The system was tested with black-box testing, Promethee testing, and usability testing. The results showed that the system functionality and the Promethee algorithm worked properly and that users were satisfied.
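
    Brown's double exponential smoothing, used above for the projection step, maintains two smoothed statistics and extrapolates a local linear trend. The sketch below follows the textbook formulation; the input series and smoothing constant are hypothetical.

      import numpy as np

      def brown_des_forecast(y, alpha, horizon):
          # Brown's one-parameter double exponential smoothing: forecasts
          # y_hat(t+m) = a_t + b_t * m from the end of the series.
          s1 = s2 = y[0]
          for value in y[1:]:
              s1 = alpha * value + (1.0 - alpha) * s1   # first smoothing
              s2 = alpha * s1 + (1.0 - alpha) * s2      # second smoothing
          a = 2.0 * s1 - s2
          b = alpha / (1.0 - alpha) * (s1 - s2)
          return a + b * np.arange(1, horizon + 1)

      children = np.array([210, 224, 241, 250, 263, 279], dtype=float)  # hypothetical
      print(brown_des_forecast(children, alpha=0.4, horizon=3))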

  1. Models for Train Passenger Forecasting of Java and Sumatra

    NASA Astrophysics Data System (ADS)

    Sartono

    2017-04-01

    People tend to take public transportation to avoid heavy traffic, especially in Java. In Jakarta, the number of railway passengers exceeds the capacity of the trains at peak times. This is an opportunity as well as a challenge: if managed well, the company can earn a high profit; otherwise, it may lead to disaster. This article discusses models for train passenger numbers and seeks reasonable models for making predictions over time. The Box-Jenkins method is employed to develop a basic model, which is then compared with models obtained using the exponential smoothing method and the regression method. The results show that the Holt-Winters model predicts better at one-month, three-month, and six-month horizons for passengers in Java, while SARIMA(1,1,0)(2,0,0) is more accurate at nine-month and twelve-month horizons. For Sumatra passenger forecasting, SARIMA(1,1,1)(0,0,2) gives a better approximation one month ahead, and an ARIMA model is best for three-month-ahead prediction. For the remaining horizons of six, nine, and twelve months, the Trend Seasonal Linear Model has the lowest RMSE.

  2. SU-E-T-259: Particle Swarm Optimization in Radial Dose Function Fitting for a Novel Iodine-125 Seed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, X; Duan, J; Popple, R

    2014-06-01

    Purpose: To determine the coefficients of bi- and tri-exponential functions for the best fit of the radial dose functions of the new iodine brachytherapy source: the Iodine-125 Seed AgX-100. Methods: The particle swarm optimization (PSO) method was used to search for the coefficients of the bi- and tri-exponential functions that yield the best fit to data published for a few selected radial distances from the source. The coefficients were encoded into particles, which moved through the search space by following their local and global best-known positions. In each generation, particles were evaluated through their fitness function and their positions were changed through their velocities. This procedure was repeated until the convergence criterion was met or the maximum generation was reached. All best particles were found in fewer than 1,500 generations. Results: For the I-125 seed AgX-100 considered as a point source, the maximum deviation from the published data is less than 2.9% for the bi-exponential fitting function and 0.2% for the tri-exponential fitting function. For its line source, the maximum deviation is less than 1.1% for the bi-exponential fitting function and 0.08% for the tri-exponential fitting function. Conclusion: PSO is a powerful method for finding coefficients of bi-exponential and tri-exponential fitting functions. The bi- and tri-exponential models of the Iodine-125 seed AgX-100 point and line sources obtained with PSO optimization provide accurate analytical forms of the radial dose function. The tri-exponential fitting function is more accurate than the bi-exponential function.
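
    A basic global-best PSO suffices to reproduce the kind of fit described: particles explore the coefficient space and are pulled toward their personal and global best positions. The sketch below fits a bi-exponential radial dose function to made-up data; the inertia and acceleration constants are common textbook values, and none of the numbers are the AgX-100 tabulated data.

      import numpy as np

      def pso_minimize(loss, bounds, n_particles=40, n_iter=500, seed=5):
          # Global-best PSO with standard inertia/acceleration constants.
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          pos = rng.uniform(lo, hi, (n_particles, len(lo)))
          vel = np.zeros_like(pos)
          pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
          gbest = pbest[np.argmin(pbest_val)]
          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, len(lo)))
              vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
              pos = np.clip(pos + vel, lo, hi)
              vals = np.array([loss(p) for p in pos])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
              gbest = pbest[np.argmin(pbest_val)]
          return gbest

      # Fit g(r) ~ c1*exp(-l1*r) + c2*exp(-l2*r) to hypothetical tabulated data.
      r = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 7.0])            # cm
      g = np.array([1.04, 1.00, 0.85, 0.70, 0.45, 0.28])      # illustrative g(r)
      model = lambda p, r: p[0] * np.exp(-p[1] * r) + p[2] * np.exp(-p[3] * r)
      loss = lambda p: np.sum((model(p, r) - g) ** 2)
      best = pso_minimize(loss, bounds=[(0, 2), (0, 2), (0, 2), (0, 2)])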

  3. Inhomogeneous growth of fluctuations of concentration of inertial particles in channel turbulence

    NASA Astrophysics Data System (ADS)

    Fouxon, Itzhak; Schmidt, Lukas; Ditlevsen, Peter; van Reeuwijk, Maarten; Holzner, Markus

    2018-06-01

    We study the growth of concentration fluctuations of weakly inertial particles in turbulent channel flow starting with a smooth initial distribution. The steady-state concentration is singular and multifractal, so the growth describes the increasingly rugged structure of the distribution. We demonstrate that inhomogeneity influences the growth of concentration fluctuations profoundly. For homogeneous turbulence the growth is exponential and is fully determined by Kolmogorov-scale eddies. We derive lognormality of the statistics in this case. The growth exponents of the moments are proportional to the sum of Lyapunov exponents, which is quadratic in the small inertia of the particles. In contrast, for inhomogeneous turbulence the growth is linear in inertia. It involves correlations of inertial-range and viscous-scale eddies that turn the growth into a stretched exponential law with exponent three halves. We demonstrate using direct numerical simulations that the resulting growth rate can differ by orders of magnitude over the channel height. This strong variation might have relevance in the planetary boundary layer.

  4. All-sky monitor observations of the decay of A0620-00 (Nova monocerotis 1975)

    NASA Technical Reports Server (NTRS)

    Kaluzienski, L. J.; Holt, S. S.; Boldt, E. A.; Serlemitsos, P. J.

    1976-01-01

    The All-Sky X-ray Monitor onboard Ariel 5 has observed the 3-6 keV decline of the bright transient X-ray source A0620-00 on a virtually continuous basis during the period September 1975 - March 1976. The source behavior on timescales 100 minutes is characterized by smooth, exponential decays interrupted by substantial increases in October and February. The latter increase was an order-of-magnitude rise above the extrapolated exponential fall-off, and was followed by a final rapid decline. Upper limits of 2.5% and 10% were found for any periodicities in the range 0d.2 - 10d during the early and later decay phases, respectively. A probable correlation between the optical and 3-6 keV emission from A0620-00 was noted, effectively ruling out models involving traditional optical novae in favor of Roche-lobe overflow in a binary system. The existing data on the transient X-ray sources is consistent with two distinct luminosity-lifetime classes of these objects.

  5. KAM tori and whiskered invariant tori for non-autonomous systems

    NASA Astrophysics Data System (ADS)

    Canadell, Marta; de la Llave, Rafael

    2015-08-01

    We consider non-autonomous dynamical systems which converge to autonomous (or periodic) systems exponentially fast in time. Such systems appear naturally as models of many physical processes affected by external pulses. We introduce definitions of non-autonomous invariant tori and non-autonomous whiskered tori and their invariant manifolds, and we prove their persistence under small perturbations, smooth dependence on parameters, and several geometric properties (if the systems are Hamiltonian, the tori are Lagrangian manifolds). We note that such definitions are problematic for general time-dependent systems, but we show that they are unambiguous for systems converging exponentially fast to autonomous. The proof of persistence relies only on a standard Implicit Function Theorem in Banach spaces and it does not require that the rotations in the tori are Diophantine nor that the systems we consider preserve any geometric structure. We only require that the autonomous system preserves these objects. In particular, when the autonomous system is integrable, we obtain the persistence of tori with rational rotation. We also discuss fast and efficient algorithms for their computation. The method also applies to infinite dimensional systems which define a good evolution, e.g. PDEs. When the systems considered are Hamiltonian, we show that the time dependent invariant tori are isotropic. Hence, the invariant tori of maximal dimension are Lagrangian manifolds. We also obtain that the (un)stable manifolds of whiskered tori are Lagrangian manifolds. We also include a comparison with the more global theory developed in Blazevski and de la Llave (2011).

  6. Two Point Exponential Approximation Method for structural optimization of problems with frequency constraints

    NASA Technical Reports Server (NTRS)

    Fadel, G. M.

    1991-01-01

    The two point exponential approximation method was introduced by Fadel et al. (Fadel, 1990) and tested on structural optimization problems with stress and displacement constraints. The results reported in earlier papers were promising, and the method, which consists of correcting Taylor series approximations using previous design history, is tested in this paper on optimization problems with frequency constraints. The aim of the research is to verify the robustness and speed of convergence of the two point exponential approximation method when highly non-linear constraints are used.

  7. Robust Smoothing: Smoothing Parameter Selection and Applications to Fluorescence Spectroscopy

    PubMed Central

    Lee, Jong Soo; Cox, Dennis D.

    2009-01-01

    Fluorescence spectroscopy has emerged in recent years as an effective way to detect cervical cancer. Investigation of the data preprocessing stage uncovered a need for robust smoothing to extract the signal from the noise. Various robust smoothing methods for estimating fluorescence emission spectra are compared, and data-driven methods for the selection of the smoothing parameter are suggested. The methods currently implemented in R for smoothing parameter selection proved to be unsatisfactory, and a computationally efficient procedure that approximates robust leave-one-out cross validation is presented. PMID:20729976
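
    Robust locally weighted smoothing of the kind discussed can be illustrated with lowess, whose re-weighting iterations down-weight outlier spikes and whose span parameter plays the role of the smoothing parameter selected in the paper. The spectrum below is synthetic, and this sketch uses Python's statsmodels rather than the R implementations the paper evaluates.

      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      rng = np.random.default_rng(6)             # hypothetical noisy emission spectrum
      wavelength = np.linspace(350.0, 700.0, 351)
      spectrum = np.exp(-0.5 * ((wavelength - 500.0) / 60.0) ** 2)
      spectrum += 0.02 * rng.standard_normal(wavelength.size)
      spectrum[::40] += 0.5                      # outlier spikes

      # it=3 robustifying iterations; frac is the smoothing parameter.
      smooth = lowess(spectrum, wavelength, frac=0.1, it=3, return_sorted=False)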

  8. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    NASA Astrophysics Data System (ADS)

    Moschetti, M. P.; Mueller, C. S.; Boyd, O. S.; Petersen, M. D.

    2013-12-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.

  9. Comparison of smoothing methods for the development of a smoothed seismicity model for Alaska and the implications for seismic hazard

    USGS Publications Warehouse

    Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.

    2014-01-01

    In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood values from all rate models to rank the smoothing methods. We find that adaptively smoothed seismicity models yield better likelihood values than the fixed smoothing models. Holding all other (source and ground motion) models constant, we calculate seismic hazard curves for all points across Alaska on a 0.1 degree grid, using the adaptively smoothed and fixed smoothed seismicity models separately. Because adaptively smoothed models concentrate seismicity near the earthquake epicenters where seismicity rates are high, the corresponding hazard values are higher, locally, but reduced with distance from observed seismicity, relative to the hazard from fixed-bandwidth models. We suggest that adaptively smoothed seismicity models be considered for implementation in the update to the ASHMs because of their improved likelihood estimates relative to fixed smoothing methods; however, concomitant increases in seismic hazard will cause significant changes in regions of high seismicity, such as near the subduction zone, northeast of Kotzebue, and along the NNE trending zone of seismicity in the Alaskan interior.

  10. Time variation analysis of the daily Forbush decrease indices

    NASA Astrophysics Data System (ADS)

    Patra, Sankar Narayan; Ghosh, Koushik; Panja, Subhash Chandra

    2011-08-01

    In the present paper we have analyzed the daily Forbush decrease indices from January 1, 1967 to December 31, 2003. After first filtering the time series by simple exponential smoothing, we applied the Scargle periodogram method to the processed time series in order to search for its time variation. The study exhibits periodicities around 174, 245, 261, 321, 452, 510, 571, 584, 662, 703, 735, 741, 767, 774, 820, 970, 1062, 1082, 1489, 1715, 2317, 2577, 2768, 3241 and 10630 days with confidence levels higher than 90%. Some of these periods are significantly similar to the observed periodicities of other solar activities, such as solar filament activity, solar electron flare occurrence, solar-flare rate, solar proton events, solar neutrino flux, solar irradiance, cosmic ray intensity and flare, spectrum of the sunspot, solar wind, southern coronal hole area and the solar cycle, which may suggest that the Forbush decrease behaves similarly to these solar activities and that these activities may have a common origin.
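
    As a minimal illustration of this smooth-then-scan pipeline (the smoothing constant, synthetic series, and period grid below are arbitrary assumptions, not values from the paper):

```python
import numpy as np
from scipy.signal import lombscargle

def simple_exponential_smoothing(x, alpha=0.2):
    """s[t] = alpha * x[t] + (1 - alpha) * s[t-1]: a one-parameter low-pass filter."""
    s = np.empty_like(x, dtype=float)
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

# Smooth a daily index series, then scan for periodicities (Scargle periodogram)
t = np.arange(3000, dtype=float)  # days
x = np.sin(2 * np.pi * t / 174) + np.random.default_rng(1).standard_normal(3000)
s = simple_exponential_smoothing(x)
periods = np.linspace(100, 2000, 500)
power = lombscargle(t, s - s.mean(), 2 * np.pi / periods)
```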

  11. Path following control of planar snake robots using virtual holonomic constraints: theory and experiments.

    PubMed

    Rezapour, Ehsan; Pettersen, Kristin Y; Liljebäck, Pål; Gravdahl, Jan T; Kelasidi, Eleni

    This paper considers path following control of planar snake robots using virtual holonomic constraints. In order to present a model-based path following control design for the snake robot, we first derive the Euler-Lagrange equations of motion of the system. Subsequently, we define geometric relations among the generalized coordinates of the system, using the method of virtual holonomic constraints. These appropriately defined constraints shape the geometry of a constraint manifold for the system, which is a submanifold of the configuration space of the robot. Furthermore, we show that the constraint manifold can be made invariant by a suitable choice of feedback. In particular, we analytically design a smooth feedback control law to exponentially stabilize the constraint manifold. We show that enforcing the appropriately defined virtual holonomic constraints for the configuration variables implies that the robot converges to and follows a desired geometric path. Numerical simulations and experimental results are presented to validate the theoretical approach.

  12. Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data

    PubMed Central

    Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.

    2013-01-01

    Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
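
    The claimed equivalence can be written out in a few lines. With event indicator d_i, time at risk t_i, and hazard λ_i = exp(x_i'β), the following sketch (standard notation, not taken from the paper) shows why the two likelihoods match:

```latex
% Exponential survival log-likelihood (right-censored data):
\ell_{\mathrm{surv}}(\beta) = \sum_i \left[ d_i \log \lambda_i - \lambda_i t_i \right],
\qquad \lambda_i = e^{x_i^\top \beta}.
% Poisson model with a log(time) offset, d_i ~ Poisson(mu_i), log mu_i = log t_i + x_i^T beta:
\ell_{\mathrm{Pois}}(\beta) = \sum_i \left[ d_i (\log t_i + x_i^\top \beta)
  - t_i e^{x_i^\top \beta} - \log d_i! \right]
  = \ell_{\mathrm{surv}}(\beta) + \sum_i \left[ d_i \log t_i - \log d_i! \right].
% The extra terms are free of beta, so the score equations and estimates coincide.
```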

  13. On the performance of exponential integrators for problems in magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Einkemmer, Lukas; Tokman, Mayya; Loffeld, John

    2017-02-01

    Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying the large scale behavior of laboratory and astrophysical plasmas. In many problems, the numerical solution of the MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step, variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study the performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as the Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
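
    EPIRK methods and the EPIC package are beyond a short sketch, but the core idea of exponential integration can be shown with the simplest member of the family, the exponential Euler method, on a small stiff semilinear system. Everything below (the test problem, step size, and matrix, which is assumed nonsingular) is an illustrative assumption.

```python
import numpy as np
from scipy.linalg import expm

def exponential_euler(A, g, y0, h, steps):
    """One-step exponential integrator for y' = A y + g(y):
    y_{n+1} = e^{hA} y_n + h * phi1(hA) g(y_n), phi1(z) = (e^z - 1)/z.
    The stiff linear part is treated exactly, so h is not stability-limited by A."""
    E = expm(h * A)
    n = len(y0)
    # phi1(hA) = (hA)^{-1} (e^{hA} - I), via a linear solve (A assumed nonsingular)
    phi1 = np.linalg.solve(h * A, E - np.eye(n))
    y = np.array(y0, dtype=float)
    out = [y.copy()]
    for _ in range(steps):
        y = E @ y + h * phi1 @ g(y)
        out.append(y.copy())
    return np.array(out)

# Stiff test problem: fast decay plus a mild nonlinearity
A = np.array([[-1000.0, 0.0], [0.0, -0.5]])
g = lambda y: np.array([0.0, np.sin(y[0])])
traj = exponential_euler(A, g, y0=[1.0, 1.0], h=0.1, steps=100)
```

    An explicit Euler step on this problem would need h below about 0.002 to stay stable; the exponential step takes h = 0.1 without difficulty, which is the computational saving the abstract refers to.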

  14. Inflationary dynamics with a smooth slow-roll to constant-roll era transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odintsov, S.D.; Oikonomou, V.K., E-mail: odintsov@ieec.uab.es, E-mail: v.k.oikonomou1979@gmail.com

    In this paper we investigate the implications of having a varying second slow-roll index on the canonical scalar field inflationary dynamics. We shall be interested in cases where the second slow-roll index can take small values and correspondingly large values, for limiting cases of the function that quantifies the variation of the second slow-roll index. As we demonstrate, this can naturally introduce a smooth transition between slow-roll and constant-roll eras. We discuss the theoretical implications of the mechanism we introduce and we use various illustrative examples in order to better understand the new features that the varying second slow-roll index introduces. In the examples we will present, the second slow-roll index has an exponential dependence on the scalar field, and in one of these cases, the slow-roll era corresponds to a type of α-attractor inflation. Finally, we briefly discuss how the combination of slow-roll and constant-roll may lead to non-Gaussianities in the primordial perturbations.

  15. Multi-Level Adaptive Techniques (MLAT) for singular-perturbation problems

    NASA Technical Reports Server (NTRS)

    Brandt, A.

    1978-01-01

    The multilevel (multigrid) adaptive technique, a general strategy of solving continuous problems by cycling between coarser and finer levels of discretization is described. It provides very fast general solvers, together with adaptive, nearly optimal discretization schemes. In the process, boundary layers are automatically either resolved or skipped, depending on a control function which expresses the computational goal. The global error decreases exponentially as a function of the overall computational work, in a uniform rate independent of the magnitude of the singular-perturbation terms. The key is high-order uniformly stable difference equations, and uniformly smoothing relaxation schemes.

  16. A Fuzzy-Based Control Method for Smoothing Power Fluctuations in Substations along High-Speed Railways

    NASA Astrophysics Data System (ADS)

    Sugio, Tetsuya; Yamamoto, Masayoshi; Funabiki, Shigeyuki

    The use of an SMES (Superconducting Magnetic Energy Storage) for smoothing power fluctuations in a railway substation has been discussed. This paper proposes a smoothing control method based on fuzzy reasoning for reducing the SMES capacity at substations along high-speed railways. The proposed smoothing control method comprises three countermeasures for reduction of the SMES capacity. The first countermeasure involves modification of rule 1 for smoothing out the fluctuating electric power to its average value. The other countermeasures involve the modification of the central value of the stored energy control in the SMES and revision of the membership function in rule 2 for reduction of the SMES capacity. The SMES capacity in the proposed smoothing control method is reduced by 49.5% when compared to that in the nonrevised control method. It is confirmed by computer simulations that the proposed control method is suitable for smoothing out power fluctuations in substations along high-speed railways and for reducing the SMES capacity.

  17. Simulated Annealing in the Variable Landscape

    NASA Astrophysics Data System (ADS)

    Hasegawa, Manabu; Kim, Chang Ju

    An experimental analysis is conducted to test whether the appropriate introduction of the smoothness-temperature schedule enhances the optimizing ability of the MASSS method, the combination of the Metropolis algorithm (MA) and the search-space smoothing (SSS) method. The test is performed on two types of random traveling salesman problems. The results show that the optimization performance of the MA is substantially improved by a single smoothing alone and slightly more by a single smoothing with cooling and by a de-smoothing process with heating. The performance is compared to that of the parallel tempering method and a clear advantage of the idea of smoothing is observed depending on the problem.

  18. Attractors of three-dimensional fast-rotating Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Trahe, Markus

    The three-dimensional (3-D) rotating Navier-Stokes equations describe the dynamics of rotating, incompressible, viscous fluids. In this work, they are considered with smooth, time-independent forces, and the original statements implied by the classical "Taylor-Proudman Theorem" of geophysics are rigorously proved. It is shown that fully developed turbulence of 3-D fast-rotating fluids is essentially characterized by turbulence of two-dimensional (2-D) fluids in terms of numbers of degrees of freedom. In this context, the 3-D nonlinear "resonant limit equations", which arise in a non-linear averaging process as the rotation frequency Ω → ∞, are studied and optimal (2-D-type) upper bounds for fractal box and Hausdorff dimensions of the global attractor as well as upper bounds for box dimensions of exponential attractors are determined. Then, the convergence of exponential attractors for the full 3-D rotating Navier-Stokes equations to exponential attractors for the resonant limit equations as Ω → ∞ in the sense of full Hausdorff-metric distances is established. This provides upper and lower semi-continuity of exponential attractors with respect to the rotation frequency and implies that the number of degrees of freedom (attractor dimension) of 3-D fast-rotating fluids is close to that of 2-D fluids. Finally, the algebraic-geometric structure of the Poincare curves, which control the resonances and small divisor estimates for partial differential equations, is further investigated; the 3-D nonlinear limit resonant operators are characterized by three-wave interactions governed by these curves. A new canonical transformation between those curves is constructed, with far-reaching consequences for the density of the latter.

  19. Category-Specific Comparison of Univariate Alerting Methods for Biosurveillance Decision Support

    PubMed Central

    Elbert, Yevgeniy; Hung, Vivian; Burkom, Howard

    2013-01-01

    Objective: For a multi-source decision support application, we sought to match univariate alerting algorithms to surveillance data types to optimize detection performance. Introduction: Temporal alerting algorithms commonly used in syndromic surveillance systems are often adjusted for data features such as cyclic behavior but are subject to overfitting or misspecification errors when applied indiscriminately. In a project for the Armed Forces Health Surveillance Center to enable multivariate decision support, we obtained 4.5 years of outpatient, prescription, and laboratory test records from all US military treatment facilities. A proof-of-concept project phase produced 16 events with multiple-evidence corroboration for comparison of alerting algorithms for detection performance. We used the representative streams from each data source to compare the sensitivity of six algorithms to injected spikes, and we used all data streams from the 16 known events to compare them for detection timeliness. Methods: The six methods compared were: (1) the Holt-Winters generalized exponential smoothing method; (2) an automated choice between daily methods, regression and an exponentially weighted moving average (EWMA); (3) an adaptive daily Shewhart-type chart; (4) an adaptive one-sided daily CUSUM; (5) an EWMA applied to 7-day means with a trend correction; and (6) a 7-day temporal scan statistic. Sensitivity testing: We conducted comparative sensitivity testing for categories of time series with similar scales and seasonal behavior. We added multiples of the standard deviation of each time series as single-day injects in separate algorithm runs. For each candidate method, we then used as a sensitivity measure the proportion of these runs for which the output of the algorithm was below alerting thresholds estimated empirically for each algorithm using simulated data streams. We identified the algorithm(s) whose sensitivity was most consistently high for each data category. For each syndromic query applied to each data source (outpatient, lab test orders, and prescriptions), 502 authentic time series were derived, one for each reporting treatment facility. Data categories were selected in order to group time series with similar expected algorithm performance: Median > 10; 0 < Median ≤ 10; Median = 0; lag-7 autocorrelation coefficient ≥ 0.2; and lag-7 autocorrelation coefficient < 0.2. Timeliness testing: For the timeliness testing, we avoided the artificiality of simulated signals by measuring alerting detection delays in the 16 corroborated outbreaks. The multiple time series from these events gave a total of 141 time series with outbreak intervals for timeliness testing. The following measures were computed to quantify timeliness of detection: the Median Detection Delay, the median number of days to detect the outbreak; and the Penalized Mean Detection Delay, the mean number of days to detect the outbreak, with outbreak misses penalized as 1 day plus the maximum detection time. Results: Based on the injection results, the Holt-Winters algorithm was most sensitive among time series with positive medians. The adaptive CUSUM and the Shewhart methods were most sensitive for data streams with median zero. Table 1 provides timeliness results using the 141 outbreak-associated streams on sparse (Median = 0) and non-sparse data categories.

    Table 1. Detection delay in days, by data median and method:

    Data median   Measure         Holt-Winters  Regression/EWMA  Adaptive Shewhart  Adaptive CUSUM  7-day trend-adj. EWMA  7-day temporal scan
    Median = 0    Median          3             2                4                  2               4.5                    2
    Median = 0    Penalized mean  7.2           7                6.6                6.2             7.3                    7.6
    Median > 0    Median          2             2                2.5                2               6                      4
    Median > 0    Penalized mean  6.1           7                7.2                7.1             7.7                    6.6

    The shortest detection delays in Table 1 mark the best-performing methods for the sparse and non-sparse data streams. The Holt-Winters method was again superior for non-sparse data. For data with Median = 0, the adaptive CUSUM was superior for a daily false-alarm probability of 0.01, but the Shewhart method was timelier for more liberal thresholds. Conclusions: Both kinds of detection performance analysis showed the method based on Holt-Winters exponential smoothing to be superior on non-sparse time series with day-of-week effects. The adaptive CUSUM and Shewhart methods proved optimal on sparse data and data without weekly patterns.
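
    The adaptive Shewhart and CUSUM variants used in the study include baseline re-estimation that this record does not detail; the sketch below shows only the plain one-sided CUSUM and EWMA charts those variants build on, with generic parameters (mu0, k, h, lam, and L are illustrative assumptions).

```python
import numpy as np

def one_sided_cusum(x, mu0, k=0.5, h=4.0):
    """Upper one-sided CUSUM on standardized counts:
    S_t = max(0, S_{t-1} + (x_t - mu0)/sd - k); alert when S_t > h."""
    sd = max(np.sqrt(mu0), 1.0)        # Poisson-like scale; floor guards sparse series
    s, alerts = 0.0, []
    for t, xt in enumerate(x):
        s = max(0.0, s + (xt - mu0) / sd - k)
        if s > h:
            alerts.append(t)
            s = 0.0                    # reset after alerting
    return alerts

def ewma_alerts(x, lam=0.4, L=3.0):
    """EWMA chart: e_t = lam*x_t + (1-lam)*e_{t-1}; alert when e_t exceeds
    the series mean by L asymptotic EWMA standard deviations."""
    mu, sd = np.mean(x), np.std(x) + 1e-9
    limit = mu + L * sd * np.sqrt(lam / (2.0 - lam))
    e, alerts = mu, []
    for t, xt in enumerate(x):
        e = lam * xt + (1.0 - lam) * e
        if e > limit:
            alerts.append(t)
    return alerts

# Sparse daily counts (median zero) with a late outbreak burst
rng = np.random.default_rng(7)
x = rng.poisson(0.3, 365)
x[300:305] += rng.poisson(4, 5)
print(one_sided_cusum(x, mu0=0.3), ewma_alerts(x))
```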

  20. Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters

    PubMed Central

    Landowne, David; Yuan, Bin; Magleby, Karl L.

    2013-01-01

    Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
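
    The authors' maximum-likelihood procedure operates on raw dwell times; as a simplified stand-in that captures the "start from many log-spaced time constants, then prune" strategy, the sketch below fits a binned dwell-time histogram by nonnegative least squares over a fixed grid and discards negligible components. The grid size, area tolerance, and use of NNLS are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def fit_exponential_sum(t, hist, n_grid=50, area_tol=1e-3):
    """Fit hist(t) ~ sum_i a_i * exp(-t / tau_i) with a_i >= 0 over a dense,
    logarithmically spaced grid of fixed time constants, then keep only the
    components whose fitted area is non-negligible."""
    taus = np.logspace(np.log10(t[1] - t[0]), np.log10(t[-1]), n_grid)
    A = np.exp(-t[:, None] / taus[None, :])
    amps, _ = nnls(A, hist)            # nonnegativity suppresses wild cancellation
    areas = amps * taus                # area of each component, a_i * tau_i
    keep = areas > area_tol * areas.sum()
    return taus[keep], amps[keep]

# Two-component test: tau = 2 and 20 (arbitrary units)
t = np.linspace(0.1, 100, 400)
hist = 5 * np.exp(-t / 2) + 1 * np.exp(-t / 20)
taus, amps = fit_exponential_sum(t, hist)
```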

  1. Application and evaluation of forecasting methods for municipal solid waste generation in an Eastern-European city.

    PubMed

    Rimaityte, Ingrida; Ruzgas, Tomas; Denafas, Gintaras; Racys, Viktoras; Martuzevicius, Dainius

    2012-01-01

    Forecasting the generation of municipal solid waste (MSW) in developing countries is often a challenging task due to the lack of data and the selection of a suitable forecasting method. This article aimed to select and evaluate several methods for MSW forecasting in a medium-sized Eastern European city (Kaunas, Lithuania) with a rapidly developing economy, with respect to affluence-related and seasonal impacts. The MSW generation was forecast with respect to the economic activity of the city (regression modelling) and using time series analysis. The modelling based on socio-economic indicators (regression implemented in the LCA-IWM model) showed particular sensitivity (deviation from actual data in the range from 2.2 to 20.6%) to external factors, such as the synergetic effects of affluence parameters or changes in the MSW collection system. For the time series analysis, the combination of autoregressive integrated moving average (ARIMA) and seasonal exponential smoothing (SES) techniques was found to be the most accurate (mean absolute percentage error equal to 6.5). The time series analysis method was very valuable for forecasting the weekly variation of waste generation data (r² > 0.87), but the forecast yearly increase should be verified against the data obtained by regression modelling. The methods and findings of this study may assist the experts, decision-makers, and scientists performing forecasts of MSW generation, especially in developing countries.

  2. Particle shape inhomogeneity and plasmon-band broadening of solar-control LaB6 nanoparticles

    NASA Astrophysics Data System (ADS)

    Machida, Keisuke; Adachi, Kenji

    2015-07-01

    The ensemble inhomogeneity of a dispersion of non-spherical LaB6 nanoparticles has been analyzed with Mie theory to account for the observed broad plasmon band. The LaB6 particle shape has been characterized using small-angle X-ray scattering (SAXS) and electron tomography (ET). The SAXS scattering intensity is found to follow a power law with exponent -3.10, indicating particle shapes ranging from disk toward sphere. The ET analysis disclosed a dually grouped distribution of the nanoparticle dispersion: one group is large-sized with a small aspect ratio and the other is small-sized with a scattered, high aspect ratio, reflecting the dual fragmentation modes during the milling process. Mie extinction calculations have been integrated for 100 000 particles of varying aspect ratio, which were produced randomly using the Box-Muller method. The Mie integration method has produced a broad and smooth absorption band expanded towards low energy, in remarkable agreement with experimental profiles when assuming a SAXS- and ET-derived shape distribution, i.e., a majority of disks with a small admixture of rods and spheres in the ensemble. The analysis envisages a high potential of LaB6 with further increased visible transparency and plasmon peak upon controlled particle shape and its distribution.
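
    The record mentions generating 100 000 random aspect ratios with the Box-Muller method. A minimal sketch of that sampling step follows; the mean, spread, and truncation are illustrative assumptions, not the paper's fitted distribution.

```python
import numpy as np

def box_muller(n, mean=2.0, sigma=0.8, rng=None):
    """Draw n normal variates via the Box-Muller transform:
    z = sqrt(-2 ln u1) * cos(2 pi u2), with u1, u2 uniform on (0, 1)."""
    rng = rng or np.random.default_rng(0)
    u1 = rng.uniform(1e-12, 1.0, n)    # avoid log(0)
    u2 = rng.uniform(0.0, 1.0, n)
    z = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)
    return mean + sigma * z

# 100 000 aspect ratios for the ensemble average; clip to physical values (> 0)
aspect_ratios = np.clip(box_muller(100_000), 0.05, None)
```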

  3. An adaptive segment method for smoothing lidar signal based on noise estimation

    NASA Astrophysics Data System (ADS)

    Wang, Yuzhao; Luo, Pingping

    2014-10-01

    An adaptive segmentation smoothing method (ASSM) is introduced in the paper to smooth the signal and suppress the noise. In the ASSM, the noise is defined as the 3σ of the background signal. An integer number N is defined for finding the changing positions in the signal curve. If the difference of two adjacent points is greater than 3Nσ, the position is recorded as an end point of a smoothing segment. All the end points detected in this way are recorded, and the curves between them are smoothed separately. In the traditional method, the end points of the smoothing windows in the signals are fixed. The ASSM creates changing end points in different signals, so the smoothing windows can be set adaptively. The windows are always set to half of the segment length, and then the average smoothing method is applied within the segments. An iterative process is required to reduce the end-point aberration effect of the average smoothing method; two or three iterations are enough. In the ASSM, the signals are smoothed in the spatial domain rather than the frequency domain, which means frequency-domain disturbance is avoided. A lidar echo was simulated in the experimental work. The echo was supposed to be created by a space-borne lidar (e.g., CALIOP), and white Gaussian noise was added to the echo to represent the random noise resulting from the environment and the detector. The novel method, ASSM, was applied to the noisy echo to filter the noise. In the test, N was set to 3 and the iteration count to two. The results show that the signal can be smoothed adaptively by the ASSM, but N and the iteration count might need to be optimized when the ASSM is applied to a different lidar.
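
    A minimal sketch of the procedure as described, assuming a moving-average smoother per segment and the paper's choices N = 3 and two iterations; the synthetic echo and noise level are illustrative, and details such as the exact boundary handling are assumptions.

```python
import numpy as np

def assm(signal, sigma, N=3, iterations=2):
    """Adaptive segmentation smoothing (sketch): split the curve wherever the
    jump between adjacent samples exceeds 3*N*sigma, then smooth each segment
    separately with a moving average whose window is half the segment length."""
    jumps = np.where(np.abs(np.diff(signal)) > 3 * N * sigma)[0] + 1
    edges = np.concatenate(([0], jumps, [len(signal)]))
    out = signal.astype(float).copy()
    for _ in range(iterations):        # iterating reduces end-point aberration
        for a, b in zip(edges[:-1], edges[1:]):
            win = max((b - a) // 2, 1)
            if win > 1:
                kernel = np.ones(win) / win
                # mode="same" keeps segment length; edge bias shrinks per iteration
                out[a:b] = np.convolve(out[a:b], kernel, mode="same")
    return out

# Simulated lidar-like echo: decaying profile with a sharp layer, plus noise
rng = np.random.default_rng(2)
z = np.linspace(0, 30, 600)
echo = np.exp(-z / 10) + 2.0 * (np.abs(z - 12) < 0.5) + 0.02 * rng.standard_normal(600)
smoothed = assm(echo, sigma=0.02, N=3, iterations=2)
```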

  4. Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking

    NASA Astrophysics Data System (ADS)

    Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.

    2009-08-01

    The Interacting Multiple Model (IMM) estimator has been proven to be effective in tracking agile targets. Smoothing, or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of target states. Various methods have been proposed for multiple model smoothing in the literature. In this paper, a new smoothing method, which involves forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM, is proposed. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode-conditioned smoother uses a standard Kalman smoothing recursion. The resulting algorithm provides improved but delayed estimates of target states. Simulation studies are performed to demonstrate the improved performance with a maneuvering target scenario. The comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique to account for model switching in smoothing is key to improving the performance.

  5. An accurate and efficient method for evaluating the kernel of the integral equation relating pressure to normalwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    This paper describes an accurate, economical method for generating approximations to the kernel of the integral equation relating unsteady pressure to normalwash in nonplanar flow. The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential approximations and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. Coefficients for 8, 12, 24, and 72 term approximations are tabulated in the report. Also, since the method is automated, it can be used to generate approximations to attain any desired trade-off between accuracy and computing cost.

  6. Method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1972-01-01

    Two computer programs, developed according to two general types of exponential models for conducting nonlinear exponential regression analysis, are described. A least squares procedure is used in which the nonlinear problem is linearized by expanding in a Taylor series. The program is written in FORTRAN 5 for the Univac 1108 computer.
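
    The record names the technique (linearize the exponential model via a Taylor expansion, then solve by least squares), which is the Gauss-Newton iteration. A minimal sketch for the single-exponential model y ≈ a·exp(b·x) follows; the model form, starting values, and test data are illustrative assumptions, since the report's exact models are not given here.

```python
import numpy as np

def gauss_newton_exp(x, y, a=1.0, b=-0.5, iters=20):
    """Fit y ~ a * exp(b * x) by Gauss-Newton: linearize the model in (a, b)
    with a first-order Taylor expansion and solve the resulting linear
    least-squares problem at each iteration."""
    for _ in range(iters):
        f = a * np.exp(b * x)
        J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])  # df/da, df/db
        delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)
        a, b = a + delta[0], b + delta[1]
        if np.linalg.norm(delta) < 1e-10:
            break
    return a, b

x = np.linspace(0, 10, 50)
y = 3.0 * np.exp(-0.7 * x) + 0.01 * np.random.default_rng(3).standard_normal(50)
a, b = gauss_newton_exp(x, y)   # expect roughly a ~ 3, b ~ -0.7
```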

  7. Approximation of the exponential integral (well function) using sampling methods

    NASA Astrophysics Data System (ADS)

    Baalousha, Husam Musa

    2015-04-01

    The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by the Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology, such as the leaky aquifer integral.
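
    A minimal sketch of the sampling idea, assuming the standard identity E1(u) = ∫₀¹ exp(−u/s)/s ds and plain one-dimensional Latin hypercube (stratified) sampling; scipy's exp1 stands in for the paper's Mathematica benchmark, and the sample size is arbitrary.

```python
import numpy as np
from scipy.special import exp1   # benchmark value of E1(u)

def well_function_lhs(u, n=10000, seed=0):
    """Estimate the well function W(u) = E1(u) = int_0^1 exp(-u/s)/s ds
    by Latin hypercube sampling: one uniform draw per stratum of [0, 1]."""
    rng = np.random.default_rng(seed)
    s = (np.arange(n) + rng.uniform(1e-12, 1.0, n)) / n  # one point per equal stratum
    return np.mean(np.exp(-u / s) / s)

u = 0.5
print(well_function_lhs(u), exp1(u))   # both ~ 0.5598
```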

  8. W-transform for exponential stability of second order delay differential equations without damping terms.

    PubMed

    Domoshnitsky, Alexander; Maghakyan, Abraham; Berezansky, Leonid

    2017-01-01

    In this paper, a method is proposed for studying the stability of the equation [Formula: see text], which does not explicitly include the first derivative. We demonstrate that although the corresponding ordinary differential equation [Formula: see text] is not exponentially stable, the delay equation can be exponentially stable.

  9. Parameterization of photon beam dosimetry for a linear accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lebron, Sharon; Barraclough, Brendan; Lu, Bo

    2016-02-15

    Purpose: In radiation therapy, accurate data acquisition of photon beam dosimetric quantities is important for (1) beam modeling data input into a treatment planning system (TPS), (2) comparing measured and TPS modeled data, (3) the quality assurance process of a linear accelerator's (Linac) beam characteristics, (4) the establishment of a standard data set for comparison with other data, etc. Parameterization of the photon beam dosimetry creates a data set that is portable and easy to implement for different applications such as those previously mentioned. The aim of this study is to develop methods to parameterize photon beam dosimetric quantities, including percentage depth doses (PDDs), profiles, and total scatter output factors (S_cp). Methods: S_cp, PDDs, and profiles for different field sizes, depths, and energies were measured for a Linac using a cylindrical 3D water scanning system. All data were smoothed for the analysis, and profile data were also centered, symmetrized, and geometrically scaled. The S_cp data were analyzed using an exponential function. The inverse square factor was removed from the PDD data before modeling, and the data were subsequently analyzed using exponential functions. For profile modeling, one half-side of the profile was divided into three regions described by exponential, sigmoid, and Gaussian equations. All of the analytical functions are field size, energy, depth, and, in the case of profiles, scan direction specific. The model's parameters were determined using the minimal amount of measured data necessary. The model's accuracy was evaluated via the calculation of absolute differences between the measured (processed) and calculated data in low gradient regions and distance-to-agreement analysis in high gradient regions. Finally, the results of dosimetric quantities obtained by the fitted models for a different machine were also assessed. Results: All of the differences in the PDDs' buildup and the profiles' penumbra regions were less than 2 and 0.5 mm, respectively. The differences in the low gradient regions were 0.20% ± 0.20% (<1% for all) and 0.50% ± 0.35% (<1% for all) for PDDs and profiles, respectively. For S_cp data, all of the absolute differences were less than 0.5%. Conclusions: This novel analytical model with minimum measurement requirements was proved to accurately calculate PDDs, profiles, and S_cp for different field sizes, depths, and energies.

  10. Linear and exponential TAIL-PCR: a method for efficient and quick amplification of flanking sequences adjacent to Tn5 transposon insertion sites.

    PubMed

    Jia, Xianbo; Lin, Xinjian; Chen, Jichen

    2017-11-02

    Current genome walking methods are very time consuming, and many produce non-specific amplification products. To amplify the flanking sequences that are adjacent to Tn5 transposon insertion sites in Serratia marcescens FZSF02, we developed a genome walking method based on TAIL-PCR. This PCR method added a 20-cycle linear amplification step before the exponential amplification step to increase the concentration of the target sequences. Products of the linear amplification and the exponential amplification were diluted 100-fold to decrease the concentration of the templates that cause non-specific amplification. Fast DNA polymerase with a high extension speed was used in this method, and an amplification program was used to rapidly amplify long specific sequences. With this linear and exponential TAIL-PCR (LETAIL-PCR), we successfully obtained products larger than 2 kb from Tn5 transposon insertion mutant strains within 3 h. This method can be widely used in genome walking studies to amplify unknown sequences that are adjacent to known sequences.

  11. Square Root Graphical Models: Multivariate Generalizations of Univariate Exponential Families that Permit Positive Dependencies

    PubMed Central

    Inouye, David I.; Ravikumar, Pradeep; Dhillon, Inderjit S.

    2016-01-01

    We develop Square Root Graphical Models (SQR), a novel class of parametric graphical models that provides multivariate generalizations of univariate exponential family distributions. Previous multivariate graphical models (Yang et al., 2015) did not allow positive dependencies for the exponential and Poisson generalizations. However, in many real-world datasets, variables clearly have positive dependencies. For example, the airport delay time in New York—modeled as an exponential distribution—is positively related to the delay time in Boston. With this motivation, we give an example of our model class derived from the univariate exponential distribution that allows for almost arbitrary positive and negative dependencies with only a mild condition on the parameter matrix—a condition akin to the positive definiteness of the Gaussian covariance matrix. Our Poisson generalization allows for both positive and negative dependencies without any constraints on the parameter values. We also develop parameter estimation methods using node-wise regressions with ℓ1 regularization and likelihood approximation methods using sampling. Finally, we demonstrate our exponential generalization on a synthetic dataset and a real-world dataset of airport delay times. PMID:27563373

  12. An Exponential Finite Difference Technique for Solving Partial Differential Equations. M.S. Thesis - Toledo Univ., Ohio

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.

    1987-01-01

    An exponential finite difference algorithm, as first presented by Bhattacharya for one-dimensional, steady-state heat conduction in Cartesian coordinates, has been extended. The finite difference algorithm developed was used to solve the diffusion equation in one-dimensional cylindrical coordinates and was applied to two- and three-dimensional problems in Cartesian coordinates. The method was also used to solve nonlinear partial differential equations in one (Burgers' equation) and two (boundary layer equations) dimensional Cartesian coordinates. Predicted results were compared to exact solutions where available, or to results obtained by other numerical methods. It was found that the exponential finite difference method produced results that were more accurate than those obtained by other numerical methods, especially during the initial transient portion of the solution. Other applications made using the exponential finite difference technique included unsteady one-dimensional heat transfer with temperature-varying thermal conductivity and the development of the temperature field in a laminar Couette flow.

  13. Exponential finite difference technique for solving partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Handschuh, R.F.

    1987-01-01

    An exponential finite difference algorithm, as first presented by Bhattacharya for one-dimensional, steady-state heat conduction in Cartesian coordinates, has been extended. The finite difference algorithm developed was used to solve the diffusion equation in one-dimensional cylindrical coordinates and was applied to two- and three-dimensional problems in Cartesian coordinates. The method was also used to solve nonlinear partial differential equations in one (Burgers' equation) and two (boundary layer equations) dimensional Cartesian coordinates. Predicted results were compared to exact solutions where available, or to results obtained by other numerical methods. It was found that the exponential finite difference method produced results that were more accurate than those obtained by other numerical methods, especially during the initial transient portion of the solution. Other applications made using the exponential finite difference technique included unsteady one-dimensional heat transfer with temperature-varying thermal conductivity and the development of the temperature field in a laminar Couette flow.

  14. Method for exponentiating in cryptographic systems

    DOEpatents

    Brickell, Ernest F.; Gordon, Daniel M.; McCurley, Kevin S.

    1994-01-01

    An improved cryptographic method utilizing exponentiation is provided which has the advantage of reducing the number of multiplications required to determine the legitimacy of a message or user. The basic method comprises the steps of selecting a key from a preapproved group of integer keys g; exponentiating the key by an integer value e, where e represents a digital signature, to generate a value g^e; transmitting the value g^e to a remote facility by a communications network; receiving the value g^e at the remote facility; and verifying the digital signature as originating from the legitimate user. The exponentiating step comprises the steps of initializing a plurality of memory locations with a plurality of values g^(x_i); computing … The United States Government has rights in this invention pursuant to Contract No. DE-AC04-76DP00789 between the Department of Energy and AT&T Company.
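
    The patent's precomputation scheme stores powers g^(x_i) so that later exponentiations cost few multiplications. The sketch below shows the simplest variant of that idea, a fixed-base table of repeated squarings; the modulus, base, and table size are illustrative assumptions, not the patented parameterization.

```python
def precompute_powers(g, p, bits):
    """Precompute g^(2^i) mod p for i = 0..bits-1 (done once per base g)."""
    table = [g % p]
    for _ in range(1, bits):
        table.append(table[-1] * table[-1] % p)
    return table

def fixed_base_pow(e, table, p):
    """g^e mod p using only multiplications for the set bits of e;
    the squarings were paid for once in the precomputation."""
    result, i = 1, 0
    while e:
        if e & 1:
            result = result * table[i] % p
        e >>= 1
        i += 1
    return result

p = 2**127 - 1                           # a convenient Mersenne prime for the demo
table = precompute_powers(5, p, 128)
assert fixed_base_pow(65537, table, p) == pow(5, 65537, p)
```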

  15. A smoothed two- and three-dimensional interface reconstruction method

    DOE PAGES

    Mosso, Stewart; Garasi, Christopher; Drake, Richard

    2008-04-22

    The Patterned Interface Reconstruction algorithm reduces the discontinuity between material interfaces in neighboring computational elements. This smoothing improves the accuracy of the reconstruction for smooth bodies. The method can be used in two- and three-dimensional Cartesian and unstructured meshes. Planar interfaces will be returned for planar volume fraction distributions. Finally, the algorithm is second-order accurate for smooth volume fraction distributions.

  16. Spline-Based Smoothing of Airfoil Curvatures

    NASA Technical Reports Server (NTRS)

    Li, W.; Krist, S.

    2008-01-01

    Constrained fitting for airfoil curvature smoothing (CFACS) is a spline-based method of interpolating airfoil surface coordinates (and, concomitantly, airfoil thicknesses) between specified discrete design points so as to obtain smoothing of surface-curvature profiles in addition to basic smoothing of surfaces. CFACS was developed in recognition of the fact that the performance of a transonic airfoil is directly related to both the curvature profile and the smoothness of the airfoil surface. Older methods of interpolation of airfoil surfaces involve various compromises between smoothing of surfaces and exact fitting of surfaces to specified discrete design points. While some of the older methods take curvature profiles into account, they nevertheless sometimes yield unfavorable results, including curvature oscillations near end points and substantial deviations from desired leading-edge shapes. In CFACS, as in most of the older methods, one seeks a compromise between smoothing and exact fitting. Unlike in the older methods, the airfoil surface is modified as little as possible from its original specified form and, instead, is smoothed in such a way that the curvature profile becomes a smooth fit of the curvature profile of the original airfoil specification. CFACS involves a combination of rigorous mathematical modeling and knowledge-based heuristics. Rigorous mathematical formulation provides assurance of removal of undesirable curvature oscillations with minimum modification of the airfoil geometry. Knowledge-based heuristics bridge the gap between theory and designers' best practices. In CFACS, one of the measures of the deviation of an airfoil surface from smoothness is the sum of squares of the jumps in the third derivatives of a cubic-spline interpolation of the airfoil data. This measure is incorporated into a formulation for minimizing an overall deviation-from-smoothness measure of the airfoil data within a specified fitting error tolerance. CFACS has been extensively tested on a number of supercritical airfoil data sets generated by inverse design and optimization computer programs. All of the smoothing results show that CFACS is able to generate unbiased smooth fits of curvature profiles, trading small modifications of geometry for increased curvature smoothness by eliminating curvature oscillations and bumps.

  17. An approach for spherical harmonic analysis of non-smooth data

    NASA Astrophysics Data System (ADS)

    Wang, Hansheng; Wu, Patrick; Wang, Zhiyong

    2006-12-01

    A method is proposed to evaluate the spherical harmonic coefficients of a global or regional, non-smooth, observable dataset sampled on an equiangular grid. The method is based on an integration strategy using new recursion relations. Because a bilinear function is used to interpolate points within the grid cells, this method is suitable for non-smooth data; the slope of the data may be piecewise continuous, with extreme changes at the boundaries. In order to validate the method, the coefficients of an axisymmetric model are computed and compared with the derived analytical expressions. Numerical results show that this method is indeed reasonable for non-smooth models, and that the maximum degree for spherical harmonic analysis should be empirically determined by several factors, including the model resolution and the degree of non-smoothness in the dataset, and it can be several times larger than the total number of latitudinal grid points. It is also shown that this method is appropriate for the approximate analysis of a smooth dataset. Moreover, this paper provides the program flowchart and an internet address where the FORTRAN code with program specifications is made available.

  18. Exponential stability of stochastic complex networks with multi-weights based on graph theory

    NASA Astrophysics Data System (ADS)

    Zhang, Chunmei; Chen, Tianrui

    2018-04-01

    In this paper, a novel approach to the exponential stability of stochastic complex networks with multi-weights is investigated by means of the graph-theoretical method. New sufficient conditions are provided to ascertain the moment exponential stability and almost sure exponential stability of stochastic complex networks with multiple weights. It is noted that our stability results are closely related to the multi-weights and the intensity of the stochastic disturbance. Numerical simulations are also presented to substantiate the theoretical results.

  19. Automated time series forecasting for biosurveillance.

    PubMed

    Burkom, Howard S; Murphy, Sean Patrick; Shmueli, Galit

    2007-09-30

    For robust detection performance, traditional control chart monitoring for biosurveillance is based on input data free of trends, day-of-week effects, and other systematic behaviour. Time series forecasting methods may be used to remove this behaviour by subtracting forecasts from observations to form residuals for algorithmic input. We describe three forecast methods and compare their predictive accuracy on each of 16 authentic syndromic data streams. The methods are (1) a non-adaptive regression model using a long historical baseline, (2) an adaptive regression model with a shorter, sliding baseline, and (3) the Holt-Winters method for generalized exponential smoothing. Criteria for comparing the forecasts were the root-mean-square error, the median absolute per cent error (MedAPE), and the median absolute deviation. The median-based criteria showed best overall performance for the Holt-Winters method. The MedAPE measures over the 16 test series averaged 16.5, 11.6, and 9.7 for the non-adaptive regression, adaptive regression, and Holt-Winters methods, respectively. The non-adaptive regression forecasts were degraded by changes in the data behaviour in the fixed baseline period used to compute model coefficients. The mean-based criterion was less conclusive because of the effects of poor forecasts on a small number of calendar holidays. The Holt-Winters method was also most effective at removing serial autocorrelation, with most 1-day-lag autocorrelation coefficients below 0.15. The forecast methods were compared without tuning them to the behaviour of individual series. We achieved improved predictions with such tuning of the Holt-Winters method, but practical use of such improvements for routine surveillance will require reliable data classification methods.
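
    Holt-Winters generalized exponential smoothing, named here as the best performer, is easy to sketch. The additive form below with weekly period m = 7 is a generic textbook variant; the initialization, smoothing constants, and synthetic series are illustrative assumptions, not the study's tuned configuration.

```python
import numpy as np

def holt_winters_additive(x, m=7, alpha=0.4, beta=0.05, gamma=0.2):
    """Additive Holt-Winters with period m (m = 7 for day-of-week effects).
    Returns one-step-ahead forecasts; the residuals x - forecast are the
    detrended, deseasonalized input for control-chart monitoring."""
    x = np.asarray(x, dtype=float)
    level = x[:m].mean()
    trend = (x[m:2 * m].mean() - x[:m].mean()) / m
    season = x[:m] - level                  # crude initial seasonal factors
    fcast = np.empty(len(x))
    for t in range(len(x)):
        s = season[t % m]
        fcast[t] = level + trend + s
        new_level = alpha * (x[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % m] = gamma * (x[t] - new_level) + (1 - gamma) * s
        level = new_level
    return fcast

# Synthetic syndromic stream: weekly cycle, slow trend, Poisson noise
rng = np.random.default_rng(4)
t = np.arange(730)
x = 50 + 0.02 * t + 8 * np.sin(2 * np.pi * t / 7) + rng.poisson(5, 730)
residuals = x - holt_winters_additive(x)    # should be nearly trend- and cycle-free
```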

  20. Analysis of the Tikhonov regularization to retrieve thermal conductivity depth-profiles from infrared thermography data

    NASA Astrophysics Data System (ADS)

    Apiñaniz, Estibaliz; Mendioroz, Arantza; Salazar, Agustín; Celorrio, Ricardo

    2010-09-01

    We analyze the ability of the Tikhonov regularization to retrieve different shapes of in-depth thermal conductivity profiles, usually encountered in hardened materials, from surface temperature data. Exponential, oscillating, and sigmoidal profiles are studied. By performing theoretical experiments with added white noise, the influence of the order of the Tikhonov functional and of the parameters that need to be tuned to carry out the inversion is investigated. The analysis shows that the Tikhonov regularization is very well suited to reconstructing smooth profiles but fails when the conductivity exhibits steep slopes. We check a natural alternative regularization, the total variation functional, which gives much better results for sigmoidal profiles. Accordingly, a strategy to deal with real data is proposed in which we introduce this total variation regularization. This regularization is applied to the inversion of real data corresponding to a case-hardened AISI 1018 steel plate, giving much better anticorrelation of the retrieved conductivity with microindentation test data than the Tikhonov regularization. The results suggest that this is a promising way to improve the reliability of local inversion methods.
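
    The paper's thermal forward model is not reproduced in this record, so the sketch below only illustrates generic Tikhonov regularization: minimize ||Ax − b||² + λ²||Lx||² via an augmented least-squares system. A first-order L penalizes steep slopes, which is exactly why sigmoidal (step-like) profiles come out over-smoothed. The blur operator, noise level, and λ are illustrative assumptions.

```python
import numpy as np

def tikhonov(A, b, lam, order=1):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 with L the identity (order 0)
    or a first-difference operator (order 1), via the augmented system."""
    n = A.shape[1]
    L = np.eye(n) if order == 0 else np.diff(np.eye(n), axis=0)
    A_aug = np.vstack([A, lam * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Mildly ill-posed deconvolution: Gaussian blur of a smooth profile plus noise
n = 80
z = np.linspace(0, 1, n)
A = np.exp(-0.5 * ((z[:, None] - z[None, :]) / 0.05) ** 2)
x_true = 1 + 0.5 * np.sin(3 * z)            # smooth conductivity-like profile
rng = np.random.default_rng(5)
b = A @ x_true + 0.01 * rng.standard_normal(n)
x_rec = tikhonov(A, b, lam=0.1, order=1)
```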

  1. Solutions for transients in arbitrarily branching cables: III. Voltage clamp problems.

    PubMed

    Major, G

    1993-07-01

    Branched cable voltage recording and voltage clamp analytical solutions derived in two previous papers are used to explore practical issues concerning voltage clamp. Single exponentials can be fitted reasonably well to the decay phase of clamped synaptic currents, although they contain many underlying components. The effective time constant depends on the fit interval. The smoothing effects on synaptic clamp currents of dendritic cables and series resistance are explored with a single cylinder + soma model, for inputs with different time courses. "Soma" and "cable" charging currents cannot be separated easily when the soma is much smaller than the dendrites. Subtractive soma capacitance compensation and series resistance compensation are discussed. In a hippocampal CA1 pyramidal neurone model, voltage control at most dendritic sites is extremely poor. Parameter dependencies are illustrated. The effects of series resistance compound those of dendritic cables and depend on the "effective capacitance" of the cell. Plausible combinations of parameters can cause order-of-magnitude distortions to clamp current waveform measures of simulated Schaeffer collateral inputs. These voltage clamp problems are unlikely to be solved by the use of switch clamp methods.

  2. A method for manufacturing superior set yogurt under reduced oxygen conditions.

    PubMed

    Horiuchi, H; Inoue, N; Liu, E; Fukui, M; Sasaki, Y; Sasaki, T

    2009-09-01

    The yogurt starters Lactobacillus delbrueckii ssp. bulgaricus and Streptococcus thermophilus are well-known facultatively anaerobic bacteria that can grow in oxygenated environments. We found that they removed dissolved oxygen (DO) in a yogurt mix as the fermentation progressed and that they began to produce acid actively after the DO concentration in the yogurt mix was reduced to 0 mg/kg, suggesting that the DO retarded the production of acid. Yogurt fermentation was carried out at 43 or 37 degrees C both after the DO reduction treatment and without prior treatment. Nitrogen gas was mixed and dispersed into the yogurt mix after inoculation with yogurt starter culture to reduce the DO concentration in the yogurt mix. The treatment that reduced DO concentration in the yogurt mix to approximately 0 mg/kg beforehand caused the starter culture LB81 used in this study to enter into the exponential growth phase earlier. Furthermore, the combination of reduced DO concentration in the yogurt mix beforehand and incubation at a lower temperature (37 degrees C) resulted in a superior set yogurt with a smooth texture and strong curd structure.

  3. Exponential-fitted methods for integrating stiff systems of ordinary differential equations: Applications to homogeneous gas-phase chemical kinetics

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.

    1984-01-01

    Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Pade (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order, polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Pade approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.

  4. The Halo mass function from Excursion Set Theory. II. The Diffusing Barrier

    NASA Astrophysics Data System (ADS)

    Maggiore, Michele; Riotto, Antonio

    2010-07-01

    In excursion set theory, the computation of the halo mass function is mapped into a first-passage time process in the presence of a barrier, which in the spherical collapse model is a constant and in the ellipsoidal collapse model is a fixed function of the variance of the smoothed density field. However, N-body simulations show that dark matter halos grow through a mixture of smooth accretion, violent encounters, and fragmentations, and modeling halo collapse as spherical, or even as ellipsoidal, is a significant oversimplification. In addition, the very definition of what is a dark matter halo, both in N-body simulations and observationally, is a difficult problem. We propose that some of the physical complications inherent to a realistic description of halo formation can be included in the excursion set theory framework, at least at an effective level, by taking into account that the critical value for collapse is not a fixed constant δ_c, as in the spherical collapse model, nor a fixed function of the variance σ of the smoothed density field, as in the ellipsoidal collapse model, but rather is itself a stochastic variable, whose scatter reflects a number of complicated aspects of the underlying dynamics. Solving the first-passage time problem in the presence of a diffusing barrier, we find that the exponential factor in the Press-Schechter mass function changes from exp(−δ_c²/2σ²) to exp(−a δ_c²/2σ²), where a = 1/(1 + D_B) and D_B is the diffusion coefficient of the barrier. The numerical value of D_B, and therefore the corresponding value of a, depends among other things on the algorithm used for identifying halos. We discuss the physical origin of the stochasticity of the barrier and, from recent N-body simulations that studied the properties of the collapse barrier, we deduce a value D_B ≈ 0.25. Our model then predicts a ≈ 0.80, in excellent agreement with the exponential falloff of the mass function found in N-body simulations, for the same halo definition. Combining this result with the non-Markovian corrections computed in Paper I of this series, we derive an analytic expression for the halo mass function for Gaussian fluctuations and we compare it with N-body simulations.

  5. Global Patch Matching

    NASA Astrophysics Data System (ADS)

    Huang, X.; Hu, K.; Ling, X.; Zhang, Y.; Lu, Z.; Zhou, G.

    2017-09-01

    This paper introduces a novel global patch matching method that focuses on how to remove fronto-parallel bias and obtain continuous smooth surfaces, assuming that the scenes covered by the stereos are piecewise continuous. Firstly, the simple linear iterative clustering (SLIC) method is used to segment the base image into a series of patches. Then, a global energy function, which consists of a data term and a smoothness term, is built on the patches. The data term is the second-order Taylor expansion of the correlation coefficients, and the smoothness term is built by combining connectivity and coplanarity constraints. Finally, the global energy function is obtained by combining the data term and the smoothness term. We rewrite the global energy function as a quadratic matrix function and use least squares methods to obtain the optimal solution. Experiments on the Adirondack and Motorcycle stereos of the Middlebury benchmark show that the proposed method can remove fronto-parallel bias effectively and produce continuous smooth surfaces.

  6. A new parametric method to smooth time-series data of metabolites in metabolic networks.

    PubMed

    Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide

    2016-12-01

    Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
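
    A minimal single-metabolite illustration of the idea (constant influx and a simple power-law efflux, fitted so that the smoothing curve satisfies the mass balance) is sketched below. The function names, the use of scipy's solve_ivp/curve_fit, and the synthetic data are assumptions for illustration; this is not the authors' stepwise estimator.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import curve_fit

        def smoothed(t, k_in, alpha, h, x0):
            # dX/dt = k_in - alpha * X**h : S-system-type equation with a
            # power-law efflux term, integrated from X(0) = x0
            sol = solve_ivp(lambda s, x: k_in - alpha * x**h, (t[0], t[-1]),
                            [x0], t_eval=t, rtol=1e-8)
            return sol.y[0]

        t = np.linspace(0, 10, 25)
        true = smoothed(t, 1.0, 0.4, 1.5, 0.1)
        noisy = true * (1 + 0.05 * np.random.randn(t.size))  # multiplicative noise

        p, _ = curve_fit(smoothed, t, noisy, p0=[0.8, 0.3, 1.0, 0.1])
        fit = smoothed(t, *p)   # smoothing curve consistent with the mass balance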

  7. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the number of exact analyses, which involve computationally expensive finite element analysis.

  8. The Analysis of Fluorescence Decay by a Method of Moments

    PubMed Central

    Isenberg, Irvin; Dyson, Robert D.

    1969-01-01

    The fluorescence decay of the excited state of most biopolymers, and biopolymer conjugates and complexes, is not, in general, a simple exponential. The method of moments is used to establish a means of analyzing such multi-exponential decays. The method is tested by the use of computer simulated data, assuming that the limiting error is determined by noise generated by a pseudorandom number generator. Multi-exponential systems with relatively closely spaced decay constants may be successfully analyzed. The analyses show the requirements, in terms of precision, that data must meet. The results may be used both as an aid in the design of equipment and in the analysis of data subsequently obtained. PMID:5353139

  9. A new smoothing modified three-term conjugate gradient method for [Formula: see text]-norm minimization problem.

    PubMed

    Du, Shouqiang; Chen, Miao

    2018-01-01

    We consider a kind of nonsmooth optimization problems with [Formula: see text]-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The method is based on the Polak-Ribière-Polyak conjugate gradient method. Because the Polak-Ribière-Polyak method has good numerical properties, the proposed method possesses the sufficient descent property without any line search, and it is also proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.
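
    The smoothing step itself can be sketched as follows (an illustration of the general technique only, not the authors' three-term conjugate gradient scheme): the nonsmooth penalty |x| is replaced by the smooth surrogate sqrt(x^2 + μ^2), after which an ordinary gradient method applies. The objective, data, and step size are assumptions.

        import numpy as np

        def smoothed_objective_grad(x, A, b, lam, mu):
            # F(x) = 0.5*||Ax - b||^2 + lam * sum(sqrt(x_i^2 + mu^2))
            r = A @ x - b
            g = A.T @ r + lam * x / np.sqrt(x**2 + mu**2)
            return 0.5 * r @ r + lam * np.sum(np.sqrt(x**2 + mu**2)), g

        rng = np.random.default_rng(0)
        A, b = rng.standard_normal((40, 100)), rng.standard_normal(40)
        x, lam, mu, step = np.zeros(100), 0.1, 1e-3, 3e-3
        for _ in range(3000):              # plain gradient descent on the
            f, g = smoothed_objective_grad(x, A, b, lam, mu)  # smoothed problem
            x -= step * g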

  10. Suppression of stochastic pulsation in laser-plasma interaction by smoothing methods

    NASA Astrophysics Data System (ADS)

    Hora, Heinrich; Aydin, Meral

    1992-04-01

    The control of the very complex behavior of a plasma with laser interaction by smoothing with induced spatial incoherence or other methods was related to improving the lateral uniformity of the irradiation. While this is important, it is shown from numerical hydrodynamic studies that the very strong temporal pulsation (stuttering) will mostly be suppressed by these smoothing methods too.

  11. Life prediction for high temperature low cycle fatigue of two kinds of titanium alloys based on exponential function

    NASA Astrophysics Data System (ADS)

    Mu, G. Y.; Mi, X. Z.; Wang, F.

    2018-01-01

    The high temperature low cycle fatigue tests of TC4 titanium alloy and TC11 titanium alloy are carried out under strain control. The relationships between cyclic stress and life and between strain and life are analyzed. The high temperature low cycle fatigue life prediction model of the two titanium alloys is established by using the Manson-Coffin method. The relationship between the number of reversals to failure and the plastic strain range is nonlinear in double logarithmic coordinates, whereas the Manson-Coffin method assumes a linear relation; a certain prediction error is therefore bound to arise when the Manson-Coffin method is used. In order to solve this problem, a new method based on an exponential function is proposed. The results show that the fatigue life of the two titanium alloys can be predicted accurately and effectively by both methods, with prediction accuracy within a ±1.83 times scatter band. The new exponential-function method proves more effective and accurate than the Manson-Coffin method for both alloys, giving fatigue life predictions with smaller standard deviation and scatter. The life prediction results of both methods are better for TC4 titanium alloy than for TC11 titanium alloy.
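
    For reference, the baseline Manson-Coffin relation Δε_p/2 = ε'_f (2N_f)^c is linear in log-log coordinates, so its constants can be obtained by linear regression; the sketch below does exactly that on illustrative data (the strain and life values are assumptions, not the paper's measurements).

        import numpy as np

        two_Nf = np.array([2e2, 1e3, 5e3, 2e4, 1e5])             # reversals to failure
        eps_p = np.array([8e-3, 4.5e-3, 2.6e-3, 1.5e-3, 8e-4])   # plastic strain amplitude

        # log(eps_p) = log(eps_f') + c * log(2Nf): a straight line in log-log space
        c, log_eps_f = np.polyfit(np.log(two_Nf), np.log(eps_p), 1)
        eps_f = np.exp(log_eps_f)
        predicted = eps_f * two_Nf**c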

  12. Statistical assessment of bi-exponential diffusion weighted imaging signal characteristics induced by intravoxel incoherent motion in malignant breast tumors

    PubMed Central

    Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.

    2016-01-01

    Background The purpose of this study is to statistically assess whether bi-exponential intravoxel incoherent motion (IVIM) model better characterizes diffusion weighted imaging (DWI) signal of malignant breast tumor than mono-exponential Gaussian diffusion model. Methods 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-square mono-exponential fitting and segmented least-square bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. F-test and Akaike Information Criterion (AIC) were used to statistically assess the preference of mono-exponential and bi-exponential model using region-of-interests (ROI)-averaged and voxel-wise analysis. Results For ROI-averaged analysis, 15 tumors were significantly better fitted by bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential and bi-exponential preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analysis. Conclusions Although the presence of IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance to statistically examine the breast cancer DWI signal characteristics in practice. PMID:27709078
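
    One common form of the segmented fitting mentioned above is sketched here, under the standard IVIM model S(b)/S(0) = f·exp(-b·D*) + (1-f)·exp(-b·D): D is first estimated from the high b-values, where the pseudo-diffusion term has largely decayed, and f then follows from the intercept. The b-value threshold and the synthetic data are assumptions for illustration.

        import numpy as np

        def segmented_ivim(b, S, b_thresh=200.0):
            # Step 1: mono-exponential fit over b >= b_thresh: ln S = ln S_int - b*D
            hi = b >= b_thresh
            slope, intercept = np.polyfit(b[hi], np.log(S[hi]), 1)
            D = -slope
            # Step 2: pseudo-diffusion fraction from the extrapolated intercept
            f = 1.0 - np.exp(intercept) / S[b == 0][0]
            return D, f

        b = np.array([0, 50, 100, 200, 400, 600, 800], float)
        S = 1000 * (0.1 * np.exp(-b * 0.01) + 0.9 * np.exp(-b * 0.0012))
        D, f = segmented_ivim(b, S)   # roughly 0.0012 mm^2/s and 0.1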

  13. Preparation of an exponentially rising optical pulse for efficient excitation of single atoms in free space.

    PubMed

    Dao, Hoang Lan; Aljunid, Syed Abdullah; Maslennikov, Gleb; Kurtsiefer, Christian

    2012-08-01

    We report on a simple method to prepare optical pulses with an exponentially rising envelope on the time scale of a few ns. The scheme is based on the exponential transfer function of a fast transistor, which generates an exponentially rising envelope that is transferred first onto a radio-frequency carrier and then onto a coherent cw laser beam with an electro-optical phase modulator. The temporally shaped sideband is then extracted with an optical resonator and can be used to efficiently excite a single 87Rb atom.

  14. Compact exponential product formulas and operator functional derivative

    NASA Astrophysics Data System (ADS)

    Suzuki, Masuo

    1997-02-01

    A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians.

  15. Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.

    PubMed

    Du, Pang; Tang, Liansheng

    2009-01-30

    When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.

  16. Automatic selection of arterial input function using tri-exponential models

    NASA Astrophysics Data System (ADS)

    Yao, Jianhua; Chen, Jeremy; Castro, Marcelo; Thomasson, David

    2009-02-01

    Dynamic Contrast Enhanced MRI (DCE-MRI) is one method for drug and tumor assessment. Selecting a consistent arterial input function (AIF) is necessary to calculate tissue and tumor pharmacokinetic parameters in DCE-MRI. This paper presents an automatic and robust method to select the AIF. The first stage is artery detection and segmentation, where knowledge about artery structure and dynamic signal intensity temporal properties of DCE-MRI is employed. The second stage is AIF model fitting and selection. A tri-exponential model is fitted for every candidate AIF using the Levenberg-Marquardt method, and the best fitted AIF is selected. Our method has been applied in DCE-MRIs of four different body parts: breast, brain, liver and prostate. The success rates in artery segmentation for 19 cases are 89.6% ± 15.9%. The pharmacokinetic parameters computed from the automatically selected AIFs are highly correlated with those from manually determined AIFs (R^2 = 0.946, P(T ≤ t) = 0.09). Our imaging-based tri-exponential AIF model demonstrated significant improvement over a previously proposed bi-exponential model.
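
    The fitting step can be sketched with scipy's curve_fit, whose unbounded solver is Levenberg-Marquardt; the tri-exponential form, parameter values, and ranking by residual sum of squares below are illustrative assumptions rather than the authors' exact parameterization.

        import numpy as np
        from scipy.optimize import curve_fit

        def tri_exp(t, a1, m1, a2, m2, a3, m3):
            # Candidate AIF model: sum of three decaying exponentials
            return a1 * np.exp(-m1 * t) + a2 * np.exp(-m2 * t) + a3 * np.exp(-m3 * t)

        t = np.linspace(0, 5, 60)                        # minutes
        aif = tri_exp(t, 6, 3.0, 1.0, 0.2, 0.3, 0.01)    # synthetic candidate curve
        aif += 0.05 * np.random.randn(t.size)

        p0 = [5, 2, 1, 0.1, 0.1, 0.01]
        popt, pcov = curve_fit(tri_exp, t, aif, p0=p0, method="lm")
        rss = np.sum((aif - tri_exp(t, *popt))**2)       # rank candidates by fit quality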

  17. Nonequilibrium flows with smooth particle applied mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kum, Oyeon

    1995-07-01

    Smooth particle methods are relatively new methods for simulating solid and fluid flows, though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expression for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities which require second derivatives in space when Fourier's heat-flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel linking them to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for the three weighting functions: B-spline, Lucy, and Cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Benard problems, all in the laminar regime, to corresponding highly accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number where grid-based methods fail.

  18. SSD with generalized phase modulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rothenberg, J.

    1996-01-09

    Smoothing by spectral dispersion (SSD) with standard frequency modulation (FM), although simple to implement, has the disadvantage that low spatial frequencies present in the spectrum of the target illumination are not smoothed as effectively as with a more general smoothing method (e.g., the induced spatial incoherence method). The reduced smoothing performance of standard FM-SSD can result in spectral power of the speckle noise at these low spatial frequencies as much as one order of magnitude larger than that achieved with a more general method. In fact, at small integration times FM-SSD has no smoothing effect at all for a broad band of low spatial frequencies. This effect may have important implications for both direct and indirect drive ICF.

  19. On the Gibbs phenomenon 5: Recovering exponential accuracy from collocation point values of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1994-01-01

    The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.

  20. Student Support for Research in Hierarchical Control and Trajectory Planning

    NASA Technical Reports Server (NTRS)

    Martin, Clyde F.

    1999-01-01

    Generally, classical polynomial splines tend to exhibit unwanted undulations. In this work, we discuss a technique, based on control principles, for eliminating these undulations and increasing the smoothness properties of the spline interpolants. We give a generalization of the classical polynomial splines and show that this generalization is, in fact, a family of splines that covers the broad spectrum of polynomial, trigonometric and exponential splines. A particular element in this family is determined by the appropriate control data. It is shown that this technique is easy to implement. Several numerical and curve-fitting examples are given to illustrate the advantages of this technique over the classical approach. Finally, we discuss the convergence properties of the interpolant.

  1. Symmetric flows for compressible heat-conducting fluids with temperature dependent viscosity coefficients

    NASA Astrophysics Data System (ADS)

    Wan, Ling; Wang, Tao

    2017-06-01

    We consider the Navier-Stokes equations for compressible heat-conducting ideal polytropic gases in a bounded annular domain when the viscosity and thermal conductivity coefficients are general smooth functions of temperature. A global-in-time, spherically or cylindrically symmetric, classical solution to the initial boundary value problem is shown to exist uniquely and converge exponentially to the constant state as the time tends to infinity under certain assumptions on the initial data and the adiabatic exponent γ. The initial data can be large if γ is sufficiently close to 1. These results are of Nishida-Smoller type and extend the work (Liu et al. (2014) [16]) restricted to the one-dimensional flows.

  2. Proposal for a standardised identification of the mono-exponential terminal phase for orally administered drugs.

    PubMed

    Scheerans, Christian; Derendorf, Hartmut; Kloft, Charlotte

    2008-04-01

    The area under the plasma concentration-time curve from time zero to infinity (AUC(0-inf)) is generally considered to be the most appropriate measure of total drug exposure for bioavailability/bioequivalence studies of orally administered drugs. However, the lack of a standardised method for identifying the mono-exponential terminal phase of the concentration-time curve causes variability in the estimated AUC(0-inf). The present investigation introduces a simple method, called the two times t(max) method (TTT method), to reliably identify the mono-exponential terminal phase in the case of oral administration. The new method was tested by Monte Carlo simulation in Excel and compared with the adjusted r-squared algorithm (ARS algorithm) frequently used in pharmacokinetic software programs. Statistical diagnostics of three different scenarios, each with 10,000 hypothetical patients, showed that the new method provided unbiased average AUC(0-inf) estimates for orally administered drugs with a monophasic concentration-time curve post maximum concentration. In addition, the TTT method generally provided more precise estimates of AUC(0-inf) than the ARS algorithm. It was concluded that the TTT method is a reasonable candidate for a standardised method in pharmacokinetic analysis, especially bioequivalence studies, to reliably identify the mono-exponential terminal phase for orally administered drugs showing a monophasic concentration-time profile.
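
    A minimal sketch of the rule as described: points later than two times t(max) define the mono-exponential terminal phase, a log-linear regression over that window yields the terminal rate constant λ_z, and AUC(0-inf) follows by the standard extrapolation. The data and the trapezoidal AUC helper are illustrative assumptions.

        import numpy as np

        def auc_0_inf_ttt(t, C):
            t_max = t[np.argmax(C)]
            term = t > 2 * t_max                    # TTT rule: keep points after 2*t_max
            lam_z = -np.polyfit(t[term], np.log(C[term]), 1)[0]
            auc_0_tlast = np.sum(np.diff(t) * (C[1:] + C[:-1]) / 2)  # trapezoidal AUC
            return auc_0_tlast + C[-1] / lam_z      # extrapolate to infinity

        t = np.array([0, .25, .5, 1, 2, 4, 6, 8, 12, 24], float)
        C = 10 * (np.exp(-0.1 * t) - np.exp(-1.5 * t))   # one-compartment oral profile
        print(auc_0_inf_ttt(t, C))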

  3. A Pragmatic Smoothing Method for Improving the Quality of the Results in Atomic Spectroscopy

    NASA Astrophysics Data System (ADS)

    Bennun, Leonardo

    2017-07-01

    A new smoothing method for improving the identification and quantification of spectral functions, based on previous knowledge of the signals that are expected to be quantified, is presented. These signals are used as weighting coefficients in the smoothing algorithm. This smoothing method was conceived to be applied in atomic and nuclear spectroscopies, preferably in techniques where net counts are proportional to acquisition time, such as particle induced X-ray emission (PIXE) and other X-ray fluorescence spectroscopic methods. This algorithm, when properly applied, distorts neither the form nor the intensity of the signal, so it is well suited for all kinds of spectroscopic techniques. The method is extremely effective at reducing high-frequency noise in the signal, much more so than a single rectangular smooth of the same width. As with all smoothing techniques, the proposed method improves the precision of the results, but in this case we also found a systematic improvement in the accuracy of the results. We still have to evaluate the improvement in the quality of the results when this method is applied to real experimental data. We expect better characterization of the net-area quantification of the peaks, and smaller detection and quantification limits. We have applied this method to signals that obey Poisson statistics, but with the same ideas and criteria it could be applied to time series. In the general case, when this algorithm is applied to experimental results, the sought characteristic functions required for this weighted smoothing method should be obtained from a system with strong stability; if the sought signals are not perfectly clean, the method should be applied with care.
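
    One plausible reading of the algorithm (my assumption, since the abstract gives no formulas) is a moving average whose weights are taken from the known, expected line shape, e.g., a Gaussian peak profile, instead of the uniform weights of a rectangular smooth:

        import numpy as np

        def weighted_smooth(counts, weights):
            # Convolve the spectrum with the normalized expected-signal weights;
            # a rectangular smooth is recovered when all weights are equal.
            w = np.asarray(weights, float)
            w /= w.sum()
            return np.convolve(counts, w, mode="same")

        x = np.arange(-5, 6)
        gauss_w = np.exp(-x**2 / (2 * 2.0**2))   # expected Gaussian peak shape (sigma = 2 ch)
        rect_w = np.ones(11)                     # rectangular smooth of the same width

        spectrum = np.random.poisson(50, 512)    # Poisson-statistics baseline
        smooth_g = weighted_smooth(spectrum, gauss_w)
        smooth_r = weighted_smooth(spectrum, rect_w)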

  4. Quantum mechanical generalized phase-shift approach to atom-surface scattering: a Feshbach projection approach to dealing with closed channel effects.

    PubMed

    Maji, Kaushik; Kouri, Donald J

    2011-03-28

    We have developed a new method for solving quantum dynamical scattering problems, using the time-independent Schrödinger equation (TISE), based on a novel method to generalize a "one-way" quantum mechanical wave equation, impose correct boundary conditions, and eliminate exponentially growing closed channel solutions. The approach is readily parallelized to achieve approximate N^2 scaling, where N is the number of coupled equations. The full two-way nature of the TISE is included while propagating the wave function in the scattering variable and the full S-matrix is obtained. The new algorithm is based on a "Modified Cayley" operator splitting approach, generalizing earlier work where the method was applied to the time-dependent Schrödinger equation. All scattering variable propagation approaches to solving the TISE involve solving a Helmholtz-type equation, and for more than one degree of freedom, these are notoriously ill-behaved, due to the unavoidable presence of exponentially growing contributions to the numerical solution. Traditionally, the method used to eliminate exponential growth has posed a major obstacle to the full parallelization of such propagation algorithms. We stabilize by using the Feshbach projection operator technique to remove all the nonphysical exponentially growing closed channels, while retaining all of the propagating open channel components, as well as exponentially decaying closed channel components.

  5. DNA electrophoresis in agarose gels: effects of field and gel concentration on the exponential dependence of reciprocal mobility on DNA length.

    PubMed

    Rill, Randolph L; Beheshti, Afshin; Van Winkle, David H

    2002-08-01

    Electrophoretic mobilities of DNA molecules ranging in length from 200 to 48 502 base pairs (bp) were measured in agarose gels with concentrations T = 0.5% to 1.3% at electric fields from E = 0.71 to 5.0 V/cm. This broad data set determines a range of conditions over which the new interpolation equation ν(L) = [β + α(1 + exp(−L/γ))]^(−1) can be used to relate mobility to length with high accuracy. Mobility data were fit with χ^2 > 0.999 for all gel concentrations and fields ranging from 2.5 to 5 V/cm, and for lower fields at low gel concentrations. Analyses using so-called reptation plots (Rousseau, J., Drouin, G., Slater, G. W., Phys. Rev. Lett. 1997, 79, 1945-1948) indicate that this simple exponential relation is obeyed well when there is a smooth transition from the Ogston sieving regime to the reptation regime with increasing DNA length. Deviations from this equation occur when DNA migration is hindered, apparently by entropic trapping, which is favored at low fields and high gel concentrations in the ranges examined.

  6. Exponential decline of aftershocks of the M7.9 1868 great Kau earthquake, Hawaii, through the 20th century

    USGS Publications Warehouse

    Klein, F.W.; Wright, Tim

    2008-01-01

    The remarkable catalog of Hawaiian earthquakes going back to the 1820s is based on missionary diaries, newspaper accounts, and instrumental records, and spans the great M7.9 Kau earthquake of April 1868 and its aftershock sequence. The earthquake record since 1868 defines a smooth curve of declining rate into the 21st century, complete to M5.2, after five short volcanic swarms are removed. A single aftershock curve fits the earthquake record, even with numerous M6 and 7 main shocks and eruptions. The timing of some moderate earthquakes may be controlled by magmatic stresses, but their overall long-term rate reflects one of aftershocks of the Kau earthquake. The 1868 earthquake is, therefore, the largest and most controlling stress event in the 19th and 20th centuries. We fit both the modified Omori (power law) and stretched exponential (SE) functions to the earthquakes. We found that the modified Omori law is a good fit to the M ≥ 5.2 earthquake rate for the first 10 years or so and the more rapidly declining SE function fits better thereafter, as supported by three statistical tests. The switch to exponential decay suggests that a possible change in aftershock physics may occur, from rate-and-state fault friction, with no change in the stress rate, to viscoelastic stress relaxation. The 61-year exponential decay constant is at the upper end of the range of geodetic relaxation times seen after other global earthquakes. Modeling deformation in Hawaii is beyond the scope of this paper, but a simple interpretation of the decay suggests an effective viscosity of 10^19 to 10^20 Pa s pertains in the volcanic spreading of Hawaii's flanks. The rapid decline in earthquake rate poses questions for seismic hazard estimates in an area that is cited as one of the most hazardous in the United States.
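
    The two decay laws compared in the study can be fitted to an annual-rate series along these lines; the rate data, starting guesses, and the use of scipy's curve_fit are illustrative assumptions, not the paper's catalog or procedure.

        import numpy as np
        from scipy.optimize import curve_fit

        def omori(t, K, c, p):
            return K / (t + c)**p                   # modified Omori (power) law

        def stretched_exp(t, A, tau, beta):
            return A * np.exp(-(t / tau)**beta)     # stretched exponential (SE)

        t = np.arange(1, 140, 5.0)                  # years since the 1868 main shock
        rate = stretched_exp(t, 30, 61, 1.0) + np.random.normal(0, 0.5, t.size)

        po, _ = curve_fit(omori, t, rate, p0=[30, 1, 1])
        ps, _ = curve_fit(stretched_exp, t, rate, p0=[30, 50, 1])
        # Compare residuals to decide which law the record favors at late times
        print(np.sum((rate - omori(t, *po))**2),
              np.sum((rate - stretched_exp(t, *ps))**2))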

  7. Empirical Bayes methods for smoothing data and for simultaneous estimation of many parameters.

    PubMed Central

    Yanagimoto, T; Kashiwagi, N

    1990-01-01

    A recent successful development is found in a series of innovative, new statistical methods for smoothing data that are based on the empirical Bayes method. This paper emphasizes their practical usefulness in medical sciences and their theoretically close relationship with the problem of simultaneous estimation of parameters, depending on strata. The paper also presents two examples of analyzing epidemiological data obtained in Japan using the smoothing methods to illustrate their favorable performance. PMID:2148512

  8. Theoretical analysis of exponential transversal method of lines for the diffusion equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salazar, A.; Raydan, M.; Campo, A.

    1996-12-31

    Recently a new approximate technique to solve the diffusion equation was proposed by Campo and Salazar. This new method is inspired by the Method of Lines (MOL), with some insight coming from the method of separation of variables. The proposed method, the Exponential Transversal Method of Lines (ETMOL), utilizes an exponential variation to improve accuracy in the evaluation of the time derivative. Campo and Salazar have implemented this method in a wide range of heat/mass transfer applications and have obtained surprisingly good numerical results. In this paper, the authors study the theoretical properties of ETMOL in depth. In particular, consistency, stability and convergence are established in the framework of the heat/mass diffusion equation. In most practical applications the method exhibits a very small truncation error in time, and its different versions are proven to be unconditionally stable in the Fourier sense. Convergence of the solutions is then established. The theory is corroborated by several analytical/numerical experiments.

  9. Wave attenuation in the shallows of San Francisco Bay

    USGS Publications Warehouse

    Lacy, Jessica R.; MacVean, Lissa J.

    2016-01-01

    Waves propagating over broad, gently-sloped shallows decrease in height due to frictional dissipation at the bed. We quantified wave-height evolution across 7 km of mudflat in San Pablo Bay (northern San Francisco Bay), an environment where tidal mixing prevents the formation of fluid mud. Wave height was measured along a cross-shore transect (elevation range −2 m to +0.45 m MLLW) in winter 2011 and summer 2012. Wave height decreased more than 50% across the transect. The exponential decay coefficient λ was inversely related to depth squared (λ = 6×10^−4 h^−2). The physical roughness length scale k_b, estimated from near-bed turbulence measurements, was 3.5×10^−3 m in winter and 1.1×10^−2 m in summer. The wave friction factor f_w estimated from wave-height data suggests that bottom friction dominates dissipation at high Re_w but not at low Re_w. Predictions of near-shore wave height based on offshore wave height and a rough formulation for f_w were quite accurate, with errors about half as great as those based on the smooth formulation for f_w. Researchers often assume that the wave boundary layer is smooth for settings with fine-grained sediments. At this site, use of a smooth f_w results in an underestimate of wave shear stress by a factor of 2 for typical waves and as much as 5 for more energetic waves. It also inadequately captures the effectiveness of the mudflats in protecting the shoreline through wave attenuation.

  10. Infantile Nystagmus and Abnormalities of Conjugate Eye Movements in Down Syndrome.

    PubMed

    Weiss, Avery H; Kelly, John P; Phillips, James O

    2016-03-01

    Subjects with Down syndrome (DS) have an anatomical defect within the cerebellum that may impact downstream oculomotor areas. This study characterized gaze holding and gains for smooth pursuit, saccades, and optokinetic nystagmus (OKN) in DS children with infantile nystagmus (IN). Clinical data of 18 DS children with IN were reviewed retrospectively. Subjects with constant strabismus were excluded to remove any contribution of latent nystagmus. Gaze-holding, horizontal and vertical saccades to target steps, horizontal smooth pursuit of drifting targets, OKN in response to vertically or horizontally-oriented square wave gratings drifted at 15°/s, 30°/s, and 45°/s were recorded using binocular video-oculography. Seven subjects had additional optical coherence tomography imaging. Infantile nystagmus was associated with one or more gaze-holding instabilities (GHI) in each subject. The majority of subjects had a combination of conjugate horizontal jerk with constant or exponential slow-phase velocity, asymmetric or symmetric, and either monocular or binocular pendular nystagmus. Six of seven subjects had mild (Grade 0-1) persistence of retinal layers overlying the fovea, similar to that reported in DS children without nystagmus. All subjects had abnormal gains across one or more stimulus conditions (horizontal smooth pursuit, saccades, or OKN). Saccade velocities followed the main sequence. Down syndrome subjects with IN show a wide range of GHI and abnormalities of conjugate eye movements. We propose that these ocular motor abnormalities result from functional abnormalities of the cerebellum and/or downstream oculomotor circuits, perhaps due to extensive miswiring.

  11. Compact exponential product formulas and operator functional derivative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, M.

    1997-02-01

    A new scheme for deriving compact expressions of the logarithm of the exponential product is proposed and it is applied to several exponential product formulas. A generalization of the Dynkin-Specht-Wever (DSW) theorem on free Lie elements is given, and it is used to study the relation between the traditional method (based on the DSW theorem) and the present new scheme. The concept of the operator functional derivative is also proposed, and it is applied to ordered exponentials, such as time-evolution operators for time-dependent Hamiltonians. © 1997 American Institute of Physics.

  12. Exponential propagators for the Schrödinger equation with a time-dependent potential.

    PubMed

    Bader, Philipp; Blanes, Sergio; Kopylov, Nikita

    2018-06-28

    We consider the numerical integration of the Schrödinger equation with a time-dependent Hamiltonian given as the sum of the kinetic energy and a time-dependent potential. Commutator-free (CF) propagators are exponential propagators that have been shown to be highly efficient for general time-dependent Hamiltonians. We propose new CF propagators that are tailored for Hamiltonians of the said structure, showing a considerably improved performance. We obtain new fourth- and sixth-order CF propagators as well as a novel sixth-order propagator that incorporates a double commutator that depends only on coordinates, so this term can be considered cost-free. The algorithms require the computation of the action of exponentials on a vector, similar to the well-known exponential midpoint propagator, and this is carried out using the Lanczos method. We illustrate the performance of the new methods on several numerical examples.
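
    The basic building block mentioned above, the exponential midpoint propagator ψ(t+h) ≈ exp(−i h H(t+h/2)) ψ(t), can be sketched as follows; here scipy's Krylov-based expm_multiply stands in for the Lanczos step, and the discretized driven-oscillator Hamiltonian is an illustrative assumption.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import expm_multiply

        n, L = 256, 20.0
        dx = L / n
        x = np.linspace(-L / 2, L / 2, n, endpoint=False)
        kin = diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) * (-0.5 / dx**2)

        def H(t):
            return kin + diags(0.5 * x**2 * (1 + 0.1 * np.sin(t)))  # driven oscillator

        psi = np.exp(-x**2 / 2).astype(complex)
        psi /= np.linalg.norm(psi)
        h = 0.01
        for k in range(100):                   # exponential midpoint stepping
            t_mid = (k + 0.5) * h
            psi = expm_multiply(-1j * h * H(t_mid), psi)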

  13. Exponential convergence through linear finite element discretization of stratified subdomains

    NASA Astrophysics Data System (ADS)

    Guddati, Murthy N.; Druskin, Vladimir; Vaziri Astaneh, Ali

    2016-10-01

    Motivated by problems where the response is needed at select localized regions in a large computational domain, we devise a novel finite element discretization that results in exponential convergence at pre-selected points. The key features of the discretization are (a) use of midpoint integration to evaluate the contribution matrices, and (b) an unconventional mapping of the mesh into complex space. Named complex-length finite element method (CFEM), the technique is linked to Padé approximants that provide exponential convergence of the Dirichlet-to-Neumann maps and thus the solution at specified points in the domain. Exponential convergence facilitates drastic reduction in the number of elements. This, combined with sparse computation associated with linear finite elements, results in significant reduction in the computational cost. The paper presents the basic ideas of the method as well as illustration of its effectiveness for a variety of problems involving Laplace, Helmholtz and elastodynamics equations.

  14. Implication of adaptive smoothness constraint and Helmert variance component estimation in seismic slip inversion

    NASA Astrophysics Data System (ADS)

    Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi

    2017-10-01

    When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments with different magnitudes of imposed noise and different observation densities indicate that the ASC is superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method proves the best way of selecting the regularization parameter, compared with other methods such as generalized cross-validation or the mean squared error criterion. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
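
    To make the regularization step concrete, a generic damped least-squares inversion with a Laplacian smoothness constraint (the LSC baseline, not the paper's ASC) solves min ||Gm − d||^2 + α^2||Lm||^2 via a stacked system; the operator, data, and α below are assumptions.

        import numpy as np

        def smoothed_inversion(G, d, alpha):
            n = G.shape[1]
            # Second-difference (1-D Laplacian) regularization matrix L
            L = np.diff(np.eye(n), n=2, axis=0)
            A = np.vstack([G, alpha * L])                 # augmented system
            b = np.concatenate([d, np.zeros(L.shape[0])])
            m, *_ = np.linalg.lstsq(A, b, rcond=None)
            return m

        rng = np.random.default_rng(1)
        G = rng.standard_normal((80, 40))
        m_true = np.sin(np.linspace(0, np.pi, 40))        # smooth slip profile
        d = G @ m_true + 0.1 * rng.standard_normal(80)
        m_est = smoothed_inversion(G, d, alpha=5.0)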

  15. Time Domain Version of the Uniform Geometrical Theory of Diffraction

    NASA Astrophysics Data System (ADS)

    Rousseau, Paul R.

    1995-01-01

    A time domain (TD) version of the uniform geometrical theory of diffraction which is referred to as the TD-UTD is developed to analyze the transient electromagnetic scattering from perfectly conducting objects that are large in terms of pulse width. In particular, the scattering from a perfectly conducting arbitrary curved wedge and an arbitrary smooth convex surface are treated in detail. Note that the canonical geometries of a circular cylinder and a sphere are special cases of the arbitrary smooth convex surface. These TD -UTD solutions are obtained in the form of relatively simple analytical expressions valid for early to intermediate times. The geometries treated here can be used to build up a transient solution to more complex radiating objects via space-time localization, in exactly the same way as is done by invoking spatial localization properties in the frequency domain UTD. The TD-UTD provides the response due to an excitation of a general astigmatic impulsive wavefront with any polarization. This generalized impulse response may then be convolved with other excitation time pulses, to find even more general solutions due to other excitation pulses. Since the TD-UTD uses the same rays as the frequency domain UTD, it provides a simple picture for transient radiation or scattering and is therefore just as physically appealing as the frequency domain UTD. The formulation of an analytic time transform (ATT), which produces an analytic time signal given a frequency response function, is given here. This ATT is used because it provides a very efficient method of inverting the asymptotic high frequency UTD representations to obtain the corresponding TD-UTD expressions even when there are special UTD transition functions which may not be well behaved at the low frequencies; also, using the ATT avoids the difficulties associated with the inversion of UTD ray fields that traverse line or smooth caustics. Another useful aspect of the ATT is the ability to perform an efficient convolution with a broad class of excitation pulse functions, where the frequency response of the excitation function must be expressed as a summation of complex exponential functions.

  16. Parameter estimation for the exponential-normal convolution model for background correction of affymetrix GeneChip data.

    PubMed

    McGee, Monnie; Chen, Zhongxue

    2006-01-01

    There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
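
    For concreteness, the conditional expectation at the heart of this background correction is commonly written, in descriptions of RMA, as E[s|o] = a + b·[φ(a/b) − φ((o−a)/b)] / [Φ(a/b) + Φ((o−a)/b) − 1], with a = o − μ − σ^2·α and b = σ. The sketch below evaluates it for given parameter estimates, which are treated here as assumed inputs rather than outputs of any particular estimator.

        import numpy as np
        from scipy.stats import norm

        def rma_background_correct(o, mu, sigma, alpha):
            # o: observed PM intensities; mu, sigma: normal noise parameters;
            # alpha: rate of the exponential signal distribution
            a = o - mu - sigma**2 * alpha
            b = sigma
            num = norm.pdf(a / b) - norm.pdf((o - a) / b)
            den = norm.cdf(a / b) + norm.cdf((o - a) / b) - 1.0
            return a + b * num / den

        pm = np.array([80.0, 120.0, 300.0, 1500.0])
        print(rma_background_correct(pm, mu=100.0, sigma=30.0, alpha=0.01))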

  17. Physical and numerical sources of computational inefficiency in integration of chemical kinetic rate equations: Etiology, treatment and prognosis

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1986-01-01

    The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
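
    The exponential-fitted idea can be illustrated on a single stiff, linear model equation y' = −k(y − y_eq): the one-step propagator uses the exponential itself instead of a polynomial interpolant, so it remains accurate and stable at step sizes far beyond the limit of explicit polynomial methods. A minimal sketch (an illustrative example, not the cited codes):

        import numpy as np

        def exponential_euler(y0, k, y_eq, h, nsteps):
            # Exact propagation of y' = -k (y - y_eq) over each step:
            # y_{n+1} = y_eq + (y_n - y_eq) * exp(-k h)
            y = np.empty(nsteps + 1)
            y[0] = y0
            decay = np.exp(-k * h)
            for n in range(nsteps):
                y[n + 1] = y_eq + (y[n] - y_eq) * decay
            return y

        # Stiff decay: k = 1e4, step size far beyond explicit Euler's limit (h < 2/k)
        y = exponential_euler(y0=1.0, k=1e4, y_eq=0.2, h=0.01, nsteps=10)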

  18. A Semi-Analytical Extraction Method for Interface and Bulk Density of States in Metal Oxide Thin-Film Transistors

    PubMed Central

    Chen, Weifeng; Wu, Weijing; Zhou, Lei; Xu, Miao; Wang, Lei; Peng, Junbiao

    2018-01-01

    A semi-analytical extraction method for the interface and bulk density of states (DOS) is proposed, using the low-frequency capacitance–voltage characteristics and current–voltage characteristics of indium zinc oxide thin-film transistors (IZO TFTs). In this work, an exponential potential distribution along the depth direction of the active layer is assumed and confirmed by numerical solution of Poisson's equation followed by device simulation. The interface DOS is obtained as a superposition of constant deep states and exponential tail states. Moreover, it is shown that the bulk DOS may be represented by the superposition of exponential deep states and exponential tail states. The extracted values of bulk and interface DOS are further verified by comparing the measured transfer and output characteristics of IZO TFTs with simulation results from the 2D device simulator ATLAS (Silvaco). The proposed extraction method may therefore be useful for diagnosing and characterising metal oxide TFTs, since it extracts the interface and bulk DOS simultaneously and quickly. PMID:29534492

  19. Periodic orbit spectrum in terms of Ruelle-Pollicott resonances

    NASA Astrophysics Data System (ADS)

    Leboeuf, P.

    2004-02-01

    Fully chaotic Hamiltonian systems possess an infinite number of classical solutions which are periodic, e.g., a trajectory “p” returns to its initial conditions after some fixed time τ_p. Our aim is to investigate the spectrum {τ_1, τ_2, …} of periods of the periodic orbits. An explicit formula for the density ρ(τ) = Σ_p δ(τ − τ_p) is derived in terms of the eigenvalues of the classical evolution operator. The density is naturally decomposed into a smooth part plus an interferent sum over oscillatory terms. The frequencies of the oscillatory terms are given by the imaginary part of the complex eigenvalues (Ruelle-Pollicott resonances). For large periods, corrections to the well-known exponential growth of the smooth part of the density are obtained. An alternative formula for ρ(τ) in terms of the zeros and poles of the Ruelle ζ function is also discussed. The results are illustrated with the geodesic motion in billiards of constant negative curvature. Connections with the statistical properties of the corresponding quantum eigenvalues, random-matrix theory, and discrete maps are also considered. In particular, a random-matrix conjecture is proposed for the eigenvalues of the classical evolution operator of chaotic billiards.

  20. SMOOTHING ROTATION CURVES AND MASS PROFILES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berrier, Joel C.; Sellwood, J. A.

    2015-02-01

    We show that spiral activity can erase pronounced features in disk galaxy rotation curves. We present simulations of growing disks, in which the added material has a physically motivated distribution, as well as other examples of physically less realistic accretion. In all cases, attempts to create unrealistic rotation curves were unsuccessful because spiral activity rapidly smoothed away features in the disk mass profile. The added material was redistributed radially by the spiral activity, which was itself provoked by the density feature. In the case of a ridge-like feature in the surface density profile, we show that two unstable spiral modes develop, and the associated angular momentum changes in horseshoe orbits remove particles from the ridge and spread them both inward and outward. This process rapidly erases the density feature from the disk. We also find that the lack of a feature when transitioning from disk to halo dominance in the rotation curves of disk galaxies, the so-called "disk-halo conspiracy", could also be accounted for by this mechanism. We do not create perfectly exponential mass profiles in the disk, but suggest that this mechanism contributes to their creation.

  1. Evaluation of Two New Smoothing Methods in Equating: The Cubic B-Spline Presmoothing Method and the Direct Presmoothing Method

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2009-01-01

    This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…

  2. Suspension system vibration analysis with regard to variable type ability to smooth road irregularities

    NASA Astrophysics Data System (ADS)

    Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Makhno, D. E.; Fedotov, K. V.

    2018-03-01

    The paper analyzes vibrations of a dynamic system equivalent to the suspension system, with regard to the ability of the tyre to smooth road irregularities. The research is based on the statistical dynamics of linear automatic control systems and on methods of correlation, spectral and numerical analysis. Introducing new data on the smoothing effect of the pneumatic tyre, which reflects changes in the contact area between wheel and road as the suspension vibrates, makes the system nonlinear and requires numerical analysis methods. By taking the variable smoothing ability of the tyre into account when calculating suspension vibrations, one can bring calculated and experimental results closer together and improve on the assumption of a constant smoothing ability of the tyre.

  3. Smoothing of climate time series revisited

    NASA Astrophysics Data System (ADS)

    Mann, Michael E.

    2008-08-01

    We present an easily implemented method for smoothing climate time series, generalizing upon an approach previously described by Mann (2004). The method adaptively weights the three lowest order time series boundary constraints to optimize the fit with the raw time series. We apply the method to the instrumental global mean temperature series from 1850-2007 and to various surrogate global mean temperature series from 1850-2100 derived from the CMIP3 multimodel intercomparison project. These applications demonstrate that the adaptive method systematically out-performs certain widely used default smoothing methods, and is more likely to yield accurate assessments of long-term warming trends.
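
    The adaptive step can be caricatured as follows: smooth the series under each of the three lowest-order boundary constraints (implemented here, as one common choice, by padding with the boundary mean, by reflection, and by reflection about the end point), then keep whichever smoothed series minimizes the misfit to the raw data. The padding choices and the Butterworth filter are my assumptions, not Mann's exact implementation, which weights the constraints rather than selecting one.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def smooth(y, pad):
            b, a = butter(2, 0.1)                  # low-pass smoother (illustrative cutoff)
            ypad = np.concatenate([y, pad])        # impose the boundary constraint by padding
            return filtfilt(b, a, ypad)[: y.size]

        def adaptive_smooth(y, npad=30):
            pads = [np.full(npad, y[-10:].mean()),  # 'minimum norm': pad with boundary mean
                    y[-npad:][::-1],                # 'minimum slope': reflect the series
                    2 * y[-1] - y[-npad:][::-1]]    # 'minimum roughness': reflect and flip
            trials = [smooth(y, p) for p in pads]
            mse = [np.mean((s - y)**2) for s in trials]
            return trials[int(np.argmin(mse))]      # keep the best-fitting constraint

        t = np.arange(158)
        y = 0.005 * t + 0.2 * np.random.randn(t.size)  # surrogate warming series
        ys = adaptive_smooth(y)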

  4. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  5. Pavement smoothness indices : research brief.

    DOT National Transportation Integrated Search

    1998-08-01

    Many in the asphalt industry believe that initial pavement smoothness directly relates to pavement life. Public perception of smoothness is also important. Oregon is interested in determining the appropriate method of measurement to quantify smoothness.

  6. Improving smoothing efficiency of rigid conformal polishing tool using time-dependent smoothing evaluation model

    NASA Astrophysics Data System (ADS)

    Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng

    2017-06-01

    A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency is poorer than expected, and existing smoothing models cannot explicitly specify how to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.

  7. Creation of current filaments in the solar corona

    NASA Technical Reports Server (NTRS)

    Mikic, Z.; Schnack, D. D.; Van Hoven, G.

    1989-01-01

    It has been suggested that the solar corona is heated by the dissipation of electric currents. The low value of the resistivity requires the magnetic field to have structure at very small length scales if this mechanism is to work. In this paper it is demonstrated that the coronal magnetic field acquires small-scale structure through the braiding produced by smooth, randomly phased, photospheric flows. The current density develops a filamentary structure and grows exponentially in time. Nonlinear processes in the ideal magnetohydrodynamic equations produce a cascade effect, in which the structure introduced by the flow at large length scales is transferred to smaller scales. If this process continues down to the resistive dissipation length scale, it would provide an effective mechanism for coronal heating.

  8. Generalized weighted likelihood density estimators with application to finite mixture of exponential family distributions

    PubMed Central

    Zhan, Tingting; Chevoneva, Inna; Iglewicz, Boris

    2010-01-01

    The family of weighted likelihood estimators largely overlaps with minimum divergence estimators. They are robust to data contamination compared to the MLE. We define the class of generalized weighted likelihood estimators (GWLE), provide its influence function and discuss the efficiency requirements. We introduce a new truncated cubic-inverse weight, which is both first- and second-order efficient and more robust than previously reported weights. We also discuss new ways of selecting the smoothing bandwidth and weighted starting values for the iterative algorithm. The advantage of the truncated cubic-inverse weight is illustrated in a simulation study of a three-component normal mixture model with large overlaps and heavy contamination. A real data example is also provided. PMID:20835375

  9. Parameter estimation and order selection for an empirical model of VO2 on-kinetics.

    PubMed

    Alata, O; Bernard, O

    2007-04-27

    In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we provide, on simulated data, performances of simulated annealing for estimating model parameters and performances of information criteria for selecting the order. These simulated data are generated with both single-exponential and double-exponential models and corrupted by white Gaussian noise. The performances are given at various signal-to-noise ratios (SNR). Considering parameter estimation, results show that the confidence of estimated parameters improves with the SNR of the response to be fitted. Considering model selection, results show that information criteria are well-adapted statistical criteria for selecting the number of exponentials.
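
    The model family and the order-selection step can be sketched as below; the delayed-exponential forms, the AIC formula for Gaussian residuals, and the synthetic response are illustrative assumptions (real use would need care with the non-smooth delay parameters, which is partly why the authors use simulated annealing).

        import numpy as np
        from scipy.optimize import curve_fit

        def mono(t, A0, A1, tau1, d1):
            return A0 + A1 * (1 - np.exp(-(t - d1) / tau1)) * (t > d1)

        def double(t, A0, A1, tau1, d1, A2, tau2, d2):
            return mono(t, A0, A1, tau1, d1) + A2 * (1 - np.exp(-(t - d2) / tau2)) * (t > d2)

        def aic(y, yhat, k):
            n = y.size                              # AIC for Gaussian residuals
            return n * np.log(np.sum((y - yhat)**2) / n) + 2 * k

        t = np.arange(0, 360.0)
        y = double(t, 0.5, 1.5, 25, 15, 0.4, 120, 90) + 0.05 * np.random.randn(t.size)

        p1, _ = curve_fit(mono, t, y, p0=[0.5, 1.5, 30, 10])
        p2, _ = curve_fit(double, t, y, p0=[0.5, 1.5, 30, 10, 0.3, 100, 80])
        print(aic(y, mono(t, *p1), 4), aic(y, double(t, *p2), 7))  # lower AIC wins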

  10. A Modified Tri-Exponential Model for Multi-b-value Diffusion-Weighted Imaging: A Method to Detect the Strictly Diffusion-Limited Compartment in Brain

    PubMed Central

    Zeng, Qiang; Shi, Feina; Zhang, Jianmin; Ling, Chenhan; Dong, Fei; Jiang, Biao

    2018-01-01

    Purpose: To present a new modified tri-exponential model for diffusion-weighted imaging (DWI) to detect the strictly diffusion-limited compartment, and to compare it with the conventional bi- and tri-exponential models. Methods: Multi-b-value diffusion-weighted imaging (DWI) with 17 b-values up to 8,000 s/mm^2 was performed on six volunteers. The corrected Akaike information criterions (AICc) and squared predicted errors (SPE) were calculated to compare the three models. Results: The mean f0 values ranged from 11.9% to 18.7% in white matter ROIs and from 1.2% to 2.7% in gray matter ROIs. In all white matter ROIs, the AICcs of the modified tri-exponential model were the lowest (p < 0.05 for five ROIs), indicating that the new model has the best fit among these models; the SPEs of the bi-exponential model were the highest (p < 0.05), suggesting the bi-exponential model is unable to predict the signal intensity at ultra-high b-values. The mean ADC_very-slow values were extremely low in white matter (1–7 × 10^−6 mm^2/s), but not in gray matter (251–445 × 10^−6 mm^2/s), indicating that the conventional tri-exponential model fails to represent a special compartment. Conclusions: The strictly diffusion-limited compartment may be an important component in white matter. The new model fits better than the other two models, and may provide additional information. PMID:29535599
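
    As described, the modified model adds a non-decaying fraction f0 to a bi-exponential signal, S(b)/S(0) = f0 + f1·exp(−b·D1) + f2·exp(−b·D2). A brief sketch of fitting it to multi-b data follows; the synthetic values, starting guesses, and bounds are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def modified_tri_exp(b, f0, f1, D1, f2, D2):
            # f0 captures the strictly diffusion-limited (non-decaying) compartment
            return f0 + f1 * np.exp(-b * D1) + f2 * np.exp(-b * D2)

        b = np.array([0, 250, 500, 1000, 2000, 3000, 4500, 6000, 8000], float)
        S = modified_tri_exp(b, 0.15, 0.25, 3e-3, 0.60, 7e-4)
        S += 0.005 * np.random.randn(b.size)

        p0 = [0.1, 0.3, 2e-3, 0.6, 1e-3]
        popt, _ = curve_fit(modified_tri_exp, b, S, p0=p0, bounds=(0, [1, 1, 1, 1, 1]))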

  11. Smoothing the Marmousi Model

    NASA Astrophysics Data System (ADS)

    Žáček, K.

    The only way to make an excessively complex velocity model suitable for application of ray-based methods, such as the Gaussian beam or Gaussian packet methods, is to smooth it. We have smoothed the Marmousi model by choosing a coarser grid and by minimizing the second spatial derivatives of the slowness, which was done by minimizing the relevant Sobolev norm of the slowness. We show that minimizing this Sobolev norm is a suitable technique for preparing optimum models for asymptotic ray-theory methods. However, the price we pay for a model suitable for ray tracing is an increase of the difference between the smoothed and original models. Similarly, the estimated error in the travel time also increases due to the difference between the models. In smoothing the Marmousi model, we have found the estimated error of travel times to be at the verge of acceptability. Due to the low frequencies in the wavefield of the original Marmousi data set, we have found the Gaussian beams and Gaussian packets to be at the verge of applicability even in models sufficiently smoothed for ray tracing.
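
    Minimizing a Sobolev norm that penalizes second spatial derivatives is, in discrete form, a Tikhonov-regularized least-squares problem. The Python sketch below smooths a one-dimensional toy slowness profile this way; the profile, the penalty weight, and the 1D simplification are assumptions for illustration, not the smoothing actually applied to the Marmousi model.

    ```python
    import numpy as np

    def smooth_slowness(s, lam):
        """Return m minimizing ||m - s||^2 + lam * ||D2 m||^2,
        where D2 is the second-difference operator (a discrete
        penalty on the second spatial derivatives of the slowness)."""
        n = s.size
        D2 = np.diff(np.eye(n), n=2, axis=0)        # (n-2) x n second differences
        A = np.eye(n) + lam * D2.T @ D2
        return np.linalg.solve(A, s)

    # toy slowness profile with sharp layer boundaries
    x = np.linspace(0.0, 1.0, 200)
    s = 0.40 + 0.10 * (x > 0.3) - 0.05 * (x > 0.6)  # s/km, hypothetical
    s_smooth = smooth_slowness(s, lam=50.0)         # larger lam: smoother model
    ```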

  12. Forecasting Container Throughput at the Doraleh Port in Djibouti through Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Mohamed Ismael, Hawa; Vandyck, George Kobina

    The Doraleh Container Terminal (DCT) located in Djibouti has been noted as the most technologically advanced container terminal on the African continent. DCT's strategic location at the crossroads of the main shipping lanes connecting Asia, Africa and Europe puts it in a unique position to provide important shipping services to vessels plying that route. This paper aims to forecast container throughput through the Doraleh Container Port in Djibouti by time series analysis. A selection of univariate forecasting models was used, namely the Triple Exponential Smoothing Model, the Grey Model and the Linear Regression Model. Using these three models and their combination, forecasts of container throughput through the Doraleh port were produced. The forecasting results of the three models and of the combination forecast were then compared on the basis of two commonly used evaluation criteria, the Mean Absolute Deviation (MAD) and the Mean Absolute Percentage Error (MAPE). The study found that the Linear Regression Model was the best method for forecasting container throughput, since its forecast error was the smallest. Based on the regression model, a ten (10) year forecast of container throughput at DCT was made.
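
    The triple (Holt-Winters) exponential smoothing step can be sketched in a few lines of Python with statsmodels; the throughput numbers below are fabricated placeholders, not DCT data, and the additive trend and seasonality settings are illustrative assumptions.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # hypothetical monthly container throughput (TEU)
    idx = pd.date_range("2008-01-31", periods=72, freq="M")
    y = pd.Series(40_000 + 300 * np.arange(72)
                  + 2_000 * np.sin(2 * np.pi * np.arange(72) / 12), index=idx)

    train, test = y[:-12], y[-12:]
    fit = ExponentialSmoothing(train, trend="add", seasonal="add",
                               seasonal_periods=12).fit()
    pred = fit.forecast(12)

    # the two evaluation criteria used in the study
    mad = np.mean(np.abs(test - pred))
    mape = np.mean(np.abs((test - pred) / test)) * 100
    print(f"MAD = {mad:.0f} TEU, MAPE = {mape:.2f}%")
    ```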

  13. Bessel Fourier orientation reconstruction: an analytical EAP reconstruction using multiple shell acquisitions in diffusion MRI.

    PubMed

    Hosseinbor, Ameer Pasha; Chung, Moo K; Wu, Yu-Chien; Alexander, Andrew L

    2011-01-01

    The estimation of the ensemble average propagator (EAP) directly from q-space DWI signals is an open problem in diffusion MRI. Diffusion spectrum imaging (DSI) is one common technique to compute the EAP directly from the diffusion signal, but it is burdened by the large sampling required. Recently, several analytical EAP reconstruction schemes for multiple q-shell acquisitions have been proposed. One in particular is Diffusion Propagator Imaging (DPI), which is based on a Laplace-equation estimation of the diffusion signal for each shell acquisition. Viewed intuitively in terms of the heat equation, the DPI solution is obtained when the heat distribution between temperature measurements at each shell is at steady state. We propose a generalized extension of DPI, Bessel Fourier Orientation Reconstruction (BFOR), whose solution is based on a heat-equation estimation of the diffusion signal for each shell acquisition. That is, the heat distribution between shell measurements is no longer at steady state. In addition to being analytical, the BFOR solution also includes an intrinsic exponential smoothing term. We illustrate the effectiveness of the proposed method by showing results on both synthetic and real MR datasets.

  14. Time-splitting combined with exponential wave integrator fourier pseudospectral method for Schrödinger-Boussinesq system

    NASA Astrophysics Data System (ADS)

    Liao, Feng; Zhang, Luming; Wang, Shanshan

    2018-02-01

    In this article, we formulate an efficient and accurate numerical method for approximating the coupled Schrödinger-Boussinesq (SBq) system. The main features of our method are: (i) the application of a time-splitting Fourier spectral method to the Schrödinger-like equation in the SBq system, and (ii) the use of an exponential wave integrator Fourier pseudospectral method for the spatial derivatives in the Boussinesq-like equation. The scheme is fully explicit and efficient due to the fast Fourier transform. Numerical examples are presented to show the efficiency and accuracy of our method.

  15. Smoothing optimization of supporting quadratic surfaces with Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Zhang, Hang; Lu, Jiandong; Liu, Rui; Ma, Peifu

    2018-03-01

    A new optimization method to obtain a smooth freeform optical surface from an initial surface generated by the supporting quadratic method (SQM) is proposed. To smooth the initial surface, a 9-vertex system from the neighboring quadratic surfaces and the Zernike polynomials are employed to establish a linear equation system. A locally optimized surface for the 9-vertex system can be built by solving the equations. Finally, a continuous smooth optimized surface is constructed by applying the above algorithm over the whole initial surface. The spot corresponding to the optimized surface is no longer a set of discrete pixels but a continuous distribution.

  16. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. In shales, however, substantial hydrogen content is associated with both solid and fluid signals, and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and to measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method, and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in the materials, medical and food sciences.
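
    The core of the SGE idea, fitting a sum of an exponential and a Gaussian decay to relaxation data, can be sketched as a nonlinear least-squares problem. The two-component model, the parameter values, and the use of scipy.optimize.curve_fit below are illustrative simplifications of the full multi-component inversion, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sge(t, a_exp, t2_exp, a_gauss, t2_gauss):
        """Exponential term for mobile (fluid-like) hydrogen plus a
        Gaussian term for rigid (solid-like) hydrogen."""
        return (a_exp * np.exp(-t / t2_exp)
                + a_gauss * np.exp(-(t / t2_gauss) ** 2))

    t = np.linspace(0.01, 10.0, 500)                    # echo times (ms)
    y = sge(t, 1.0, 3.0, 0.8, 0.05)                     # hypothetical decay
    y += np.random.normal(0, 0.01, t.size)

    popt, _ = curve_fit(sge, t, y, p0=[1.0, 2.0, 1.0, 0.1],
                        bounds=(0.0, np.inf))
    a_exp, t2_exp, a_gauss, t2_gauss = popt
    ```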

  17. Determining the Parameters of Fractional Exponential Hereditary Kernels for Nonlinear Viscoelastic Materials

    NASA Astrophysics Data System (ADS)

    Golub, V. P.; Pavlyuk, Ya. V.; Fernati, P. V.

    2013-03-01

    The parameters of fractional-exponential hereditary kernels for nonlinear viscoelastic materials are determined. Methods for determining the parameters used in the third-order theory of viscoelasticity and in nonlinear theories based on the similarity of primary creep curves and the similarity of isochronous creep curves are analyzed. The parameters of fractional-exponential hereditary kernels are determined and tested against experimental data for microplastic and for TC-8/3-250 and SVAM glass-reinforced plastics. The results (tables and plots) are analyzed.

  18. Adaptive smoothing based on Gaussian processes regression increases the sensitivity and specificity of fMRI data.

    PubMed

    Strappini, Francesca; Gilboa, Elad; Pitzalis, Sabrina; Kay, Kendrick; McAvoy, Mark; Nehorai, Arye; Snyder, Abraham Z

    2017-03-01

    Temporal and spatial filtering of fMRI data is often used to improve statistical power. However, conventional methods, such as smoothing with fixed-width Gaussian filters, remove fine-scale structure in the data, necessitating a tradeoff between sensitivity and specificity. Specifically, smoothing may increase sensitivity (reduce noise and increase statistical power) but at the cost of specificity, in that fine-scale structure in neural activity patterns is lost. Here, we propose an alternative smoothing method based on Gaussian process (GP) regression for single-subject fMRI experiments. This method adapts the level of smoothing on a voxel-by-voxel basis according to the characteristics of the local neural activity patterns. GP-based fMRI analysis has heretofore been impractical owing to computational demands. Here, we demonstrate a new implementation of GP that makes it possible to handle the massive data dimensionality of the typical fMRI experiment. We demonstrate how GP can be used as a drop-in replacement for conventional preprocessing steps for temporal and spatial smoothing in a standard fMRI pipeline. We present simulated and experimental results that show the increased sensitivity and specificity compared to conventional smoothing strategies. Hum Brain Mapp 38:1438-1459, 2017. © 2016 Wiley Periodicals, Inc.
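
    A minimal sketch of GP-regression smoothing applied to a single voxel's time series, using scikit-learn rather than the authors' large-scale implementation; the toy signal, kernel choice, and hyperparameters are assumptions for illustration. Because the kernel length-scale and noise level are learned from the data, the effective amount of smoothing adapts to the signal instead of being fixed a priori.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # toy single-voxel fMRI time series: slow signal plus noise
    t = np.linspace(0, 100, 200)[:, None]       # scan times (s)
    y = np.sin(t[:, 0] / 8.0) + np.random.normal(0, 0.3, t.shape[0])

    kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, y)
    y_smooth = gp.predict(t)                    # denoised time series
    ```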

  19. Fast and accurate fitting and filtering of noisy exponentials in Legendre space.

    PubMed

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters.
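
    The Legendre-domain idea can be sketched with NumPy's Legendre utilities: project the noisy record onto a low-order Legendre basis, then fit or filter there. The decay constant, noise level, and truncation order below are illustrative assumptions, not the authors' settings.

    ```python
    import numpy as np
    from numpy.polynomial import legendre as leg

    # noisy exponential sampled on [0, 1]
    t = np.linspace(0.0, 1.0, 1000)
    y = 2.0 * np.exp(-t / 0.15) + np.random.normal(0, 0.05, t.size)

    x = 2.0 * t - 1.0                 # map the time axis onto [-1, 1]
    deg = 12                          # low-dimensional Legendre space
    coef = leg.legfit(x, y, deg)      # projection onto Legendre polynomials

    # filtering: reconstructing from the truncated coefficient vector
    # removes noise (spread over high orders) without the phase shift
    # of a conventional lowpass filter
    y_filtered = leg.legval(x, coef)
    ```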

  20. Measurement of cellular copper levels in Bacillus megaterium during exponential growth and sporulation.

    PubMed

    Krueger, W B; Kolodziej, B J

    1976-01-01

    Both atomic absorption spectrophotometry (AAS) and neutron activation analysis have been utilized to determine cellular Cu levels in Bacillus megaterium ATCC 19213. Both methods were selected for their sensitivity in detecting nanogram quantities of Cu. Data from both methods demonstrated identical patterns of Cu uptake during exponential growth and sporulation. Late exponential phase cells contained less Cu than postexponential t2 cells, while t5 cells contained amounts equivalent to exponential cells. The t11 phase-bright forespore-containing cells had a higher Cu content than those of earlier time periods, and the free spores had the highest Cu content. Analysis of the culture medium by AAS corroborated these data by showing concomitant Cu uptake during exponential growth and into the t2 postexponential phase of sporulation. From t2 to t4, Cu egressed from the cells, followed by a secondary uptake during the maturation of phase-dark forespores into phase-bright forespores (t6–t9).

  1. Amplitude Scintillation due to Atmospheric Turbulence for the Deep Space Network Ka-Band Downlink

    NASA Technical Reports Server (NTRS)

    Ho, C.; Wheelon, A.

    2004-01-01

    Fast amplitude variations due to atmospheric scintillation are the main concern for the Deep Space Network (DSN) Ka-band downlink under clear weather conditions. A theoretical study of the amplitude scintillation variances for a finite-aperture antenna is presented. Amplitude variances for weak scattering scenarios are examined using turbulence theory to describe atmospheric irregularities. We first apply the Kolmogorov turbulent spectrum to a point receiver for three different turbulent profile models, in particular an exponential model varying with altitude. These analytic solutions are then extended to a receiver with a finite-aperture antenna for the three profile models. Smoothing effects of the antenna aperture are expressed by gain factors. A group of scaling-factor relations is derived to show the dependence of the amplitude variances on signal wavelength, antenna size, and elevation angle. Finally, we use these analytic solutions to estimate the scintillation intensity for a DSN Goldstone 34-m receiving station. We find that the rms amplitude fluctuation is 0.13 dB at a 20-deg elevation angle for an exponential model, while the fluctuation is 0.05 dB at 90 deg. These results will aid in telecommunication system design and signal-fading prediction. They also provide a theoretical basis for further comparison with other measurements at Ka-band.

  2. Scattering of plane evanescent waves by buried cylinders: Modeling the coupling to guided waves and resonances

    NASA Astrophysics Data System (ADS)

    Marston, Philip L.

    2003-04-01

    The coupling of sound to buried targets can be associated with acoustic evanescent waves when the sea bottom is smooth. To understand the excitation of guided waves on buried fluid cylinders and shells by acoustic evanescent waves and the associated target resonances, the two-dimensional partial wave series for the scattering is found for normal incidence in an unbounded medium. The shell formulation uses the simplifications of thin-shell dynamics. The expansion of the incident wave becomes a double summation with products of modified and ordinary Bessel functions [P. L. Marston, J. Acoust. Soc. Am. 111, 2378 (2002)]. Unlike the case of an ordinary incident wave, the counterpropagating partial waves of the same angular order have unequal magnitudes when the incident wave is evanescent. This is a consequence of the exponential dependence of the incident wave amplitude on depth. Some consequences of this imbalance of partial-wave amplitudes are given by modifying previous ray theory for the scattering [P. L. Marston and N. H. Sun, J. Acoust. Soc. Am. 97, 777-783 (1995)]. The exponential dependence of the scattering on the location of a scatterer was previously demonstrated in air [T. J. Matula and P. L. Marston, J. Acoust. Soc. Am. 93, 1192-1195 (1993)].

  3. A new generalized exponential rational function method to find exact special solutions for the resonance nonlinear Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Ghanbari, Behzad; Inc, Mustafa

    2018-04-01

    The present paper suggests a novel technique to acquire exact solutions of nonlinear partial differential equations. The main idea of the method is to generalize the exponential rational function method. In order to examine the ability of the method, we consider the resonant nonlinear Schrödinger equation (R-NLSE). Many variants of exact soliton solutions for the equation are derived by the proposed method. Physical interpretations of some obtained solutions are also included. The new proposed method is efficient and finds the exact solutions of the equation in a relatively straightforward way.

  4. Analytic solutions to modelling exponential and harmonic functions using Chebyshev polynomials: fitting frequency-domain lifetime images with photobleaching.

    PubMed

    Malachowski, George C; Clegg, Robert M; Redford, Glen I

    2007-12-01

    A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.

  5. Bound-preserving modified exponential Runge-Kutta discontinuous Galerkin methods for scalar hyperbolic equations with stiff source terms

    NASA Astrophysics Data System (ADS)

    Huang, Juntao; Shu, Chi-Wang

    2018-05-01

    In this paper, we develop bound-preserving modified exponential Runge-Kutta (RK) discontinuous Galerkin (DG) schemes to solve scalar hyperbolic equations with stiff source terms by extending the idea in Zhang and Shu [43]. Exponential strong stability preserving (SSP) high order time discretizations are constructed and then modified to overcome the stiffness and preserve the bound of the numerical solutions. It is also straightforward to extend the method to two dimensions on rectangular and triangular meshes. Even though we only discuss the bound-preserving limiter for DG schemes, it can also be applied to high order finite volume schemes, such as weighted essentially non-oscillatory (WENO) finite volume schemes as well.

  6. Syndromic Surveillance Using Veterinary Laboratory Data: Algorithm Combination and Customization of Alerts

    PubMed Central

    Dórea, Fernanda C.; McEwen, Beverly J.; McNab, W. Bruce; Sanchez, Javier; Revie, Crawford W.

    2013-01-01

    Background: Syndromic surveillance research has focused on two main themes: the search for data sources that can provide early disease detection, and the development of efficient algorithms that can detect potential outbreak signals. Methods: This work combines three algorithms that have demonstrated solid performance in detecting simulated outbreak signals of varying shapes in time series of laboratory submission counts. These are: the Shewhart control charts, designed to detect sudden spikes in counts; the EWMA control charts, developed to detect slowly increasing outbreaks; and Holt-Winters exponential smoothing, which can explicitly account for temporal effects in the monitored data stream. A scoring system to detect and report alarms using these algorithms in a complementary way is proposed. Results: The use of multiple algorithms in parallel resulted in increased system sensitivity. Specificity was decreased on simulated data, but the number of false alarms per year when the approach was applied to real data was considered manageable (between 1 and 3 per year for each of ten syndromic groups monitored). The automated implementation of this approach, including a method for on-line filtering of potential outbreak signals, is described. Conclusion: The developed system provides high sensitivity for detection of potential outbreak signals while also providing robustness and flexibility in establishing what signals constitute an alarm. This flexibility allows an analyst to customize the system for different syndromes. PMID:24349216
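
    A toy Python version of two of the three detectors and the complementary scoring idea; the thresholds, baseline estimation, and injected outbreak are simplified assumptions, not the published system.

    ```python
    import numpy as np

    def shewhart_alarm(x, k=3.0):
        """Shewhart chart: flag sudden spikes above mean + k*sigma."""
        mu, sd = x.mean(), x.std(ddof=1)
        return x > mu + k * sd

    def ewma_alarm(x, lam=0.2, k=3.0):
        """EWMA chart: exponentially weighted moving average compared
        against its asymptotic upper control limit (catches slow rises)."""
        mu, sd = x.mean(), x.std(ddof=1)
        z = np.empty_like(x, dtype=float)
        z[0] = mu
        for i in range(1, len(x)):
            z[i] = lam * x[i] + (1 - lam) * z[i - 1]
        return z > mu + k * sd * np.sqrt(lam / (2 - lam))

    counts = np.random.poisson(20, 365).astype(float)   # daily submissions
    counts[300:305] += 25                               # injected outbreak

    # complementary scoring: sum the alarms raised by each chart
    score = shewhart_alarm(counts).astype(int) + ewma_alarm(counts).astype(int)
    alarm_days = np.where(score >= 1)[0]
    ```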

  7. Rigid shape matching by segmentation averaging.

    PubMed

    Wang, Hongzhi; Oliensis, John

    2010-04-01

    We use segmentations to match images by shape. The new matching technique does not require point-to-point edge correspondence and is robust to small shape variations and spatial shifts. To address the unreliability of segmentations computed bottom-up, we give a closed form approximation to an average over all segmentations. Our method has many extensions, yielding new algorithms for tracking, object detection, segmentation, and edge-preserving smoothing. For segmentation, instead of a maximum a posteriori approach, we compute the "central" segmentation minimizing the average distance to all segmentations of an image. For smoothing, instead of smoothing images based on local structures, we smooth based on the global optimal image structures. Our methods for segmentation, smoothing, and object detection perform competitively, and we also show promising results in shape-based tracking.

  8. A comparative assessment of preclinical chemotherapeutic response of tumors using quantitative non-Gaussian diffusion MRI

    PubMed Central

    Xu, Junzhong; Li, Ke; Smith, R. Adam; Waterton, John C.; Zhao, Ping; Ding, Zhaohua; Does, Mark D.; Manning, H. Charles; Gore, John C.

    2016-01-01

    Background: Diffusion-weighted MRI (DWI) signal attenuation is often not mono-exponential (i.e. non-Gaussian diffusion) at stronger diffusion weighting. Several non-Gaussian diffusion models have been developed and may provide new information or higher sensitivity compared with the conventional apparent diffusion coefficient (ADC) method. However, the relative merits of these models for detecting tumor therapeutic response are not fully clear. Methods: The conventional ADC and three widely used non-Gaussian models (bi-exponential, stretched exponential, and the statistical model) were implemented and compared for assessing SW620 human colon cancer xenografts responding to barasertib, an agent known to induce apoptosis via polyploidy. The Bayesian Information Criterion (BIC) was used for model selection among the three non-Gaussian models. Results: Tumor volume, histology, the conventional ADC, and all three non-Gaussian DWI models showed significant differences between control and treatment groups after four days of treatment. However, only the non-Gaussian models detected significant changes after two days of treatment. For every treatment or control group, over 65.7% of tumor voxels indicated that the bi-exponential model was strongly or very strongly preferred. Conclusion: Non-Gaussian DWI model-derived biomarkers can detect the chemotherapeutic response of tumors earlier than the conventional ADC and tumor volume. The bi-exponential model provides better fitting than the statistical and stretched exponential models for the tumors and treatments used in the current work. PMID:27919785

  9. Modeling of magnitude distributions by the generalized truncated exponential distribution

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-01-01

    The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, the resulting distribution is not a TED. Conversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a fundamental problem, as seismic regions are constructed scientific objects and not natural units. We overcome it by generalizing the abovementioned exponential distributions: the generalized truncated exponential distribution (GTED). Therein, identical exponential distributions are mixed according to the probability distribution of the correct cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement of the GTED over the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.

  10. Alternative Attitude Commanding and Control for Precise Spacecraft Landing

    NASA Technical Reports Server (NTRS)

    Singh, Gurkirpal

    2004-01-01

    A report proposes an alternative method of control for precision landing on a remote planet. In the traditional method, the attitude of a spacecraft is required to track a commanded translational acceleration vector, which is generated at each time step by solving a two-point boundary value problem. No requirement of continuity is imposed on the acceleration. The translational acceleration does not necessarily vary smoothly. Tracking of a non-smooth acceleration causes the vehicle attitude to exhibit undesirable transients and poor pointing stability behavior. In the alternative method, the two-point boundary value problem is not solved at each time step. A smooth reference position profile is computed. The profile is recomputed only when the control errors get sufficiently large. The nominal attitude is still required to track the smooth reference acceleration command. A steering logic is proposed that controls the position and velocity errors about the reference profile by perturbing the attitude slightly about the nominal attitude. The overall pointing behavior is therefore smooth, greatly reducing the degree of pointing instability.

  11. Mutually beneficial relationship in optimization between search-space smoothing and stochastic search

    NASA Astrophysics Data System (ADS)

    Hasegawa, Manabu; Hiramatsu, Kotaro

    2013-10-01

    The effectiveness of the Metropolis algorithm (MA) (constant-temperature simulated annealing) in optimization by the method of search-space smoothing (SSS) (potential smoothing) is studied on two types of random traveling salesman problems. The optimization mechanism of this hybrid approach (MASSS) is investigated by analyzing the exploration dynamics observed in the rugged landscape of the cost function (energy surface). The results show that the MA can be successfully utilized as a local search algorithm in the SSS approach. It is also clarified that the optimization characteristics of these two constituent methods are improved in a mutually beneficial manner in the MASSS run. Specifically, the relaxation dynamics generated by employing the MA work effectively even in a smoothed landscape and more advantage is taken of the guiding function proposed in the idea of SSS; this mechanism operates in an adaptive manner in the de-smoothing process and therefore the MASSS method maintains its optimization function over a wider temperature range than the MA.

  12. Multigrid methods for isogeometric discretization

    PubMed Central

    Gahalaut, K.P.S.; Kraus, J.K.; Tomar, S.K.

    2013-01-01

    We present (geometric) multigrid methods for isogeometric discretizations of scalar second-order elliptic problems. The smoothing property of the relaxation method and the approximation property of the intergrid transfer operators are analyzed. These properties, when used in the framework of classical multigrid theory, imply uniform convergence of two-grid and multigrid methods. Supporting numerical results are provided for the smoothing property, the approximation property, the convergence factor and iteration counts for V-, W- and F-cycles, and the linear dependence of V-cycle convergence on the smoothing steps. For two dimensions, the numerical results include problems with variable coefficients, simple multi-patch geometry, a quarter annulus, and the dependence of convergence behavior on refinement levels ℓ, whereas for three dimensions, only the constant-coefficient problem in a unit cube is considered. The numerical results are complete up to polynomial order p = 4, and for C^0 and C^(p-1) smoothness. PMID:24511168

  13. Exponential Methods for the Time Integration of Schroedinger Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cano, B.; Gonzalez-Pachon, A.

    2010-09-30

    We consider exponential methods of second order in time in order to integrate the cubic nonlinear Schroedinger equation. We are interested in taking advantage of the special structure of this equation. We therefore examine the symmetry, symplecticity and approximation of invariants of the proposed methods, which allows integration to long times with reasonable accuracy. Computational efficiency is also our aim. We therefore perform numerical computations in order to compare the methods considered, and conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool to integrate this equation.

  14. Fourth order exponential time differencing method with local discontinuous Galerkin approximation for coupled nonlinear Schrodinger equations

    DOE PAGES

    Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong

    2015-01-23

    In this paper, we study a local discontinuous Galerkin method combined with fourth-order exponential time differencing Runge-Kutta time discretization and a fourth-order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and we prove error estimates for the semi-discrete methods applied to the linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.

  15. Smoothness of In vivo Spectral Baseline Determined by Mean Squared Error

    PubMed Central

    Zhang, Yan; Shen, Jun

    2013-01-01

    Purpose: A nonparametric smooth line is usually added to the spectral model to account for background signals in in vivo magnetic resonance spectroscopy (MRS). The assumed smoothness of the baseline significantly influences quantitative spectral fitting. In this paper, a method is proposed to minimize baseline influences on estimated spectral parameters. Methods: The non-parametric baseline function with a given smoothness was treated as a function of the spectral parameters. Its uncertainty was measured by the root-mean-squared error (RMSE). The proposed method was demonstrated on a simulated spectrum and on in vivo spectra of both short echo time (TE) and averaged echo times. The estimated in vivo baselines were compared with metabolite-nulled spectra and with LCModel-estimated baselines. The accuracies of the estimated baseline and metabolite concentrations were further verified by cross-validation. Results: An optimal smoothness condition was found that led to the minimal baseline RMSE. Under this condition, the best fit was balanced against minimal baseline influences on metabolite concentration estimates. Conclusion: Baseline RMSE can be used to indicate estimated baseline uncertainties and serve as the criterion for determining the baseline smoothness of in vivo MRS. PMID:24259436

  16. Predicting Academic Library Circulations: A Forecasting Methods Competition.

    ERIC Educational Resources Information Center

    Brooks, Terrence A.; Forys, John W., Jr.

    Based on sample data representing five years of monthly circulation totals from 50 academic libraries in Illinois, Iowa, Michigan, Minnesota, Missouri, and Ohio, a study was conducted to determine the most efficient smoothing forecasting methods for academic libraries. Smoothing forecasting methods were chosen because they have been characterized…

  17. A method for smoothing segmented lung boundary in chest CT images

    NASA Astrophysics Data System (ADS)

    Yim, Yeny; Hong, Helen

    2007-03-01

    To segment low-density lung regions in chest CT images, most methods use the difference in gray-level values of pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT images, we propose a new method using a scan line search. Our method consists of three main steps. First, the lung boundary is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice. We propose a scan line search to track the points on the lung contour and efficiently find rapidly changing curvature. Finally, to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied within a pre-defined subvolume. Our method has been evaluated in terms of visual inspection, accuracy and processing time. The results show that the smoothness of the lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.

  18. Characterization of radiation belt electron energy spectra from CRRES observations

    NASA Astrophysics Data System (ADS)

    Johnston, W. R.; Lindstrom, C. D.; Ginet, G. P.

    2010-12-01

    Energetic electrons in the outer radiation belt and the slot region exhibit a wide variety of energy spectral forms, more so than radiation belt protons. We characterize the spatial and temporal dependence of these forms using observations from the CRRES satellite Medium Electron Sensor A (MEA) and High-Energy Electron Fluxmeter (HEEF) instruments, together covering an energy range 0.15-8 MeV. Spectra were classified with two independent methods, data clustering and curve-fitting analyses, in each case defining categories represented by power law, exponential, and bump-on-tail (BOT) or other complex shapes. Both methods yielded similar results, with BOT, exponential, and power law spectra respectively dominating in the slot region, outer belt, and regions just beyond the outer belt. The transition from exponential to power law spectra occurs at higher L for lower magnetic latitude. The location of the transition from exponential to BOT spectra is highly correlated with the location of the plasmapause. In the slot region during the days following storm events, electron spectra were observed to evolve from exponential to BOT yielding differential flux minima at 350-650 keV and maxima at 1.5-2 MeV; such evolution has been attributed to energy-dependent losses from scattering by whistler hiss.

  19. Adjusting for overdispersion in piecewise exponential regression models to estimate excess mortality rate in population-based research.

    PubMed

    Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard

    2016-10-01

    In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x_i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion, including quasi-likelihood, robust standard error estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value < 0.001). However, the flexible piecewise exponential model showed the smallest overdispersion parameter (3.2, versus 21.3 for the non-flexible piecewise exponential models). We showed that there were no major differences between the methods. However, flexible piecewise regression modelling, with either quasi-likelihood or robust standard errors, was the best approach, as it deals with both overdispersion due to model misspecification and true (inherent) overdispersion.
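
    The test-and-correct workflow can be sketched with statsmodels: fit a Poisson rate model with a person-time offset, inspect the Pearson dispersion statistic, and refit with a negative binomial family if it is well above 1. The data frame, variable names, and the dispersion rule of thumb are illustrative assumptions, not the study's data or its exact score test.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # hypothetical grouped survival data: deaths and person-years at risk
    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "deaths": rng.negative_binomial(5, 0.5, 200),
        "agegrp": rng.choice(["50-59", "60-69", "70+"], 200),
        "pyears": rng.uniform(50, 500, 200),
    })

    pois = smf.glm("deaths ~ agegrp", data=df,
                   offset=np.log(df["pyears"]),
                   family=sm.families.Poisson()).fit()

    phi = pois.pearson_chi2 / pois.df_resid     # dispersion; >> 1 is suspect
    print(f"dispersion = {phi:.2f}")

    # one possible correction: a negative binomial family
    nb = smf.glm("deaths ~ agegrp", data=df,
                 offset=np.log(df["pyears"]),
                 family=sm.families.NegativeBinomial()).fit()
    ```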

  20. Introduction to multigrid methods

    NASA Technical Reports Server (NTRS)

    Wesseling, P.

    1995-01-01

    These notes were written for an introductory course on the application of multigrid methods to elliptic and hyperbolic partial differential equations for engineers, physicists and applied mathematicians. The use of more advanced mathematical tools, such as functional analysis, is avoided. The course is intended to be accessible to a wide audience of users of computational methods. We restrict ourselves to finite volume and finite difference discretization. The basic principles are given. Smoothing methods and Fourier smoothing analysis are reviewed. The fundamental multigrid algorithm is studied. The smoothing and coarse grid approximation properties are discussed. Multigrid schedules and structured programming of multigrid algorithms are treated. Robustness and efficiency are considered.

  1. Smoothing and Equating Methods Applied to Different Types of Test Score Distributions and Evaluated with Respect to Multiple Equating Criteria. Research Report. ETS RR-11-20

    ERIC Educational Resources Information Center

    Moses, Tim; Liu, Jinghua

    2011-01-01

    In equating research and practice, equating functions that are smooth are typically assumed to be more accurate than equating functions with irregularities. This assumption presumes that population test score distributions are relatively smooth. In this study, two examples were used to reconsider common beliefs about smoothing and equating. The…

  2. A method for reducing sampling jitter in digital control systems

    NASA Technical Reports Server (NTRS)

    Anderson, T. O.; Hurd, W. J.

    1969-01-01

    A digital phase-locked-loop system is designed by smoothing the proportional control with a low-pass filter. This method does not significantly affect the loop dynamics when the smoothing-filter bandwidth is wide compared to the loop bandwidth.

  3. Effect of local minima on adiabatic quantum optimization.

    PubMed

    Amin, M H S

    2008-04-04

    We present a perturbative method to estimate the spectral gap for adiabatic quantum optimization, based on the structure of the energy levels in the problem Hamiltonian. We show that, for problems that have an exponentially large number of local minima close to the global minimum, the gap becomes exponentially small making the computation time exponentially long. The quantum advantage of adiabatic quantum computation may then be accessed only via the local adiabatic evolution, which requires phase coherence throughout the evolution and knowledge of the spectrum. Such problems, therefore, are not suitable for adiabatic quantum computation.

  4. Line transect estimation of population size: the exponential case with grouped data

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Crain, B.R.

    1979-01-01

    Gates, Marshall, and Olson (1968) investigated the line transect method of estimating grouse population densities in the case where sighting probabilities are exponential. This work is followed by a simulation study in Gates (1969). A general overview of line transect analysis is presented by Burnham and Anderson (1976). These articles all deal with the ungrouped data case. In the present article, an analysis of line transect data is formulated under the Gates framework of exponential sighting probabilities and in the context of grouped data.
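
    A sketch of maximum likelihood estimation for the grouped-data exponential case: with sighting distances recorded only as counts per distance interval, the exponential rate is found by maximizing a multinomial likelihood over the interval probabilities. The interval edges, counts, and the conditioning on a finite last edge are hypothetical simplifications, not the paper's formulation.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # grouped perpendicular sighting distances: bin edges and counts
    edges = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 60.0])   # meters
    counts = np.array([34, 21, 14, 8, 5])

    def negloglik(lam):
        # P(edge_i < X <= edge_{i+1}) under an exponential sighting curve,
        # conditioned on X falling below the last edge
        p = np.exp(-lam * edges[:-1]) - np.exp(-lam * edges[1:])
        p /= 1.0 - np.exp(-lam * edges[-1])
        return -np.sum(counts * np.log(p))

    res = minimize_scalar(negloglik, bounds=(1e-4, 1.0), method="bounded")
    lam_hat = res.x          # MLE of the exponential rate parameter
    ```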

  5. A method for nonlinear exponential regression analysis

    NASA Technical Reports Server (NTRS)

    Junkin, B. G.

    1971-01-01

    A computer-oriented technique is presented for performing a nonlinear exponential regression analysis on decay-type experimental data. The technique involves the least squares procedure wherein the nonlinear problem is linearized by expansion in a Taylor series. A linear curve fitting procedure for determining the initial nominal estimates for the unknown exponential model parameters is included as an integral part of the technique. A correction matrix was derived and then applied to the nominal estimate to produce an improved set of model parameters. The solution cycle is repeated until some predetermined criterion is satisfied.
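
    The procedure described (linearize the model via a Taylor expansion, solve for a correction, and iterate until a criterion is met) is the classic Gauss-Newton scheme. A minimal sketch for a single decaying exponential follows; the model form and starting values are illustrative, not the report's implementation.

    ```python
    import numpy as np

    def gauss_newton_exp(t, y, p0, iters=50, tol=1e-10):
        """Fit y ~ a * exp(-b t) by repeatedly linearizing the model in a
        Taylor series about the current estimate and solving a linear
        least-squares problem for a correction to the parameters."""
        a, b = p0
        for _ in range(iters):
            f = a * np.exp(-b * t)
            J = np.column_stack([np.exp(-b * t),            # df/da
                                 -a * t * np.exp(-b * t)])  # df/db
            delta, *_ = np.linalg.lstsq(J, y - f, rcond=None)
            a, b = a + delta[0], b + delta[1]
            if np.linalg.norm(delta) < tol:                 # stopping criterion
                break
        return a, b

    t = np.linspace(0, 5, 100)
    y = 3.0 * np.exp(-1.2 * t) + np.random.normal(0, 0.02, t.size)
    a_hat, b_hat = gauss_newton_exp(t, y, p0=(1.0, 0.5))
    ```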

  6. Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model

    NASA Astrophysics Data System (ADS)

    Al Sobhi, Mashail M.

    2015-02-01

    Bayesian estimates for the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Also, Bayesian prediction bounds for future DGOS from the exponentiated Weibull model are obtained. The symmetric and asymmetric loss functions are considered for the Bayesian computations. Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.

  7. Theory of many-body localization in periodically driven systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abanin, Dmitry A., E-mail: dabanin@gmail.com; De Roeck, Wojciech; Huveneers, François

    We present a theory of periodically driven, many-body localized (MBL) systems. We argue that MBL persists under periodic driving at high enough driving frequency: the Floquet operator (evolution operator over one driving period) can be represented as an exponential of an effective time-independent Hamiltonian, which is a sum of quasi-local terms and is itself fully MBL. We derive this result by constructing a sequence of canonical transformations to remove the time-dependence from the original Hamiltonian. When the driving evolves smoothly in time, the theory can be sharpened by estimating the probability of adiabatic Landau–Zener transitions at many-body level crossings. In all cases, we argue that there is delocalization at sufficiently low frequency. We propose a phase diagram of driven MBL systems.

  8. Self-excitation of a nonlinear scalar field in a random medium

    PubMed Central

    Zeldovich, Ya. B.; Molchanov, S. A.; Ruzmaikin, A. A.; Sokoloff, D. D.

    1987-01-01

    We discuss the evolution in time of a scalar field under the influence of a random potential and diffusion. The cases of a short-correlation in time and of stationary potentials are considered. In a linear approximation and for sufficiently weak diffusion, the statistical moments of the field grow exponentially in time at growth rates that progressively increase with the order of the moment; this indicates the intermittent nature of the field. Nonlinearity halts this growth and in some cases can destroy the intermittency. However, in many nonlinear situations the intermittency is preserved: high, persistent peaks of the field exist against the background of a smooth field distribution. These widely spaced peaks may make a major contribution to the average characteristics of the field. PMID:16593872

  9. Three-dimensional Structure of the Milky Way Dust: Modeling of LAMOST Data

    NASA Astrophysics Data System (ADS)

    Li, Linlin; Shen, Shiyin; Hou, Jinliang; Yuan, Haibo; Xiang, Maosheng; Chen, Bingqiu; Huang, Yang; Liu, Xiaowei

    2018-05-01

    We present a three-dimensional modeling of the Milky Way dust distribution by fitting the value-added star catalog of the LAMOST spectral survey. The global dust distribution can be described by an exponential disk with a scale length of 3192 pc and a scale height of 103 pc. In this modeling, the Sun is located above the dust disk with a vertical distance of 23 pc. Besides the global smooth structure, two substructures around the solar position are also identified. The one located at 150° < l < 200° and ‑5° < b < ‑30° is consistent with the Gould Belt model of Gontcharov, and the other one located at 140° < l < 165° and 0° < b < 15° is associated with the Camelopardalis molecular clouds.

  10. VizieR Online Data Catalog: HARPS timeseries data for HD41248 (Jenkins+, 2014)

    NASA Astrophysics Data System (ADS)

    Jenkins, J. S.; Tuomi, M.

    2017-05-01

    We modeled the HARPS radial velocities of HD 41248 by adopting the analysis techniques and the statistical model applied in Tuomi et al. (2014, arXiv:1405.2016). This model contains Keplerian signals, a linear trend, a moving average component with exponential smoothing, and linear correlations with activity indices, namely BIS, FWHM, and the chromospheric activity S index. We applied our statistical model outlined above to the full data set of radial velocities for HD 41248, combining the previously published data in Jenkins et al. (2013ApJ...771...41J) with the newly published data in Santos et al. (2014, J/A+A/566/A35), giving rise to a total time series of 223 HARPS (Mayor et al. 2003Msngr.114...20M) velocities. (1 data file).

  11. A new formation control of multiple underactuated surface vessels

    NASA Astrophysics Data System (ADS)

    Xie, Wenjing; Ma, Baoli; Fernando, Tyrone; Iu, Herbert Ho-Ching

    2018-05-01

    This work investigates a new formation control problem for multiple underactuated surface vessels. The controller design is based on the input-output linearisation technique, graph theory, the consensus idea and some nonlinear tools. The proposed smooth time-varying distributed control law guarantees that the multiple underactuated surface vessels globally exponentially converge to a desired geometric shape, centred in particular at the initial average position of the vessels. Furthermore, the stability analysis of the zero dynamics proves that the orientations of the vessels tend to constants that depend on the initial values, and that the velocities and control inputs of the vessels decay to zero. All the results are obtained under the communication scenario of a static directed balanced graph with a spanning tree. The effectiveness of the proposed distributed control scheme is demonstrated using a simulation example.

  12. Radar correlated imaging for extended target by the combination of negative exponential restraint and total variation

    NASA Astrophysics Data System (ADS)

    Qian, Tingting; Wang, Lianlian; Lu, Guanghua

    2017-07-01

    Radar correlated imaging (RCI) introduces optical correlated imaging technology into traditional microwave imaging, and has recently attracted widespread attention. Conventional RCI methods neglect the structural information of complex extended targets, which limits the quality of the recovered image; thus a novel combination of negative exponential restraint and total variation (NER-TV) algorithm for extended target imaging is proposed in this paper. Sparsity is measured by a sequential order-one negative exponential function, and the 2D total variation technique is then introduced to formulate a novel optimization problem for extended target imaging. The proven alternating direction method of multipliers is applied to solve the new problem. Experimental results show that the proposed algorithm efficiently achieves high-resolution imaging of extended targets.

  13. Circulating smooth muscle progenitor cells in atherosclerosis and plaque rupture: current perspective and methods of analysis.

    PubMed

    Bentzon, Jacob F; Falk, Erling

    2010-01-01

    Smooth muscle cells play a critical role in the development of atherosclerosis and its clinical complications. They were long thought to derive entirely from preexisting smooth muscle cells in the arterial wall, but this understanding has been challenged by the claim that circulating bone marrow-derived smooth muscle progenitor cells are an important source of plaque smooth muscle cells in human and experimental atherosclerosis. This theory is today accepted by many cardiovascular researchers and authors of contemporary review articles. Recently, however, we and others have refuted the existence of bone marrow-derived smooth muscle cells in animal models of atherosclerosis and other arterial diseases based on new experiments with high-resolution microscopy and improved techniques for smooth muscle cell identification and tracking. These studies have also pointed to a number of methodological deficiencies in some of the seminal papers in the field. For those unaccustomed with the methods used in this research area, it must be difficult to decide what to believe and why to do so. In this review, we summarize current knowledge about the origin of smooth muscle cells in atherosclerosis and direct the reader's attention to the methodological challenges that have contributed to the confusion in the field. 2009 Elsevier Inc. All rights reserved.

  14. A method of smoothed particle hydrodynamics using spheroidal kernels

    NASA Technical Reports Server (NTRS)

    Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.

    1995-01-01

    We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.

  15. How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods

    PubMed Central

    Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José

    2015-01-01

    The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared. PMID:26413547

  16. How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods.

    PubMed

    Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José

    2015-01-01

    The use of morphometrical tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices and photomicrographs captured for analysis. The smooth muscle surface density was measured in both groups by two different observers by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by either method should not be compared.

  17. Remarks on the general solution for the flat Friedmann universe with exponential scalar-field potential and dust

    NASA Astrophysics Data System (ADS)

    Andrianov, A. A.; Cannata, F.; Kamenshchik, A. Yu.

    2012-11-01

    We show that the simple extension of the method of obtaining the general exact solution for the cosmological model with an exponential scalar-field potential to the case when dust is present fails, and we discuss the reasons for this puzzling phenomenon.

  18. Fast and Accurate Fitting and Filtering of Noisy Exponentials in Legendre Space

    PubMed Central

    Bao, Guobin; Schild, Detlev

    2014-01-01

    The parameters of experimentally obtained exponentials are usually found by least-squares fitting methods. Essentially, this is done by minimizing the mean squares sum of the differences between the data, most often a function of time, and a parameter-defined model function. Here we delineate a novel method where the noisy data are represented and analyzed in the space of Legendre polynomials. This is advantageous in several respects. First, parameter retrieval in the Legendre domain is typically two orders of magnitude faster than direct fitting in the time domain. Second, data fitting in a low-dimensional Legendre space yields estimates for amplitudes and time constants which are, on the average, more precise compared to least-squares-fitting with equal weights in the time domain. Third, the Legendre analysis of two exponentials gives satisfactory estimates in parameter ranges where least-squares-fitting in the time domain typically fails. Finally, filtering exponentials in the domain of Legendre polynomials leads to marked noise removal without the phase shift characteristic for conventional lowpass filters. PMID:24603904

  19. Recognition of Similar Shaped Handwritten Marathi Characters Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Jane, Archana P.; Pund, Mukesh A.

    2012-03-01

    The growing need for handwritten Marathi character recognition in Indian offices such as passport and railway offices has made it a vital area of research. Similar-shaped characters are more prone to misclassification. In this paper a novel method is provided to recognize handwritten Marathi characters based on feature extraction and an adaptive smoothing technique. Feature selection methods avoid unnecessary patterns in an image, whereas the adaptive smoothing technique forms smooth shapes of characters. The combination of both approaches leads to better results. Previous studies show that no single technique achieves 100% accuracy in the handwritten character recognition area. This approach of combining adaptive smoothing and feature extraction gives better results (approximately 75-100%) and the expected outcomes.

  20. Mean Excess Function as a method of identifying sub-exponential tails: Application to extreme daily rainfall

    NASA Astrophysics Data System (ADS)

    Nerantzaki, Sofia; Papalexiou, Simon Michael

    2017-04-01

    Identifying precisely the distribution tail of a geophysical variable is tough, or even impossible. First, the tail is the part of the distribution for which the least empirical information is available; second, a universally accepted definition of tail does not and cannot exist; and third, a tail may change over time due to long-term changes. Unfortunately, the tail is the most important part of the distribution, as it dictates the estimates of exceedance probabilities or return periods. Fortunately, based on their tail behavior, probability distributions can be generally categorized into two major families, i.e., sub-exponentials (heavy-tailed) and hyper-exponentials (light-tailed). This study aims to update the Mean Excess Function (MEF), providing a useful tool to assess which type of tail better describes empirical data. The MEF is based on the mean value of a variable over a threshold and results in a zero-slope regression line when applied to the Exponential distribution. Here, we construct slope confidence intervals for the Exponential distribution as functions of sample size. The validation of the method using Monte Carlo techniques on four theoretical distributions covering major tail cases (Pareto type II, Log-normal, Weibull and Gamma) revealed that it performs well, especially for large samples. Finally, the method is used to investigate the behavior of daily rainfall extremes; thousands of rainfall records were examined, from all over the world and with sample size over 100 years, revealing that heavy-tailed distributions describe rainfall extremes more accurately.
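
    A toy numerical version of the MEF diagnostic, under assumed sample sizes and thresholds (the study's confidence-interval construction is not reproduced), looks as follows; the regression slope is near zero for exponential data and positive for a heavy-tailed Pareto sample.

        import numpy as np

        def mean_excess(sample, thresholds):
            # e(u) = E[X - u | X > u], estimated for each threshold u
            return np.array([sample[sample > u].mean() - u for u in thresholds])

        rng = np.random.default_rng(0)
        x_light = rng.exponential(scale=10.0, size=20000)       # light (exponential) tail
        x_heavy = rng.pareto(2.5, size=20000) * 10.0            # sub-exponential tail

        for name, x in [("exponential", x_light), ("Pareto", x_heavy)]:
            u = np.quantile(x, np.linspace(0.5, 0.98, 25))      # thresholds
            slope = np.polyfit(u, mean_excess(x, u), 1)[0]
            print(name, "MEF slope ~", round(slope, 3))         # ~0 vs clearly > 0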

  1. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2013-11-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are usually considered alternatives to each other, because the solution of the reaction-diffusion equation is generally a continuous smooth function that has an exponential decay and an infinite support, while the level-set method, which is a front-tracking technique, generates a sharp function with a finite support. However, these two approaches can indeed be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena with a random character that are extremely important in wildland fire propagation. As a consequence, the fire front acquires a random character, too, and a tracking method for random fronts is needed. In particular, the level-set contour is here randomized according to the probability density function of the interface particle displacement. When the level-set method is developed for tracking a front interface with a random motion, the resulting averaged process turns out to be governed by an evolution equation of the reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key and characterizing role proper to the level-set approach. The resulting model is suitable for simulating effects due to turbulent convection, such as fire flank and backing fire, the faster fire spread caused by hot-air pre-heating and ember landing, and also the fire overcoming a firebreak zone, a case not resolved by models based on the level-set method alone. Moreover, the proposed formulation yields a correction to the rate-of-spread formula due to the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour.

  2. A Comparison of Methods for Nonparametric Estimation of Item Characteristic Curves for Binary Items

    ERIC Educational Resources Information Center

    Lee, Young-Sun

    2007-01-01

    This study compares the performance of three nonparametric item characteristic curve (ICC) estimation procedures: isotonic regression, smoothed isotonic regression, and kernel smoothing. Smoothed isotonic regression, employed along with an appropriate kernel function, provides better estimates and also satisfies the assumption of strict…

  3. Recursive least squares estimation and its application to shallow trench isolation

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Qin, S. Joe; Bode, Christopher A.; Purdy, Matthew A.

    2003-06-01

    In recent years, run-to-run (R2R) control technology has received tremendous interest in semiconductor manufacturing. One class of widely used run-to-run controllers is based on the exponentially weighted moving average (EWMA) statistic to estimate process deviations. Using an EWMA filter to smooth the control action on a linear process has been shown to provide good results in a number of applications. However, for a process with severe drifts, the EWMA controller is insufficient even when large weights are used. This problem becomes more severe when there is measurement delay, which is almost inevitable in the semiconductor industry. In order to control drifting processes, a predictor-corrector controller (PCC) and a double-EWMA controller have been developed. Chen and Guo (2001) show that both PCC and the double-EWMA controller are in effect Integral-double-Integral (I-II) controllers, which are able to control drifting processes. However, since the offset is often within the noise of the process, the second integrator can actually cause jittering; besides, tuning the second filter is not as intuitive as tuning a single EWMA filter. In this work, we look at an alternative method, recursive least squares (RLS), to estimate and control the drifting process. EWMA and double-EWMA are shown to be the least squares estimates for the locally constant mean model and the locally constant linear trend model, respectively. Recursive least squares with an exponential forgetting factor is then applied to a shallow trench isolation etch process to predict the future etch rate. The etch process, which is a critical process in flash memory manufacturing, is known to suffer from significant etch rate drift due to chamber seasoning. In order to handle the metrology delay, we propose a new time update scheme. RLS with the new time update method gives very good results: the estimation error variance is smaller than that from EWMA, and the mean square error decreases by more than 10% compared with EWMA.
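
    For intuition, the sketch below contrasts a plain EWMA estimate with RLS using an exponential forgetting factor on a synthetic drifting etch rate; all constants are invented for illustration, and the paper's metrology-delay handling is not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        rate = 5.0 + 0.02 * np.arange(n) + 0.1 * rng.standard_normal(n)  # drifting etch rate

        # EWMA: locally-constant-mean estimate
        lam, ewma = 0.3, np.zeros(n)
        ewma[0] = rate[0]
        for k in range(1, n):
            ewma[k] = lam * rate[k] + (1 - lam) * ewma[k - 1]

        # RLS with forgetting factor ff on a local linear trend y = a + b*k
        ff = 0.95
        theta = np.zeros(2)                     # [a, b]
        P = np.eye(2) * 1e3
        rls = np.zeros(n)
        for k in range(n):
            phi = np.array([1.0, float(k)])
            gain = P @ phi / (ff + phi @ P @ phi)
            theta = theta + gain * (rate[k] - phi @ theta)
            P = (P - np.outer(gain, phi @ P)) / ff
            rls[k] = phi @ theta                # filtered estimate of the mean

        print("EWMA MSE:", np.mean((ewma - rate) ** 2))
        print("RLS  MSE:", np.mean((rls - rate) ** 2))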

  4. Liver fibrosis: stretched exponential model outperforms mono-exponential and bi-exponential models of diffusion-weighted MRI.

    PubMed

    Seo, Nieun; Chung, Yong Eun; Park, Yung Nyun; Kim, Eunju; Hwang, Jinwoo; Kim, Myeong-Jin

    2018-07-01

    To compare the ability of diffusion-weighted imaging (DWI) parameters acquired from three different models for the diagnosis of hepatic fibrosis (HF). Ninety-five patients underwent DWI using nine b values at 3 T. The hepatic apparent diffusion coefficient (ADC) from a mono-exponential model, the true diffusion coefficient (Dt), pseudo-diffusion coefficient (Dp) and perfusion fraction (f) from a bi-exponential model, and the distributed diffusion coefficient (DDC) and intravoxel heterogeneity index (α) from a stretched exponential model were compared with the pathological HF stage. For the stretched exponential model, parameters were also obtained using a dataset of six b values (DDC#, α#). The diagnostic performances of the parameters for HF staging were evaluated with Obuchowski measures and receiver operating characteristic (ROC) analysis. The measurement variability of DWI parameters was evaluated using the coefficient of variation (CoV). Diagnostic accuracy for HF staging was highest for DDC# (Obuchowski measure, 0.770 ± 0.03), and it was significantly higher than that of ADC (0.597 ± 0.05, p < 0.001), Dt (0.575 ± 0.05, p < 0.001) and f (0.669 ± 0.04, p = 0.035). The parameters from stretched exponential DWI and Dp showed higher areas under the ROC curve (AUCs) for determining significant fibrosis (≥F2) and cirrhosis (F = 4) than the other parameters. However, Dp showed significantly higher measurement variability (CoV, 74.6%) than DDC# (16.1%, p < 0.001) and α# (15.1%, p < 0.001). Stretched exponential DWI is a promising method for HF staging with good diagnostic performance and fewer b-value acquisitions, allowing shorter acquisition time. • Stretched exponential DWI provides a precise and accurate model for HF staging. • Stretched exponential DWI parameters are more reliable than Dp from the bi-exponential DWI model. • Acquisition of six b values is sufficient to obtain accurate DDC and α.
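
    The three signal models named above can be written down and fitted in a few lines. The sketch below uses synthetic b values and signal, so the recovered parameters are illustrative only; the bi-exponential model is defined but not fitted, since it typically needs bounds to converge stably.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(0)
        b = np.array([0, 25, 50, 100, 200, 400, 600, 800, 1000], float)  # s/mm^2

        mono      = lambda b, S0, adc: S0 * np.exp(-b * adc)
        biexp     = lambda b, S0, f, Dp, Dt: S0 * (f * np.exp(-b * Dp) + (1 - f) * np.exp(-b * Dt))
        stretched = lambda b, S0, DDC, alpha: S0 * np.exp(-(b * DDC) ** alpha)

        S = stretched(b, 1.0, 1.2e-3, 0.75) + 0.005 * rng.standard_normal(b.size)

        p_mono, _ = curve_fit(mono, b, S, p0=[1.0, 1e-3])
        p_str, _  = curve_fit(stretched, b, S, p0=[1.0, 1e-3, 0.8])
        print("ADC:", p_mono[1], "DDC, alpha:", p_str[1], p_str[2])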

  5. A Fourier method for the analysis of exponential decay curves.

    PubMed

    Provencher, S W

    1976-01-01

    A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data involves only a sum of exponentials. The program is completely automatic in that the only necessary inputs are the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results seem to indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.

  6. Kernel PLS Estimation of Single-trial Event-related Potentials

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.

    2004-01-01

    Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative for the estimation of single-trial ERPs and the improvement of ERP averages.

  7. Exponential gain of randomness certified by quantum contextuality

    NASA Astrophysics Data System (ADS)

    Um, Mark; Zhang, Junhua; Wang, Ye; Wang, Pengfei; Kim, Kihwan

    2017-04-01

    We demonstrate the protocol of exponential gain of randomness certified by quantum contextuality in a trapped-ion system. Genuine randomness can be produced by quantum principles and certified by quantum inequalities. Recently, randomness expansion protocols based on the Bell test inequality and the Kochen-Specker (KS) theorem have been demonstrated. These schemes have been theoretically developed to exponentially expand the randomness and to amplify the randomness from a weak initial random seed. Here, we report experimental evidence of such exponential expansion of randomness. In the experiment, we use three states of a 138Ba+ ion: a ground state and two quadrupole states. In the 138Ba+ ion system there is no detection loophole, and we apply a method to rule out certain hidden-variable models that obey a kind of extended noncontextuality.

  8. Beta-function B-spline smoothing on triangulations

    NASA Astrophysics Data System (ADS)

    Dechevsky, Lubomir T.; Zanaty, Peter

    2013-03-01

    In this work we investigate a novel family of Ck-smooth rational basis functions on triangulations for fitting, smoothing, and denoising geometric data. The introduced basis function is closely related to a recently introduced general method utilizing generalized expo-rational B-splines, which provides Ck-smooth convex resolutions of unity on very general disjoint partitions and overlapping covers of multidimensional domains with complex geometry. One of the major advantages of this new triangular construction is its locality with respect to the star-1 neighborhood of the vertex on which the basis function provides Hermite interpolation. This locality can in turn be utilized in adaptive methods, where, for instance, a local refinement of the underlying triangular mesh affects only the refined domain, whereas in other methods one needs to investigate what changes occur outside the refined domain. Both the triangular and the general smooth constructions have the potential to become a new versatile tool of Computer Aided Geometric Design (CAGD), Finite and Boundary Element Analysis (FEA/BEA) and Iso-geometric Analysis (IGA).

  9. On the parallel solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.

  10. A method for exponential propagation of large systems of stiff nonlinear differential equations

    NASA Technical Reports Server (NTRS)

    Friesner, Richard A.; Tuckerman, Laurette S.; Dornblaser, Bright C.; Russo, Thomas V.

    1989-01-01

    A new time integrator for large, stiff systems of linear and nonlinear coupled differential equations is described. For linear systems, the method consists of forming a small (5-15-term) Krylov space using the Jacobian of the system and carrying out exact exponential propagation within this space. Nonlinear corrections are incorporated via a convolution integral formalism; the integral is evaluated via approximate Krylov methods as well. Gains in efficiency ranging from factors of 2 to 30 are demonstrated for several test problems as compared to a forward Euler scheme and to the integration package LSODE.
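
    A bare-bones version of the linear part of such a propagator, assuming NumPy/SciPy (the nonlinear convolution corrections are omitted), builds a Krylov space by Arnoldi iteration and exponentiates exactly in that small space; m is taken larger here than the paper's 5-15 terms, purely for accuracy on this crude test problem.

        import numpy as np
        from scipy.linalg import expm

        def krylov_expm(A, u0, dt, m=30):
            # approximate exp(dt*A) @ u0 in an m-dimensional Krylov space (Arnoldi)
            n = u0.size
            V = np.zeros((n, m + 1))
            H = np.zeros((m + 1, m))
            beta = np.linalg.norm(u0)
            V[:, 0] = u0 / beta
            for j in range(m):
                w = A @ V[:, j]
                for i in range(j + 1):                # modified Gram-Schmidt
                    H[i, j] = V[:, i] @ w
                    w = w - H[i, j] * V[:, i]
                H[j + 1, j] = np.linalg.norm(w)
                if H[j + 1, j] < 1e-12:               # happy breakdown: space is invariant
                    m = j + 1
                    break
                V[:, j + 1] = w / H[j + 1, j]
            E = expm(dt * H[:m, :m])                  # exact exponential in the small space
            return beta * (V[:, :m] @ E[:, 0])

        lam = np.arange(1.0, 101.0)
        A = -np.diag(lam)                             # stiff linear test system u' = A u
        u = krylov_expm(A, np.ones(100), dt=0.1)
        print(np.max(np.abs(u - np.exp(-0.1 * lam))))  # small approximation error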

  11. Phase retrieval in digital speckle pattern interferometry by use of a smoothed space-frequency distribution.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2003-12-10

    We evaluate the use of a smoothed space-frequency distribution (SSFD) to retrieve optical phase maps in digital speckle pattern interferometry (DSPI). The performance of this method is tested by use of computer-simulated DSPI fringes. Phase gradients are found along a pixel path from a single DSPI image, and the phase map is finally determined by integration. This technique does not need the application of a phase unwrapping algorithm or the introduction of carrier fringes in the interferometer. It is shown that a Wigner-Ville distribution with a smoothing Gaussian kernel gives more-accurate results than methods based on the continuous wavelet transform. We also discuss the influence of filtering on smoothing of the DSPI fringes and some additional limitations that emerge when this technique is applied. The performance of the SSFD method for processing experimental data is then illustrated.

  12. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    PubMed

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
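
    The projection step can be sketched with SciPy's thin-plate RBF; here the smoothing parameter is fixed by hand, standing in for the bootstrap search, and the point set is synthetic.

        import numpy as np
        from scipy.interpolate import Rbf

        rng = np.random.default_rng(5)
        x, y = rng.uniform(-1.0, 1.0, (2, 400))
        z = np.sin(2 * x) * np.cos(2 * y) + 0.05 * rng.standard_normal(400)  # noisy heights

        surf = Rbf(x, y, z, function="thin_plate", smooth=0.01)  # smoothed thin-plate spline
        z_denoised = surf(x, y)                                  # project points onto the surface
        print("residual std:", np.std(z - z_denoised))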

  13. Determining Parameters of Fractional-Exponential Heredity Kernels of Nonlinear Viscoelastic Materials

    NASA Astrophysics Data System (ADS)

    Golub, V. P.; Pavlyuk, Ya. V.; Fernati, P. V.

    2017-07-01

    The problem of determining the parameters of fractional-exponential heredity kernels of nonlinear viscoelastic materials is solved. The methods for determining the parameters that are used in the cubic theory of viscoelasticity and the nonlinear theories based on the conditions of similarity of primary creep curves and isochronous creep diagrams are analyzed. The parameters of fractional-exponential heredity kernels are determined and experimentally validated for the oriented polypropylene, FM3001 and FM10001 nylon fibers, microplastics, TC 8/3-250 glass-reinforced plastic, SWAM glass-reinforced plastic, and contact molding glass-reinforced plastic.

  14. Exponential Speedup of Quantum Annealing by Inhomogeneous Driving of the Transverse Field

    NASA Astrophysics Data System (ADS)

    Susa, Yuki; Yamashiro, Yu; Yamamoto, Masayuki; Nishimori, Hidetoshi

    2018-02-01

    We show, for quantum annealing, that a certain type of inhomogeneous driving of the transverse field erases first-order quantum phase transitions in the p-body interacting mean-field-type model with and without longitudinal random field. Since a first-order phase transition poses a serious difficulty for quantum annealing (adiabatic quantum computing) due to the exponentially small energy gap, the removal of first-order transitions means an exponential speedup of the annealing process. The present method may serve as a simple protocol for the performance enhancement of quantum annealing, complementary to non-stoquastic Hamiltonians.

  15. A Lie algebraic condition for exponential stability of discrete hybrid systems and application to hybrid synchronization.

    PubMed

    Zhao, Shouwei

    2011-06-01

    A Lie algebraic condition for global exponential stability of linear discrete switched impulsive systems is presented in this paper. By considering a Lie algebra generated by all subsystem matrices and impulsive matrices, when not all of these matrices are Schur stable, we derive new criteria for global exponential stability of linear discrete switched impulsive systems. Moreover, simple sufficient conditions in terms of Lie algebra are established for the synchronization of nonlinear discrete systems using a hybrid switching and impulsive control. As an application, discrete chaotic system's synchronization is investigated by the proposed method.

  16. A new axial smoothing method based on elastic mapping

    NASA Astrophysics Data System (ADS)

    Yang, J.; Huang, S. C.; Lin, K. P.; Czernin, J.; Wolfenden, P.; Dahlbom, M.; Hoh, C. K.; Phelps, M. E.

    1996-12-01

    New positron emission tomography (PET) scanners have higher axial and in-plane spatial resolutions but at the expense of reduced per-plane sensitivity, which prevents the higher resolution from being fully realized. Normally, Gaussian-weighted interplane axial smoothing is used to reduce noise. In this study, the authors developed a new algorithm that first elastically maps adjacent planes, and then the mapped images are smoothed axially to reduce the image noise level. Compared to those obtained by the conventional axial-directional smoothing method, the images by the new method have improved signal-to-noise ratio. To quantify the signal-to-noise improvement, both simulated and real cardiac PET images were studied. Various Hanning reconstruction filters with cutoff frequency = 0.5, 0.7, 1.0 × Nyquist frequency and a Ramp filter were tested on simulated images. Effective in-plane resolution was measured by the effective global Gaussian resolution (EGGR) and noise reduction was evaluated by the cross-correlation coefficient. Results showed that the new method was robust to various noise levels and indicated larger noise reduction or better image feature preservation (i.e., smaller EGGR) than by the conventional method.

  17. Investigation of hyperelastic models for nonlinear elastic behavior of demineralized and deproteinized bovine cortical femur bone.

    PubMed

    Hosseinzadeh, M; Ghoreishi, M; Narooei, K

    2016-06-01

    In this study, the hyperelastic models of demineralized and deproteinized bovine cortical femur bone were investigated and appropriate models were developed. Using uniaxial compression test data, the strain energy versus stretch was calculated and appropriate hyperelastic strain energy functions were fitted to the data in order to calculate the material parameters. To obtain the mechanical behavior in other loading conditions, the hyperelastic strain energy equations were investigated for pure shear and equi-biaxial tension loadings. The results showed that the Mooney-Rivlin and Ogden models cannot predict the mechanical response of demineralized and deproteinized bovine cortical femur bone accurately, while the general exponential-exponential and general exponential-power law models show good agreement with the experimental results. To investigate the sensitivity of the hyperelastic models, a variation of 10% in the material parameters was performed and the results indicated acceptable stability for the general exponential-exponential and general exponential-power law models. Finally, the uniaxial tension and compression of cortical femur bone were studied using the finite element method in a VUMAT user subroutine of ABAQUS software, and the computed stress-stretch curves showed good agreement with the experimental data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Exact simulation of integrate-and-fire models with exponential currents.

    PubMed

    Brette, Romain

    2007-10-01

    Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. It applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
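
    The event-driven idea rests on having V(t) in closed form between events. The sketch below uses that closed form for one exponential synaptic current and locates the threshold crossing with a scalar root finder, which stands in for the paper's polynomial-root approach; all constants are illustrative.

        import numpy as np
        from scipy.optimize import brentq

        tau_m, tau_s, v_th = 20.0, 5.0, 1.0          # membrane/synaptic time constants (ms), threshold
        a = 1.0 / tau_m - 1.0 / tau_s

        def V(t, v0, i0):
            # explicit state between spikes: V' = -V/tau_m + i0 * exp(-t/tau_s)
            return v0 * np.exp(-t / tau_m) + (i0 / a) * (np.exp(-t / tau_s) - np.exp(-t / tau_m))

        def next_spike(v0, i0, horizon=200.0):
            # scan for a sign change of V - v_th, then refine to machine precision
            ts = np.linspace(1e-9, horizon, 2000)
            above = V(ts, v0, i0) - v_th
            idx = np.where(np.sign(above[:-1]) != np.sign(above[1:]))[0]
            if idx.size == 0:
                return None                          # no spike before the horizon
            return brentq(lambda t: V(t, v0, i0) - v_th, ts[idx[0]], ts[idx[0] + 1])

        print(next_spike(v0=0.0, i0=0.5))            # spike time in ms, or None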

  19. Comparison of the interfacial energy and pre-exponential factor calculated from the induction time and metastable zone width data based on classical nucleation theory

    NASA Astrophysics Data System (ADS)

    Shiau, Lie-Ding

    2016-09-01

    The pre-exponential factor and interfacial energy obtained from the metastable zone width (MSZW) data using the integral method proposed by Shiau and Lu [1] are compared in this study with those obtained from the induction time data using the conventional method (t_i ∝ J^-1) for three crystallization systems, including potassium sulfate in water in a 200 mL vessel, borax decahydrate in water in a 100 mL vessel and butyl paraben in ethanol in a 5 mL tube. The results indicate that the pre-exponential factor and interfacial energy calculated from the induction time data based on classical nucleation theory are consistent with those calculated from the MSZW data using the same detection technique for the studied systems.

  20. Global exponential stability of inertial memristor-based neural networks with time-varying delays and impulses.

    PubMed

    Zhang, Wei; Huang, Tingwen; He, Xing; Li, Chuandong

    2017-11-01

    In this study, we investigate the global exponential stability of inertial memristor-based neural networks with impulses and time-varying delays. We construct inertial memristor-based neural networks based on the characteristics of the inertial neural networks and memristor. Impulses with and without delays are considered when modeling the inertial neural networks simultaneously, which are of great practical significance in the current study. Some sufficient conditions are derived under the framework of the Lyapunov stability method, as well as an extended Halanay differential inequality and a new delay impulsive differential inequality, which depend on impulses with and without delays, in order to guarantee the global exponential stability of the inertial memristor-based neural networks. Finally, two numerical examples are provided to illustrate the efficiency of the proposed methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. On the analytical determination of relaxation modulus of viscoelastic materials by Prony's interpolation method

    NASA Technical Reports Server (NTRS)

    Rodriguez, Pedro I.

    1986-01-01

    A computer implementation of Prony's curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capabilities due to the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) to obtain the equal time increments. The resulting information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least squares solution can be applied to obtain the final form of the equation.
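
    For reference, the core of Prony's method on equally spaced samples fits a linear-prediction polynomial, takes its roots as the exponentials, and recovers the amplitudes by linear least squares. The two-component noiseless signal below is an assumed example, not data from the paper.

        import numpy as np

        def prony(y, dt, p):
            # linear prediction: characteristic polynomial coefficients a1..ap
            n = len(y)
            rows = np.array([y[i:i + p][::-1] for i in range(n - p)])
            a = np.linalg.lstsq(rows, -y[p:], rcond=None)[0]
            z = np.roots(np.r_[1.0, a])              # poles z_i = exp(s_i * dt)
            s = np.log(z) / dt                       # continuous-time exponents
            # amplitudes from a final linear least-squares step
            V = np.exp(np.outer(np.arange(n) * dt, s))
            A = np.linalg.lstsq(V, y, rcond=None)[0]
            return A, s

        t = np.arange(0, 10, 0.05)
        y = 2.0 * np.exp(-0.3 * t) + 1.0 * np.exp(-1.5 * t)
        A, s = prony(y, 0.05, p=2)
        print(np.real(A), np.real(s))   # amplitudes ~ [2, 1], exponents ~ [-0.3, -1.5] (order may vary)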

  2. Chemical method for producing smooth surfaces on silicon wafers

    DOEpatents

    Yu, Conrad

    2003-01-01

    An improved method for producing optically smooth surfaces in silicon wafers during wet chemical etching involves a pre-treatment rinse of the wafers before etching and a post-etching rinse. The pre-treatment with an organic solvent provides a well-wetted surface that ensures uniform mass transfer during etching, which results in optically smooth surfaces. The post-etching treatment with an acetic acid solution stops the etching instantly, preventing any uneven etching that leads to surface roughness. This method can be used to etch silicon surfaces to a depth of 200 μm or more, while the finished surfaces have a surface roughness of only 15-50 Å (RMS).

  3. Uniform-penalty inversion of multiexponential decay data. II. Data spacing, T2 data, systematic data errors, and diagnostics.

    PubMed

    Borgia, G C; Brown, R J; Fantazzini, P

    2000-12-01

    The basic method of UPEN (uniform penalty inversion of multiexponential decay data) is given in an earlier publication (Borgia et al., J. Magn. Reson. 132, 65-77 (1998)), which also discusses the effects of noise, constraints, and smoothing on the resolution or apparent resolution of features of a computed distribution of relaxation times. UPEN applies negative feedback to a regularization penalty, allowing stronger smoothing for a broad feature than for a sharp line. This avoids unnecessarily broadening the sharp line and/or breaking the wide peak or tail into several peaks that the relaxation data do not demand to be separate. The experimental and artificial data presented earlier were T1 data, and all had fixed data spacings, uniform in log-time. However, for T2 data, usually spaced uniformly in linear time, or for data spaced in any manner, we have found that the data spacing does not enter explicitly into the computation. The present work shows the extension of UPEN to T2 data, including the averaging of data in windows and the use of the corresponding weighting factors in the computation. Measures are implemented to control portions of computed distributions extending beyond the data range. The input smoothing parameters in UPEN are normally fixed, rather than data dependent. A major problem arises, especially at high signal-to-noise ratios, when UPEN is applied to data sets with systematic errors due to instrumental nonidealities or adjustment problems. For instance, a relaxation curve for a wide line can be narrowed by an artificial downward bending of the relaxation curve. Diagnostic parameters are generated to help identify data problems, and the diagnostics are applied in several examples, with particular attention to the meaningful resolution of two closely spaced peaks in a distribution of relaxation times. Where feasible, processing with UPEN in nearly real time should help identify data problems while further instrument adjustments can still be made. The need for the nonnegative constraint is greatly reduced in UPEN, and preliminary processing without this constraint helps identify data sets for which application of the nonnegative constraint is too expensive in terms of error of fit for the data set to represent sums of decaying positive exponentials plus random noise. Copyright 2000 Academic Press.

  4. Study on probability distributions for evolution in modified extremal optimization

    NASA Astrophysics Data System (ADS)

    Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian

    2010-05-01

    It is widely believed that the power law is a proper probability distribution for evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to some NP-hard problems, e.g., graph partitioning, graph coloring, spin glasses, etc. In this study, we discover that the exponential distributions or hybrid ones (e.g., power laws with exponential cutoff) popularly used in network science may replace the original power laws in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods such as simulated annealing and τ-EO, as shown by experimental results on random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results appear to demonstrate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; the exponential and hybrid distributions may be other choices.
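
    The distinction being tested reduces to the shape of the rank-selection rule. A toy sketch of that single step (not the full EO/SOA solver, and with an arbitrary τ) follows.

        import numpy as np

        def pick_rank(n, dist, tau=1.4):
            # select a fitness rank (1 = worst) with power-law or exponential probability
            k = np.arange(1, n + 1)
            w = k ** (-tau) if dist == "power" else np.exp(-tau * k)
            return np.random.choice(k, p=w / w.sum())

        counts = {d: np.bincount([pick_rank(50, d) for _ in range(10000)], minlength=51)
                  for d in ("power", "exponential")}
        print(counts["power"][1:6])         # power law spreads choices over more ranks
        print(counts["exponential"][1:6])   # exponential concentrates on the worst few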

  5. Global exponential stability of BAM neural networks with time-varying delays and diffusion terms

    NASA Astrophysics Data System (ADS)

    Wan, Li; Zhou, Qinghua

    2007-11-01

    The stability property of bidirectional associative memory (BAM) neural networks with time-varying delays and diffusion terms is considered. By using the method of variation of parameters and inequality techniques, delay-independent sufficient conditions guaranteeing the uniqueness and global exponential stability of the equilibrium solution of such networks are established.

  6. Shift-Invariant Image Reconstruction of Speckle-Degraded Images Using Bispectrum Estimation

    DTIC Science & Technology

    1990-05-01

    process with the requisite negative exponential pdf; this model is called the Negative Exponential Model (NENI). [The remainder of this excerpt is figure-caption residue: the NENI flowchart (Figure 6); statistical histograms and phase plots; a truth object speckled via the NENI, with a histogram of speckle.]

  7. Dendrimers and methods of preparing same through proportionate branching

    DOEpatents

    Yu, Yihua; Yue, Xuyi

    2015-09-15

    The present invention provides for monodispersed dendrimers having a core, branches and periphery ends, wherein the number of branches increases exponentially from the core to the periphery end and the length of the branches increases exponentially from the periphery end to the core, thereby providing for attachment of chemical species at the periphery ends without exhibiting steric hindrance.

  8. Conditional optimal spacing in exponential distribution.

    PubMed

    Park, Sangun

    2006-12-01

    In this paper, we propose the conditional optimal spacing defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take an example of exponential distribution, and provide a simple method of finding the conditional optimal spacing.

  9. Double-exponential decay of orientational correlations in semiflexible polyelectrolytes.

    PubMed

    Bačová, P; Košovan, P; Uhlík, F; Kuldová, J; Limpouchová, Z; Procházka, K

    2012-06-01

    In this paper we revisited the problem of the persistence length of polyelectrolytes. We performed a series of Molecular Dynamics simulations using the Debye-Hückel approximation for electrostatics to test several equations which go beyond the classical description of Odijk, Skolnick and Fixman (OSF). The data confirm earlier observations that in the limit of large contour separations the decay of orientational correlations can be described by a single-exponential function and the decay length can be described by the OSF relation. However, at short contour separations the behaviour is more complex. Recent equations which introduce more complicated expressions and an additional length scale could describe the results very well on both the short and the long length scale. The equation of Manghi and Netz, when used without adjustable parameters, could capture the qualitative trend but deviated in a quantitative comparison. Better quantitative agreement within the estimated error could be obtained using three equations with one adjustable parameter: 1) the equation of Manghi and Netz; 2) the equation proposed by us in this paper; 3) the equation proposed by Cannavacciuolo and Pedersen. Two characteristic length scales can be identified in the data: the intrinsic or bare persistence length and the electrostatic persistence length. All three equations use a single parameter to describe a smooth crossover from the short-range behaviour dominated by the intrinsic stiffness of the chain to the long-range OSF-like behaviour.

  10. A long-term earthquake rate model for the central and eastern United States from smoothed seismicity

    USGS Publications Warehouse

    Moschetti, Morgan P.

    2015-01-01

    I present a long-term earthquake rate model for the central and eastern United States from adaptive smoothed seismicity. By employing pseudoprospective likelihood testing (L-test), I examined the effects of fixed and adaptive smoothing methods and the effects of catalog duration and composition on the ability of the models to forecast the spatial distribution of recent earthquakes. To stabilize the adaptive smoothing method for regions of low seismicity, I introduced minor modifications to the way that the adaptive smoothing distances are calculated. Across all smoothed seismicity models, the use of adaptive smoothing and the use of earthquakes from the recent part of the catalog optimizes the likelihood for tests with M≥2.7 and M≥4.0 earthquake catalogs. The smoothed seismicity models optimized by likelihood testing with M≥2.7 catalogs also produce the highest likelihood values for M≥4.0 likelihood testing, thus substantiating the hypothesis that the locations of moderate-size earthquakes can be forecast by the locations of smaller earthquakes. The likelihood test does not, however, maximize the fraction of earthquakes that are better forecast than a seismicity rate model with uniform rates in all cells. In this regard, fixed smoothing models perform better than adaptive smoothing models. The preferred model of this study is the adaptive smoothed seismicity model, based on its ability to maximize the joint likelihood of predicting the locations of recent small-to-moderate-size earthquakes across eastern North America. The preferred rate model delineates 12 regions where the annual rate of M≥5 earthquakes exceeds 2×10^-3. Although these seismic regions have been previously recognized, the preferred forecasts are more spatially concentrated than the rates from fixed smoothed seismicity models, with rate increases of up to a factor of 10 near clusters of high seismic activity.
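
    A minimal sketch of adaptive smoothed seismicity, with each epicenter's Gaussian kernel width set to its distance to the N-th nearest neighbor, might look as follows; the coordinates, N, and the bandwidth floor are invented, and this is not the USGS implementation.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(4)
        quakes = rng.uniform([30, -100], [45, -70], size=(500, 2))   # lat, lon of epicenters

        tree = cKDTree(quakes)
        d, _ = tree.query(quakes, k=4)
        bw = np.maximum(d[:, 3], 0.1)               # adaptive bandwidth: 3rd-neighbor distance

        lat, lon = np.meshgrid(np.linspace(30, 45, 60), np.linspace(-100, -70, 120))
        grid = np.column_stack([lat.ravel(), lon.ravel()])
        rate = np.zeros(len(grid))
        for (qlat, qlon), h in zip(quakes, bw):
            r2 = (grid[:, 0] - qlat) ** 2 + (grid[:, 1] - qlon) ** 2
            rate += np.exp(-r2 / (2 * h * h)) / (2 * np.pi * h * h)   # Gaussian kernel sum
        print("peak rate:", rate.max(), "total:", rate.sum())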

  11. Penalized Multi-Way Partial Least Squares for Smooth Trajectory Decoding from Electrocorticographic (ECoG) Recording

    PubMed Central

    Eliseyev, Andrey; Aksenova, Tetiana

    2016-01-01

    In the current paper the decoding algorithms for motor-related BCI systems for continuous upper limb trajectory prediction are considered. Two methods for the smooth prediction, namely Sobolev and Polynomial Penalized Multi-Way Partial Least Squares (PLS) regressions, are proposed. The methods are compared to the Multi-Way Partial Least Squares and Kalman Filter approaches. The comparison demonstrated that the proposed methods combined the prediction accuracy of the algorithms of the PLS family and trajectory smoothness of the Kalman Filter. In addition, the prediction delay is significantly lower for the proposed algorithms than for the Kalman Filter approach. The proposed methods could be applied in a wide range of applications beyond neuroscience. PMID:27196417

  12. Testing local anisotropy using the method of smoothed residuals I — methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appleby, Stephen; Shafieloo, Arman, E-mail: stephen.appleby@apctp.org, E-mail: arman@apctp.org

    2014-03-01

    We discuss some details regarding the method of smoothed residuals, which has recently been used to search for anisotropic signals in low-redshift distance measurements (Supernovae). In this short note we focus on some details regarding the implementation of the method, particularly the issue of effectively detecting signals in data that are inhomogeneously distributed on the sky. Using simulated data, we argue that the original method proposed in Colin et al. [1] will not detect spurious signals due to incomplete sky coverage, and that introducing additional Gaussian weighting to the statistic as in [2] can hinder its ability to detect a signal. Issues related to the width of the Gaussian smoothing are also discussed.

  13. Efficient Bayesian hierarchical functional data analysis with basis function approximations using Gaussian-Wishart processes.

    PubMed

    Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon

    2017-12-01

    Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on the Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference that suffers serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves the computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation and real studies demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.

  14. Power law versus exponential state transition dynamics: application to sleep-wake architecture.

    PubMed

    Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T

    2010-12-02

    Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
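
    A small numerical version of the mimicry test is easy to set up: draw from a two-exponential mixture, fit a power-law tail by maximum likelihood, and check the fit with a Kolmogorov-Smirnov statistic. All parameters below are illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n = 5000
        mix = np.where(rng.random(n) < 0.5,
                       rng.exponential(1.0, n), rng.exponential(10.0, n))

        # fit a power law ~ x^(-alpha) above x_min by maximum likelihood (Hill estimator)
        x_min = 1.0
        tail = mix[mix >= x_min]
        alpha = 1.0 + tail.size / np.log(tail / x_min).sum()

        ks = stats.kstest(tail, lambda x: 1.0 - (x / x_min) ** (1.0 - alpha))
        print("fitted alpha:", round(alpha, 2), "KS statistic:", round(ks.statistic, 3))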

  15. Effects of phylogenetic reconstruction method on the robustness of species delimitation using single-locus data

    PubMed Central

    Tang, Cuong Q; Humphreys, Aelys M; Fontaneto, Diego; Barraclough, Timothy G; Paradis, Emmanuel

    2014-01-01

    Coalescent-based species delimitation methods combine population genetic and phylogenetic theory to provide an objective means for delineating evolutionarily significant units of diversity. The generalised mixed Yule coalescent (GMYC) and the Poisson tree process (PTP) are methods that use ultrametric (GMYC or PTP) or non-ultrametric (PTP) gene trees as input, intended for use mostly with single-locus data such as DNA barcodes. Here, we assess how robust the GMYC and PTP are to different phylogenetic reconstruction and branch smoothing methods. We reconstruct over 400 ultrametric trees using up to 30 different combinations of phylogenetic and smoothing methods and perform over 2000 separate species delimitation analyses across 16 empirical data sets. We then assess how variable diversity estimates are, in terms of richness and identity, with respect to species delimitation, phylogenetic and smoothing methods. The PTP method generally generates diversity estimates that are more robust to different phylogenetic methods. The GMYC is more sensitive, but provides consistent estimates for BEAST trees. The lower consistency of GMYC estimates is likely a result of differences among gene trees introduced by the smoothing step. Unresolved nodes (real anomalies or methodological artefacts) affect both GMYC and PTP estimates, but have a greater effect on GMYC estimates. Branch smoothing is a difficult step and perhaps an underappreciated source of bias that may be widespread among studies of diversity and diversification. Nevertheless, careful choice of phylogenetic method does produce equivalent PTP and GMYC diversity estimates. We recommend simultaneous use of the PTP model with any model-based gene tree (e.g. RAxML) and GMYC approaches with BEAST trees for obtaining species hypotheses. PMID:25821577

  16. Effect of exponential density transition on self-focusing of q-Gaussian laser beam in collisionless plasma

    NASA Astrophysics Data System (ADS)

    Valkunde, Amol T.; Vhanmore, Bandopant D.; Urunkar, Trupti U.; Gavade, Kusum M.; Patil, Sandip D.; Takale, Mansing V.

    2018-05-01

    In this work, nonlinear aspects of a high intensity q-Gaussian laser beam propagating in collisionless plasma having upward density ramp of exponential profiles is studied. We have employed the nonlinearity in dielectric function of plasma by considering ponderomotive nonlinearity. The differential equation governing the dimensionless beam width parameter is achieved by using Wentzel-Kramers-Brillouin (WKB) and paraxial approximations and solved it numerically by using Runge-Kutta fourth order method. Effect of exponential density ramp profile on self-focusing of q-Gaussian laser beam for various values of q is systematically carried out and compared with results Gaussian laser beam propagating in collisionless plasma having uniform density. It is found that exponential plasma density ramp causes the laser beam to become more focused and gives reasonably interesting results.

  17. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  18. A Temporal Mining Framework for Classifying Un-Evenly Spaced Clinical Data: An Approach for Building Effective Clinical Decision-Making System.

    PubMed

    Jane, Nancy Yesudhas; Nehemiah, Khanna Harichandran; Arputharaj, Kannan

    2016-01-01

    Clinical time-series data acquired from electronic health records (EHR) are liable to temporal complexities such as irregular observations, missing values and time-constrained attributes that make the knowledge discovery process challenging. This paper presents a temporal rough set induced neuro-fuzzy (TRiNF) mining framework that handles these complexities and builds an effective clinical decision-making system. TRiNF provides two functionalities, namely temporal data acquisition (TDA) and temporal classification. In TDA, a time-series forecasting model is constructed by adopting an improved double exponential smoothing method. The forecasting model is used in missing value imputation and temporal pattern extraction. The relevant attributes are selected using a temporal pattern based rough set approach. In temporal classification, a classification model is built with the selected attributes using a temporal pattern induced neuro-fuzzy classifier. For experimentation, this work uses two clinical time-series datasets of hepatitis and thrombosis patients. The experimental results show that with the proposed TRiNF framework there is a significant reduction in the error rate, with an average classification accuracy of 92.59% for the hepatitis dataset and 91.69% for the thrombosis dataset. The obtained classification results prove the efficiency of the proposed framework in terms of improved classification accuracy.
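
    The TDA step builds on double exponential smoothing. A minimal Holt-style forecaster of that kind is sketched below; the smoothing constants and series are invented, and the paper's improved variant is not reproduced.

        def holt_forecast(y, alpha=0.5, beta=0.3, horizon=1):
            # Holt's double exponential smoothing: level + trend, then extrapolate
            level, trend = y[0], y[1] - y[0]
            for obs in y[1:]:
                prev_level = level
                level = alpha * obs + (1 - alpha) * (level + trend)
                trend = beta * (level - prev_level) + (1 - beta) * trend
            return [level + (h + 1) * trend for h in range(horizon)]

        series = [10.0, 10.4, 10.9, 11.3, 11.8, 12.1, 12.7]   # e.g. a lab value over visits
        print(holt_forecast(series, horizon=3))                # next three time points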

  19. The role of Hurst exponent on cold field electron emission from conducting materials: from electric field distribution to Fowler-Nordheim plots

    PubMed Central

    de Assis, T. A.

    2015-01-01

    This work considers the effects of the Hurst exponent (H) on the local electric field distribution and the slope of the Fowler-Nordheim (FN) plot when considering the cold field electron emission properties of rough Large-Area Conducting Field Emitter Surfaces (LACFESs). A LACFES is represented by a self-affine Weierstrass-Mandelbrot function in a given spatial direction. For 0.1 ≤ H < 0.5, the local electric field distribution exhibits two clear exponential regimes. Moreover, a scaling between the macroscopic current density J_M and the characteristic kernel current density J_k, of the form J_M ∝ J_k^θ with an H-dependent exponent θ, has been found. This feature, which is less pronounced (but not absent) in the range where smoother surfaces are found (H ≥ 0.5), is a consequence of the dependency between the area efficiency of emission of a LACFES and the macroscopic electric field, which is often neglected in the interpretation of cold field electron emission experiments. Considering the recent developments in orthodox field emission theory, we show that the exponent θ must be considered when calculating the slope characterization parameter (SCP); this provides a relevant method of more precisely extracting the characteristic field enhancement factor from the slope of the FN plot. PMID:26035290

  20. The effects of MgADP on cross-bridge kinetics: a laser flash photolysis study of guinea-pig smooth muscle.

    PubMed Central

    Nishiye, E; Somlyo, A V; Török, K; Somlyo, A P

    1993-01-01

    1. The effects of MgADP on cross-bridge kinetics were investigated using laser flash photolysis of caged ATP (P3-[1-(2-nitrophenyl)ethyl]adenosine 5'-triphosphate), in guinea-pig portal vein smooth muscle permeabilized with Staphylococcus aureus alpha-toxin. Isometric tension and in-phase stiffness transitions from rigor state were monitored upon photolysis of caged ATP. The estimated concentration of ATP released from caged ATP by high-pressure liquid chromatography (HPLC) was 1.3 mM. 2. The time course of relaxation initiated by photolysis of caged ATP in the absence of Ca2+ was well fitted during the initial 200 ms by two exponential functions with time constants of, respectively, tau 1 = 34 ms and tau 2 = 1.2 s and relative amplitudes of 0.14 and 0.86. Multiple exponential functions were needed to fit longer intervals; the half-time of the overall relaxation was 0.8 s. The second order rate constant for cross-bridge detachment by ATP, estimated from the rate of initial relaxation, was 0.4-2.3 × 10^4 M^-1 s^-1. 3. MgADP dose dependently reduced both the relative amplitude of the first component and the rate constant of the second component of relaxation. Conversely, treatment of muscles with apyrase, to deplete endogenous ADP, increased the relative amplitude of the first component. In the presence of MgADP, in-phase stiffness decreased during force maintenance, suggesting that the force per cross-bridge increased. The apparent dissociation constant (Kd) of MgADP for the cross-bridge binding site, estimated from its concentration-dependent effect on the relative amplitude of the first component, was 1.3 microM. This affinity is much higher than the previously reported values (50-300 microM for smooth muscle; 18-400 microM for skeletal muscle; 7-10 microM for cardiac muscle). It is possible that the high affinity reflects the properties of a state generated during the co-operative reattachment cycle, rather than that of the rigor bridge. 4. The rate constant of MgADP release from cross-bridges, estimated from its concentration-dependent effect on the rate constant of the second (tau 2) component, was 0.35-7.7 s^-1. To the extent that reattachment of cross-bridges could slow relaxation even during the initial 200 ms, this rate constant may be an underestimate. 5. Inorganic phosphate (Pi, 30 mM) did not affect the rate of relaxation during the initial approximately 50 ms, but accelerated the slower phase of relaxation, consistent with a cyclic cross-bridge model in which Pi increases the proportion of cross-bridges in detached ('weakly bound') states. (ABSTRACT TRUNCATED AT 400 WORDS) PMID:8487195

  1. A note on the accuracy of spectral method applied to nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Wong, Peter S.

    1994-01-01

    The Fourier spectral method can achieve exponential accuracy both on the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, the method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of the Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However, the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.

  2. Deformed exponentials and portfolio selection

    NASA Astrophysics Data System (ADS)

    Rodrigues, Ana Flávia P.; Guerreiro, Igor M.; Cavalcante, Charles Casimiro

    In this paper, we present a method for portfolio selection based on deformed exponentials, generalizing methods that assume Gaussianity of the returns, such as the Markowitz model. The proposed method generalizes the idea of optimizing mean-variance and mean-divergence models and allows more accurate behavior in situations where heavy-tailed distributions are needed to describe the returns at a given time instant, such as those observed in economic crises. Numerical results show the proposed method outperforms the Markowitz portfolio in cumulative returns, with a good convergence rate for the asset weights, which are searched by means of a natural gradient algorithm.

  3. Small area population forecasting: some experience with British models.

    PubMed

    Openshaw, S; Van Der Knaap, G A

    1983-01-01

    This study is concerned with the evaluation of the various models including time-series forecasts, extrapolation, and projection procedures, that have been developed to prepare population forecasts for planning purposes. These models are evaluated using data for the Netherlands. "As part of a research project at the Erasmus University, space-time population data has been assembled in a geographically consistent way for the period 1950-1979. These population time series are of sufficient length for the first 20 years to be used to build models and then evaluate the performance of the model for the next 10 years. Some 154 different forecasting models for 832 municipalities have been evaluated. It would appear that the best forecasts are likely to be provided by either a Holt-Winters model, or a ratio-correction model, or a low order exponential-smoothing model." excerpt

  4. Automated, on-board terrain analysis for precision landings

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Hines, Glenn D.

    2006-01-01

    Advances in space robotics technology hinge to a large extent upon the development and deployment of sophisticated new vision-based methods for automated in-space mission operations and scientific survey. To this end, we have developed a new concept for automated terrain analysis that is based upon a generic image enhancement platform: multi-scale retinex (MSR) and visual servo (VS) processing. This pre-conditioning with the MSR and the VS produces a "canonical" visual representation that is largely independent of lighting variations and exposure errors. Enhanced imagery is then processed with a biologically inspired two-channel edge detection process, followed by a smoothness-based criterion for image segmentation. Landing sites can be automatically determined by examining the results of the smoothness-based segmentation, which shows those areas in the image that surpass a minimum degree of smoothness. Though the MSR has proven to be a very strong enhancement engine, the other elements of the approach (the VS, terrain map generation, and smoothness-based segmentation) are in early stages of development. Experimental results on data from the Mars Global Surveyor show that the imagery can be processed to automatically obtain smooth landing sites. In this paper, we describe the method used to obtain these landing sites, and also examine the smoothness criteria in terms of the imager and scene characteristics. Several examples of applying this method to simulated and real imagery are shown.

  5. Applications of an exponential finite difference technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Handschuh, R.F.; Keith, T.G. Jr.

    1988-07-01

    An exponential finite difference scheme first presented by Bhattacharya for one dimensional unsteady heat conduction problems in Cartesian coordinates was extended. The finite difference algorithm developed was used to solve the unsteady diffusion equation in one dimensional cylindrical coordinates and was applied to two and three dimensional conduction problems in Cartesian coordinates. Heat conduction involving variable thermal conductivity was also investigated. The method was used to solve nonlinear partial differential equations in one and two dimensional Cartesian coordinates. Predicted results are compared to exact solutions where available or to results obtained by other numerical methods.
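
    The one-dimensional Cartesian core of such a scheme is compact enough to sketch. The code below is a hedged reading of an explicit exponential finite-difference step of the Bhattacharya type for the heat equation u_t = alpha*u_xx; the grid sizes, coefficients and fixed-end boundary treatment are illustrative assumptions.

      import numpy as np

      def exp_fd_step(T, alpha, dt, dx):
          """One explicit exponential finite-difference step for u_t = alpha*u_xx.
          Interior nodes are advanced as T*exp(r*d2/T); assumes T > 0."""
          r = alpha * dt / dx ** 2
          Tn = T.copy()
          d2 = T[2:] - 2.0 * T[1:-1] + T[:-2]      # centered second difference
          Tn[1:-1] = T[1:-1] * np.exp(r * d2 / T[1:-1])
          return Tn                                 # end values held fixed

      T = np.full(51, 100.0)                        # rod initially at 100 K
      T[0] = T[-1] = 300.0                          # ends held at 300 K
      for _ in range(200):
          T = exp_fd_step(T, alpha=1e-4, dt=0.1, dx=0.01)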

  6. Analysis of crackling noise using the maximum-likelihood method: Power-law mixing and exponential damping.

    PubMed

    Salje, Ekhard K H; Planes, Antoni; Vives, Eduard

    2017-10-01

    Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximate scale invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods and assuming that it follows a power law although with nonuniversal exponents depending on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling noise distributions or exponential damping.
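
    Such analyses commonly use the continuous maximum-likelihood estimator of the power-law exponent above a lower cutoff (in the style of Clauset and co-workers); a minimal sketch follows. Scanning the cutoff and watching the fitted exponent drift is one simple diagnostic for the mixed or exponentially damped distributions discussed above.

      import numpy as np

      def powerlaw_mle(x, xmin):
          """Continuous power-law exponent MLE above a lower cutoff xmin."""
          tail = np.asarray(x, dtype=float)
          tail = tail[tail >= xmin]
          alpha = 1.0 + tail.size / np.sum(np.log(tail / xmin))
          stderr = (alpha - 1.0) / np.sqrt(tail.size)   # asymptotic standard error
          return alpha, stderr

      x = np.random.default_rng(0).pareto(1.5, 10000) + 1.0   # true exponent 2.5
      print(powerlaw_mle(x, xmin=1.0))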

  7. Joint Smoothed l₀-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar.

    PubMed

    Liu, Jing; Zhou, Weidong; Juwono, Filbert H

    2017-05-08

    Direction-of-arrival (DOA) estimation is usually confronted with a multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed l0-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To eliminate the white or colored Gaussian noises, the new method first obtains a low-complexity high-order cumulants based data matrix. Then, the proposed algorithm designs a joint smoothed function tailored for the MMV case, based on which a joint smoothed l0-norm sparse representation framework is constructed. Finally, for the MMV-based joint smoothed function, the corresponding gradient-based sparse signal reconstruction is designed, thus the DOA estimation can be achieved. The proposed method is a fast sparse representation algorithm, which can solve the MMV problem and perform well for both white and colored Gaussian noises. The proposed joint algorithm is about two orders of magnitude faster than the l1-norm minimization based methods, such as l1-SVD (singular value decomposition), RV (real-valued) l1-SVD and RV l1-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.
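
    The single-vector ancestor of this algorithm, the smoothed l0 method of Mohimani and co-workers, conveys the core idea: replace the l0 norm by a Gaussian surrogate sum(exp(-s_i^2/(2*sigma^2))) and sharpen it by shrinking sigma. The sketch below is that classical SL0, not the MMV variant of the paper; the step size and schedule are illustrative.

      import numpy as np

      def sl0(A, y, sigma_min=1e-3, sigma_decay=0.7, inner_iters=3, mu=2.0):
          """Classical smoothed-l0 sparse recovery, single measurement vector."""
          A_pinv = np.linalg.pinv(A)
          s = A_pinv @ y                        # minimum-l2-norm starting point
          sigma = 2.0 * np.max(np.abs(s))
          while sigma > sigma_min:
              for _ in range(inner_iters):
                  s = s - mu * s * np.exp(-s ** 2 / (2 * sigma ** 2))  # ascent step
                  s = s - A_pinv @ (A @ s - y)                  # project onto As = y
              sigma *= sigma_decay
          return s

      rng = np.random.default_rng(0)
      A = rng.standard_normal((40, 100))
      x_true = np.zeros(100)
      x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
      x_hat = sl0(A, A @ x_true)                # recovers the three nonzeros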

  8. A new approach to the extraction of single exponential diode model parameters

    NASA Astrophysics Data System (ADS)

    Ortiz-Conde, Adelmo; García-Sánchez, Francisco J.

    2018-06-01

    A new integration method is presented for the extraction of the parameters of a single exponential diode model with series resistance from the measured forward I-V characteristics. The extraction is performed using auxiliary functions, based on the integration of the data, that allow the effects of each of the model parameters to be isolated. A differentiation method is also presented for data with a low level of experimental noise. Measured and simulated data are used to verify the applicability of both proposed methods. Physical insight into the validity of the model is also obtained by using the proposed graphical determinations of the parameters.
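
    For orientation, the baseline that integration-based extraction improves upon is a direct nonlinear least-squares fit of the same single-exponential diode model with series resistance, I = Is*(exp((V - I*Rs)/(n*VT)) - 1), here fitted in its explicit V(I) form. This is a generic sketch with synthetic data, not the integration method of the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      VT = 0.02585                                   # thermal voltage (~300 K), volts

      def v_of_i(i, i_s, n, r_s):
          """Inverted diode model: V = n*VT*ln(1 + I/Is) + I*Rs."""
          return n * VT * np.log1p(i / i_s) + i * r_s

      i_meas = np.logspace(-6, -2, 40)               # synthetic forward currents (A)
      v_meas = v_of_i(i_meas, 2e-9, 1.8, 12.0)       # stand-in for measured voltages
      (i_s, n, r_s), _ = curve_fit(v_of_i, i_meas, v_meas, p0=(1e-9, 1.5, 10.0))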

  9. An efficient quantum algorithm for spectral estimation

    NASA Astrophysics Data System (ADS)

    Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth

    2017-03-01

    We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
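
    The classical (non-quantum) matrix pencil method is itself short. The numpy sketch below estimates frequencies and damping factors of a sum of damped sinusoids from uniform samples; the pencil length and the dominant-pole selection are simple illustrative choices.

      import numpy as np

      def matrix_pencil(y, p, dt):
          """Estimate poles of y_k = sum_m a_m * z_m**k; p = number of components."""
          N = len(y)
          L = N // 2                                    # pencil parameter
          Y = np.array([y[i:i + L + 1] for i in range(N - L)])  # Hankel data matrix
          Y0, Y1 = Y[:, :-1], Y[:, 1:]
          U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
          Y0p = (U[:, :p] * s[:p]) @ Vh[:p]             # rank-p denoising
          z = np.linalg.eigvals(np.linalg.pinv(Y0p) @ Y1)
          z = z[np.argsort(-np.abs(z))][:p]             # keep p dominant poles
          return np.angle(z) / (2 * np.pi * dt), -np.log(np.abs(z)) / dt

      dt = 1e-3
      t = np.arange(0, 0.2, dt)
      y = np.exp(-3 * t) * np.cos(2 * np.pi * 50 * t)   # one damped sinusoid
      freqs, dampings = matrix_pencil(y, p=2, dt=dt)    # ~(+/-50 Hz, 3 1/s)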

  10. A Novel Method for Modeling Neumann and Robin Boundary Conditions in Smoothed Particle Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Emily M.; Tartakovsky, Alexandre M.; Amon, Cristina

    2010-08-26

    In this paper we present an improved method for handling Neumann or Robin boundary conditions in smoothed particle hydrodynamics. The Neumann and Robin boundary conditions are common to many physical problems (such as heat/mass transfer), and can prove challenging to model in volumetric modeling techniques such as smoothed particle hydrodynamics (SPH). A new SPH method for diffusion type equations subject to Neumann or Robin boundary conditions is proposed. The new method is based on the continuum surface force model [1] and allows an efficient implementation of the Neumann and Robin boundary conditions in the SPH method for geometrically complex boundaries. The paper discusses the details of the method and the criteria needed to apply the model. The model is used to simulate diffusion and surface reactions and its accuracy is demonstrated through test cases for boundary conditions describing different surface reactions.

  11. Single image super-resolution based on approximated Heaviside functions and iterative refinement

    PubMed Central

    Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian

    2018-01-01

    One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iteration including l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298

  12. Colloidal nanocrystals and method of making

    DOEpatents

    Kahen, Keith

    2015-10-06

    A tight confinement nanocrystal comprises a homogeneous center region having a first composition and a smoothly varying region having a second composition wherein a confining potential barrier monotonically increases and then monotonically decreases as the smoothly varying region extends from the surface of the homogeneous center region to an outer surface of the nanocrystal. A method of producing the nanocrystal comprises forming a first solution by combining a solvent and at most two nanocrystal precursors; heating the first solution to a nucleation temperature; adding to the first solution, a second solution having a solvent, at least one additional and different precursor to form the homogeneous center region and at most an initial portion of the smoothly varying region; and lowering the solution temperature to a growth temperature to complete growth of the smoothly varying region.

  13. The mechanism of double-exponential growth in hyper-inflation

    NASA Astrophysics Data System (ADS)

    Mizuno, T.; Takayasu, M.; Takayasu, H.

    2002-05-01

    Analyzing historical data of price indices, we find an extraordinary growth phenomenon in several examples of hyper-inflation in which, price changes are approximated nicely by double-exponential functions of time. In order to explain such behavior we introduce the general coarse-graining technique in physics, the Monte Carlo renormalization group method, to the price dynamics. Starting from a microscopic stochastic equation describing dealers’ actions in open markets, we obtain a macroscopic noiseless equation of price consistent with the observation. The effect of auto-catalytic shortening of characteristic time caused by mob psychology is shown to be responsible for the double-exponential behavior.
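
    Fitting such behavior is easiest in log space, where p(t) = a*exp(b*exp(c*t)) becomes ln p(t) = ln a + b*exp(c*t). A minimal scipy sketch with synthetic data follows; the coefficients and starting guesses are illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def log_price(t, ln_a, b, c):
          """Double-exponential growth fitted in log space."""
          return ln_a + b * np.exp(c * t)

      t = np.linspace(0, 24, 25)                       # months
      p = np.exp(0.5 + 0.8 * np.exp(0.12 * t))         # synthetic price index
      (ln_a, b, c), _ = curve_fit(log_price, t, np.log(p), p0=(0.0, 1.0, 0.1))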

  14. Real-Time Exponential Curve Fits Using Discrete Calculus

    NASA Technical Reports Server (NTRS)

    Rowe, Geoffrey

    2010-01-01

    An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
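
    One simple non-iterative route in this spirit (not necessarily the report's algorithm) uses finite differences: for uniformly sampled data, differences of y = A*e^(Bt) + C cancel C, successive difference ratios equal e^(B*dt), and A and C then follow from a single linear least-squares solve.

      import numpy as np

      def fit_exp_offset(t, y):
          """Non-iterative fit of y = A*exp(B*t) + C on a uniform grid (clean data)."""
          dt = t[1] - t[0]
          d = np.diff(y)                                # cancels the offset C
          B = np.log(np.mean(d[1:] / d[:-1])) / dt      # ratios give e**(B*dt)
          basis = np.column_stack([np.exp(B * t), np.ones_like(t)])
          (A, C), *_ = np.linalg.lstsq(basis, y, rcond=None)
          return A, B, C

      t = np.linspace(0, 5, 50)
      print(fit_exp_offset(t, 2.0 * np.exp(-0.7 * t) + 1.0))   # ~(2.0, -0.7, 1.0)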

  15. Count distribution for mixture of two exponentials as renewal process duration with applications

    NASA Astrophysics Data System (ADS)

    Low, Yeh Ching; Ong, Seng Huat

    2016-06-01

    A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
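
    The overdispersion is easy to exhibit by simulation: drawing renewal durations from a two-component exponential mixture (a hyperexponential) yields counts whose variance exceeds their mean. The mixture weight and rates below are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)

      def renewal_count(t_end, w=0.3, rate1=5.0, rate2=0.5):
          """Number of renewals in [0, t_end] with hyperexponential durations."""
          t, n = 0.0, 0
          while True:
              rate = rate1 if rng.random() < w else rate2   # pick mixture component
              t += rng.exponential(1.0 / rate)
              if t > t_end:
                  return n
              n += 1

      counts = np.array([renewal_count(10.0) for _ in range(5000)])
      print(counts.mean(), counts.var())    # variance > mean: overdispersion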

  16. Three-Dimensional Flow of Nanofluid Induced by an Exponentially Stretching Sheet: An Application to Solar Energy

    PubMed Central

    Khan, Junaid Ahmad; Mustafa, M.; Hayat, T.; Sheikholeslami, M.; Alsaedi, A.

    2015-01-01

    This work deals with the three-dimensional flow of nanofluid over a bi-directional exponentially stretching sheet. The effects of Brownian motion and thermophoretic diffusion of nanoparticles are considered in the mathematical model. The temperature and nanoparticle volume fraction at the sheet are also distributed exponentially. Local similarity solutions are obtained by an implicit finite difference scheme known as the Keller-box method. The results are compared with existing studies in some limiting cases and found to be in good agreement. The results reveal the existence of interesting Sparrow-Gregg-type hills in the temperature distribution corresponding to some range of parametric values. PMID:25785857

  17. Determination of the functioning parameters in asymmetrical flow field-flow fractionation with an exponential channel.

    PubMed

    Déjardin, P

    2013-08-30

    The flow conditions in normal mode asymmetric flow field-flow fractionation are determined to approach the high retention limit with the requirement d≪l≪w, where d is the particle diameter, l the characteristic length of the sample exponential distribution and w the channel height. The optimal entrance velocity is determined from the solute characteristics, the channel geometry (exponential to rectangular) and the membrane properties, according to a model providing the velocity fields all over the cell length. In addition, a method is proposed for in situ determination of the channel height. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Existence and exponential stability of traveling waves for delayed reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Hsu, Cheng-Hsiung; Yang, Tzi-Sheng; Yu, Zhixian

    2018-03-01

    The purpose of this work is to investigate the existence and exponential stability of traveling wave solutions for general delayed multi-component reaction-diffusion systems. Following the monotone iteration scheme via an explicit construction of a pair of upper and lower solutions, we first obtain the existence of monostable traveling wave solutions connecting two different equilibria. Then, applying the techniques of weighted energy method and comparison principle, we show that all solutions of the Cauchy problem for the considered systems converge exponentially to traveling wave solutions provided that the initial perturbations around the traveling wave fronts belong to a suitable weighted Sobolev space.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Sanjeev, E-mail: sanjeevsharma145@gmail.com; Kumar, Rajendra, E-mail: khundrakpam-ss@yahoo.com; Singh, Kh. S., E-mail: khundrakpam-ss@yahoo.com

    A simple design of a broadband one-dimensional dielectric/semiconductor multilayer structure having the refractive index profile of an exponentially graded material has been proposed. The theoretical analysis shows that the proposed structure works as a perfect mirror within a certain wavelength range around 1550 nm. In order to calculate the reflection properties, a transfer matrix method (TMM) has been used. This property shows that binary graded photonic crystal structures have a widened omnidirectional reflector (ODR) bandgap. Hence an exponentially graded photonic crystal structure can be used as a broadband optical reflector, and the range of reflection can be tuned to any wavelength region by varying the refractive index profile of the exponentially graded photonic crystal structure.
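
    Normal-incidence reflectance of such a graded stack can be sketched with the standard 2x2 characteristic-matrix form of the TMM. The layer count, the exponential index profile and the quarter-wave thicknesses below are illustrative assumptions, not the design of the paper.

      import numpy as np

      def reflectance(n_layers, d_layers, wavelength, n_in=1.0, n_sub=1.52):
          """Normal-incidence reflectance of a dielectric stack via 2x2 matrices."""
          M = np.eye(2, dtype=complex)
          for n, d in zip(n_layers, d_layers):
              delta = 2 * np.pi * n * d / wavelength     # layer phase thickness
              M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                [1j * n * np.sin(delta), np.cos(delta)]])
          B, C = M @ np.array([1.0, n_sub])
          r = (n_in * B - C) / (n_in * B + C)
          return abs(r) ** 2

      N = 40
      n_prof = 1.5 * np.exp(np.linspace(0.0, np.log(2.4 / 1.5), N))  # graded indices
      d_prof = 1550e-9 / (4 * n_prof)                # quarter-wave layers at 1550 nm
      print(reflectance(n_prof, d_prof, 1550e-9))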

  20. A Nonlinear Framework of Delayed Particle Smoothing Method for Vehicle Localization under Non-Gaussian Environment.

    PubMed

    Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong

    2016-05-13

    In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy, taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted in order to compute the probability density function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated on real-world data collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS significantly improves vehicle state accuracy and outperforms the existing filtering and smoothing methods.

  1. Multiple-Primitives Hierarchical Classification of Airborne Laser Scanning Data in Urban Areas

    NASA Astrophysics Data System (ADS)

    Ni, H.; Lin, X. G.; Zhang, J. X.

    2017-09-01

    A hierarchical classification method for Airborne Laser Scanning (ALS) data of urban areas is proposed in this paper. The method is composed of three stages, in which three types of primitives are utilized: smooth surfaces, rough surfaces, and individual points. In the first stage, the input ALS data are divided into smooth surfaces and rough surfaces by employing a step-wise point cloud segmentation method. In the second stage, classification based on smooth surfaces and rough surfaces is performed. Points in the smooth surfaces are first classified into ground and buildings based on semantic rules. Next, features of the rough surfaces are extracted. Then, points in rough surfaces are classified into vegetation and vehicles based on the derived features and Random Forests (RF). In the third stage, point-based features are extracted for the ground points, and an individual point classification procedure is performed to classify the ground points into bare land, artificial ground, and greenbelt. Moreover, the shortcomings of the existing studies are analyzed, and experiments show that the proposed method overcomes these shortcomings and handles more types of objects.

  2. Visual enhancement of unmixed multispectral imagery using adaptive smoothing

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2004-01-01

    Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process which results in a gray scale image. This paper discusses modifications to the AS method for application to multi-band data which results in a color segmented image. The process was used to visually enhance the three most distinct abundance fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which has importance to subsequent operations involving data classification.
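
    The anisotropic diffusion idea underlying AS can be sketched compactly in the Perona-Malik form: a conductance that decays with the local gradient smooths uniform regions while preserving contrast edges. This is a generic single-band sketch (wrap-around boundaries via np.roll, for brevity), not the multi-band AS modification of the paper.

      import numpy as np

      def anisotropic_diffusion(img, n_iter=20, kappa=20.0, lam=0.2):
          """Perona-Malik diffusion with conductance g = exp(-(|grad|/kappa)**2)."""
          u = img.astype(float).copy()
          for _ in range(n_iter):
              diffs = [np.roll(u, 1, axis=0) - u, np.roll(u, -1, axis=0) - u,
                       np.roll(u, 1, axis=1) - u, np.roll(u, -1, axis=1) - u]
              u += lam * sum(d * np.exp(-(d / kappa) ** 2) for d in diffs)
          return u

      smoothed = anisotropic_diffusion(np.random.rand(64, 64) * 255)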

  3. A Bayesian inversion for slip distribution of 1 Apr 2007 Mw8.1 Solomon Islands Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, T.; Luo, H.

    2013-12-01

    On 1 Apr 2007 the megathrust Mw8.1 Solomon Islands earthquake occurred in the southwest Pacific along the New Britain subduction zone. 102 vertical displacement measurements over the southeastern end of the rupture zone from two field surveys after this event provide a unique constraint for slip distribution inversion. In conventional inversion methods (such as bounded variable least squares), the smoothing parameter that determines the relative weight placed on fitting the data versus smoothing the slip distribution is often subjectively selected at the bend of the trade-off curve. Here a fully probabilistic inversion method [Fukuda, 2008] is applied to estimate the distributed slip and the smoothing parameter objectively. The joint posterior probability density function of the distributed slip and the smoothing parameter is formulated under a Bayesian framework and sampled with a Markov chain Monte Carlo method. We estimate the spatial distribution of dip slip associated with the 1 Apr 2007 Solomon Islands earthquake with this method. Early results show a shallower dip angle than previous studies and highly variable dip slip both along-strike and down-dip.

  4. A Drive Method of Permanent Magnet Synchronous Motor Using Torque Angle Estimation without Position Sensor

    NASA Astrophysics Data System (ADS)

    Tanaka, Takuro; Takahashi, Hisashi

    In some motor applications, it is very difficult to attach a position sensor to the motor inside its housing. One example of such an application is the dental handpiece motor. In those designs, it is necessary to drive the motor with high efficiency at low speed and under variable load conditions without a position sensor. We developed a method to control a motor efficiently and smoothly at low speed without a position sensor. In this paper, a method in which a permanent magnet synchronous motor is controlled smoothly and efficiently by using torque angle control in synchronized operation is shown. Its usefulness is confirmed by experimental results. In conclusion, the proposed sensorless control method achieves very efficient and smooth operation.

  5. Piecewise exponential models to assess the influence of job-specific experience on the hazard of acute injury for hourly factory workers

    PubMed Central

    2013-01-01

    Background An inverse relationship between experience and risk of injury has been observed in many occupations. Due to statistical challenges, however, it has been difficult to characterize the role of experience on the hazard of injury. In particular, because the time observed up to injury is equivalent to the amount of experience accumulated, the baseline hazard of injury becomes the main parameter of interest, excluding Cox proportional hazards models as applicable methods for consideration. Methods Using a data set of 81,301 hourly production workers of a global aluminum company at 207 US facilities, we compared competing parametric models for the baseline hazard to assess whether experience affected the hazard of injury at hire and after later job changes. Specific models considered included the exponential, Weibull, and two (a hypothesis-driven and a data-driven) two-piece exponential models to formally test the null hypothesis that experience does not impact the hazard of injury. Results We highlighted the advantages of our comparative approach and the interpretability of our selected model: a two-piece exponential model that allowed the baseline hazard of injury to change with experience. Our findings suggested a 30% increase in the hazard in the first year after job initiation and/or change. Conclusions Piecewise exponential models may be particularly useful in modeling risk of injury as a function of experience and have the additional benefit of interpretability over other similarly flexible models. PMID:23841648
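
    With the change point placed a priori (for instance at one year after job initiation or change), the two-piece exponential MLE is closed-form: each piece's hazard is its event count divided by its accumulated person-time. A minimal sketch with toy data follows; the variable names are generic.

      import numpy as np

      def two_piece_exponential_mle(t, event, tau=1.0):
          """Piecewise-constant hazard with one change point at tau.
          t = follow-up times; event = 1 if injury observed, 0 if censored."""
          t = np.asarray(t, float)
          event = np.asarray(event, int)
          exposure1 = np.minimum(t, tau).sum()          # person-time before tau
          exposure2 = np.maximum(t - tau, 0.0).sum()    # person-time after tau
          d1 = event[t <= tau].sum()                    # events in the first piece
          d2 = event[t > tau].sum()                     # events in the second piece
          lam1, lam2 = d1 / exposure1, d2 / exposure2
          return lam1, lam2, lam1 / lam2                # early-vs-late hazard ratio

      t = np.array([0.3, 2.0, 0.8, 4.5, 1.2])
      e = np.array([1, 0, 1, 1, 0])
      print(two_piece_exponential_mle(t, e, tau=1.0))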

  6. Correction of mid-spatial-frequency errors by smoothing in spin motion for CCOS

    NASA Astrophysics Data System (ADS)

    Zhang, Yizhong; Wei, Chaoyang; Shao, Jianda; Xu, Xueke; Liu, Shijie; Hu, Chen; Zhang, Haichao; Gu, Haojin

    2015-08-01

    Smoothing is a convenient and efficient way to correct mid-spatial-frequency errors. Quantifying the smoothing effect allows improvements in efficiency when finishing precision optics. A series of experiments in spin motion was performed to study the smoothing effects in correcting mid-spatial-frequency errors. Some experiments used the same pitch tool at different spinning speeds, and others used different tools at the same spinning speed. Shu's model was introduced and improved to describe and compare the smoothing efficiency at different spinning speeds and with different tools. From the experimental results, the mid-spatial-frequency errors on the initial surface were nearly smoothed out after the process in spin motion, and the number of smoothing iterations can be estimated by the model before the process. This method was also applied to smooth an aspherical component, which had an obvious mid-spatial-frequency error after Magnetorheological Finishing processing. As a result, a high-precision aspheric optical component was obtained with PV = 0.1λ and RMS = 0.01λ.

  7. Smooth halos in the cosmic web

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaite, José, E-mail: jose.gaite@upm.es

    Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economics and with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in cold dark matter N-body simulations. These entropic measures provide us with a quantitative description of the transition from the small scales portrayed as a distribution of halos to the larger scales portrayed as a cosmic web and, therefore, allow us to assign definite sizes to halos. However, these "smoothness sizes" have no direct relation to the virial radii. Finally, we discuss the influence of N-body discreteness parameters on smoothness.

  8. Computer programs for smoothing and scaling airfoil coordinates

    NASA Technical Reports Server (NTRS)

    Morgan, H. L., Jr.

    1983-01-01

    Detailed descriptions are given of the theoretical methods and associated computer codes of a program to smooth and a program to scale arbitrary airfoil coordinates. The smoothing program utilizes both least-squares polynomial and least-squares cubic spline techniques to iteratively smooth the second derivatives of the y-axis airfoil coordinates with respect to a transformed x-axis system which unwraps the airfoil and stretches the nose and trailing-edge regions. The corresponding smooth airfoil coordinates are then determined by solving a tridiagonal matrix of simultaneous cubic-spline equations relating the y-axis coordinates and their corresponding second derivatives. A technique for computing the camber and thickness distribution of the smoothed airfoil is also discussed. The scaling program can then be used to scale the thickness distribution generated by the smoothing program to a specific maximum thickness which is then combined with the camber distribution to obtain the final scaled airfoil contour. Computer listings of the smoothing and scaling programs are included.

  9. MIND Demons for MR-to-CT Deformable Image Registration In Image-Guided Spine Surgery

    PubMed Central

    Reaungamornrat, S.; De Silva, T.; Uneri, A.; Wolinsky, J.-P.; Khanna, A. J.; Kleinszig, G.; Vogt, S.; Prince, J. L.; Siewerdsen, J. H.

    2016-01-01

    Purpose Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. Method The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. Result The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. Conclusions A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT. The method yields registration accuracy suitable to application in image-guided spine surgery across a broad range of anatomical sites and modes of deformation. PMID:27330239

  10. MIND Demons for MR-to-CT deformable image registration in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Reaungamornrat, S.; De Silva, T.; Uneri, A.; Wolinsky, J.-P.; Khanna, A. J.; Kleinszig, G.; Vogt, S.; Prince, J. L.; Siewerdsen, J. H.

    2016-03-01

    Purpose: Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. Method: The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. Result: The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. Conclusions: A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT. The method yields registration accuracy suitable to application in image-guided spine surgery across a broad range of anatomical sites and modes of deformation.

  11. Surface Wave Tomography with Spatially Varying Smoothing Based on Continuous Model Regionalization

    NASA Astrophysics Data System (ADS)

    Liu, Chuanming; Yao, Huajian

    2017-03-01

    Surface wave tomography based on continuous regionalization of model parameters is widely used to invert for 2-D phase or group velocity maps. An inevitable problem is that the distribution of ray paths is far from homogeneous due to the spatially uneven distribution of stations and seismic events, which often affects the spatial resolution of the tomographic model. We present an improved tomographic method with a spatially varying smoothing scheme that is based on the continuous regionalization approach. The smoothness of the inverted model is constrained by the Gaussian a priori model covariance function with spatially varying correlation lengths based on ray path density. In addition, a two-step inversion procedure is used to suppress the effects of data outliers on tomographic models. Both synthetic and real data are used to evaluate this newly developed tomographic algorithm. In the synthetic tests, when the contrived model has different scales of anomalies but with uneven ray path distribution, we compare the performance of our spatially varying smoothing method with the traditional inversion method, and show that the new method is capable of improving the recovery in regions of dense ray sampling. For real data applications, the resulting phase velocity maps of Rayleigh waves in SE Tibet produced using the spatially varying smoothing method show similar features to the results with the traditional method. However, the new results contain more detailed structures and appears to better resolve the amplitude of anomalies. From both synthetic and real data tests we demonstrate that our new approach is useful to achieve spatially varying resolution when used in regions with heterogeneous ray path distribution.

  12. New exponential synchronization criteria for time-varying delayed neural networks with discontinuous activations.

    PubMed

    Cai, Zuowei; Huang, Lihong; Zhang, Lingling

    2015-05-01

    This paper investigates the problem of exponential synchronization of time-varying delayed neural networks with discontinuous neuron activations. Under the extended Filippov differential inclusion framework, by designing discontinuous state-feedback controller and using some analytic techniques, new testable algebraic criteria are obtained to realize two different kinds of global exponential synchronization of the drive-response system. Moreover, we give the estimated rate of exponential synchronization which depends on the delays and system parameters. The obtained results extend some previous works on synchronization of delayed neural networks not only with continuous activations but also with discontinuous activations. Finally, numerical examples are provided to show the correctness of our analysis via computer simulations. Our method and theoretical results have a leading significance in the design of synchronized neural network circuits involving discontinuous factors and time-varying delays. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Exponential parameter and tracking error convergence guarantees for adaptive controllers without persistency of excitation

    NASA Astrophysics Data System (ADS)

    Chowdhary, Girish; Mühlegg, Maximilian; Johnson, Eric

    2014-08-01

    In model reference adaptive control (MRAC) the modelling uncertainty is often assumed to be parameterised with time-invariant unknown ideal parameters. The convergence of parameters of the adaptive element to these ideal parameters is beneficial, as it guarantees exponential stability, and makes an online learned model of the system available. Most MRAC methods, however, require persistent excitation of the states to guarantee that the adaptive parameters converge to the ideal values. Enforcing PE may be resource intensive and often infeasible in practice. This paper presents theoretical analysis and illustrative examples of an adaptive control method that leverages the increasing ability to record and process data online by using specifically selected and online recorded data concurrently with instantaneous data for adaptation. It is shown that when the system uncertainty can be modelled as a combination of known nonlinear bases, simultaneous exponential tracking and parameter error convergence can be guaranteed if the system states are exciting over finite intervals such that rich data can be recorded online; PE is not required. Furthermore, the rate of convergence is directly proportional to the minimum singular value of the matrix containing online recorded data. Consequently, an online algorithm to record and forget data is presented and its effects on the resulting switched closed-loop dynamics are analysed. It is also shown that when radial basis function neural networks (NNs) are used as adaptive elements, the method guarantees exponential convergence of the NN parameters to a compact neighbourhood of their ideal values without requiring PE. Flight test results on a fixed-wing unmanned aerial vehicle demonstrate the effectiveness of the method.

  14. Method for producing smooth inner surfaces

    DOEpatents

    Cooper, Charles A.

    2016-05-17

    The invention provides a method for preparing superconducting cavities, the method comprising causing polishing media to tumble by centrifugal barrel polishing within the cavities for a time sufficient to attain a surface smoothness of less than 15 nm root mean square roughness over approximately a 1 mm.sup.2 scan area. The method also provides for a method for preparing superconducting cavities, the method comprising causing polishing media bound to a carrier to tumble within the cavities. The method also provides for a method for preparing superconducting cavities, the method comprising causing polishing media in a slurry to tumble within the cavities.

  15. Exponential Nutrient Loading as a Means to Optimize Bareroot Nursery Fertility of Oak Species

    Treesearch

    Zonda K. D. Birge; Douglass F. Jacobs; Francis K. Salifu

    2006-01-01

    Conventional fertilization in nursery culture of hardwoods may involve supply of equal fertilizer doses at regularly spaced intervals during the growing season, which may create a surplus of available nutrients in the beginning and a deficiency in nutrient availability by the end of the growing season. A method of fertilization termed “exponential nutrient loading” has...

  16. A Spectral Lyapunov Function for Exponentially Stable LTV Systems

    NASA Technical Reports Server (NTRS)

    Zhu, J. Jim; Liu, Yong; Hang, Rui

    2010-01-01

    This paper presents the formulation of a Lyapunov function for an exponentially stable linear timevarying (LTV) system using a well-defined PD-spectrum and the associated PD-eigenvectors. It provides a bridge between the first and second methods of Lyapunov for stability assessment, and will find significant applications in the analysis and control law design for LTV systems and linearizable nonlinear time-varying systems.

  17. Redshift data and statistical inference

    NASA Technical Reports Server (NTRS)

    Newman, William I.; Haynes, Martha P.; Terzian, Yervant

    1994-01-01

    Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals or bins than is advisable by statistical theory, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimate of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.

  18. Bi-exponential T2 analysis of healthy and diseased Achilles tendons: an in vivo preliminary magnetic resonance study and correlation with clinical score.

    PubMed

    Juras, Vladimir; Apprich, Sebastian; Szomolanyi, Pavol; Bieri, Oliver; Deligianni, Xeni; Trattnig, Siegfried

    2013-10-01

    To compare mono- and bi-exponential T2 analysis in healthy and degenerated Achilles tendons using a recently introduced magnetic resonance variable-echo-time sequence (vTE) for T2 mapping. Ten volunteers and ten patients were included in the study. A variable-echo-time sequence was used with 20 echo times. Images were post-processed with both techniques, mono-exponential [T2m] and bi-exponential [short T2 component (T2s) and long T2 component (T2l)]. The number of mono- and bi-exponentially decaying pixels in each region of interest was expressed as a ratio (B/M). Patients were clinically assessed with the Achilles Tendon Rupture Score (ATRS), and these values were correlated with the T2 values. The means for both T2m and T2s were statistically significantly different between patients and volunteers; however, for T2s, the P value was lower. In patients, the Pearson correlation coefficient between ATRS and T2s was -0.816 (P = 0.007). The proposed variable-echo-time sequence can be successfully used as an alternative method to UTE sequences with some added benefits, such as a short imaging time along with relatively high resolution and minimised blurring artefacts, and minimised susceptibility artefacts and chemical shift artefacts. Bi-exponential T2 calculation is superior to mono-exponential in terms of statistical significance for the diagnosis of Achilles tendinopathy. • Magnetic resonance imaging offers new insight into healthy and diseased Achilles tendons • Bi-exponential T2 calculation in Achilles tendons is more beneficial than mono-exponential • A short T2 component correlates strongly with clinical score • Variable echo time sequences successfully used instead of ultrashort echo time sequences.
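
    The bi-exponential model itself is a two-compartment fit of signal against echo time, S(TE) = As*exp(-TE/T2s) + Al*exp(-TE/T2l). A minimal scipy sketch with synthetic data follows; the starting values and bounds (a short component around a millisecond, a long one of tens of milliseconds) are illustrative assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def biexp(te, a_s, t2s, a_l, t2l):
          """Two-compartment T2 decay: short and long components."""
          return a_s * np.exp(-te / t2s) + a_l * np.exp(-te / t2l)

      te = np.linspace(0.5, 40.0, 20)                   # echo times (ms)
      s = biexp(te, 0.6, 1.2, 0.4, 30.0)                # synthetic voxel signal
      p0 = (0.5, 1.0, 0.5, 25.0)
      bounds = ([0, 0.1, 0, 5.0], [np.inf, 5.0, np.inf, 200.0])
      popt, _ = curve_fit(biexp, te, s, p0=p0, bounds=bounds)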

  19. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    NASA Astrophysics Data System (ADS)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    Earthquake recurrence interval is one of the important ingredients of probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are well-established probability models for recurrence interval estimation. However, they have certain shortcomings too. Thus, it is imperative to search for alternative, more sophisticated distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate the scope of this distribution as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its complicated form, it is widely accepted in medical and biological applications. Furthermore, it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be easily computed even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
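
    The density and hazard referred to above have simple closed forms, with CDF F(x) = (1 - e^(-lam*(x-mu)))^alpha. A sketch follows; the parameter names are generic.

      import numpy as np

      def gexp_pdf(x, alpha, lam, mu=0.0):
          """Three-parameter exponentiated (generalized) exponential density."""
          z = np.maximum(x - mu, 1e-12)
          return alpha * lam * np.exp(-lam * z) * (1.0 - np.exp(-lam * z)) ** (alpha - 1)

      def gexp_hazard(x, alpha, lam, mu=0.0):
          """Hazard h = f/(1 - F); closed form for any shape alpha, unlike the gamma."""
          z = np.maximum(x - mu, 1e-12)
          cdf = (1.0 - np.exp(-lam * z)) ** alpha
          return gexp_pdf(x, alpha, lam, mu) / (1.0 - cdf)

      print(gexp_hazard(np.array([5.0, 50.0, 150.0]), alpha=2.0, lam=0.02))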

  20. Local/non-local regularized image segmentation using graph-cuts: application to dynamic and multispectral MRI.

    PubMed

    Hanson, Erik A; Lundervold, Arvid

    2013-11-01

    Multispectral, multichannel, or time series image segmentation is important for image analysis in a wide range of applications. Regularization of the segmentation is commonly performed using local image information, causing the segmented image to be locally smooth or piecewise constant. A new spatial regularization method, incorporating non-local information, was developed and tested. Our spatial regularization method applies to feature space classification in multichannel images such as color images and MR image sequences. The spatial regularization involves local edge properties, region boundary minimization, as well as non-local similarities. The method is implemented in a discrete graph-cut setting allowing fast computations. The method was tested on multidimensional MRI recordings from human kidney and brain in addition to simulated MRI volumes. The proposed method successfully segments regions with both smooth and complex non-smooth shapes with a minimum of user interaction.

  1. Phytoplankton productivity in relation to light intensity: A simple equation

    USGS Publications Warehouse

    Peterson, D.H.; Perry, M.J.; Bencala, K.E.; Talbot, M.C.

    1987-01-01

    A simple exponential equation is used to describe photosynthetic rate as a function of light intensity for a variety of unicellular algae and higher plants, where photosynthesis is proportional to (1 - e^(-γI)). The parameter γ (= Ik^(-1)) is derived by a simultaneous curve-fitting method, where I is incident quantum-flux density. The exponential equation is tested against a wide range of data and is found to adequately describe P vs. I curves. The errors associated with photosynthetic parameters are calculated. A simplified statistical model (Poisson) of photon capture provides a biophysical basis for the equation and for its ability to fit a range of light intensities. The exponential equation provides a non-subjective simultaneous curve-fitting estimate for photosynthetic efficiency (α) which is less ambiguous than subjective methods: subjective methods assume that a linear region of the P vs. I curve is readily identifiable. Photosynthetic parameters γ and α are used widely in aquatic studies to define photosynthesis at low quantum flux. These parameters are particularly important in estuarine environments where high suspended-material concentrations and high diffuse-light extinction coefficients are commonly encountered. © 1987.
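
    A minimal fitting sketch of this model, P = Pmax*(1 - e^(-γI)), with the low-light efficiency recovered as α = Pmax*γ; the data arrays and starting guesses below are synthetic placeholders.

      import numpy as np
      from scipy.optimize import curve_fit

      def pi_curve(i, p_max, gamma):
          """Photosynthesis vs irradiance: P = P_max*(1 - exp(-gamma*I))."""
          return p_max * (1.0 - np.exp(-gamma * i))

      i_obs = np.linspace(0.0, 800.0, 40)           # quantum-flux densities
      p_obs = pi_curve(i_obs, 12.0, 0.008)          # stand-in for measured rates
      (p_max, gamma), _ = curve_fit(pi_curve, i_obs, p_obs, p0=(10.0, 0.01))
      alpha = p_max * gamma                         # initial slope (efficiency)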

  2. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
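
    For reference, the non-adaptive core that such recursive-likelihood methods build on is the scalar Kalman recursion for a local-level model; the following is a generic minimal sketch, not the report's adaptive scheme.

      import numpy as np

      def kalman_filter(y, q, r, x0=0.0, p0=1.0):
          """Scalar local-level model: x_t = x_{t-1} + w (var q), y_t = x_t + v (var r)."""
          x, p, out = x0, p0, []
          for obs in y:
              p = p + q                      # predict
              k = p / (p + r)                # Kalman gain
              x = x + k * (obs - x)          # update with the innovation
              p = (1.0 - k) * p
              out.append(x)
          return np.array(out)

      rng = np.random.default_rng(0)
      level = np.cumsum(rng.normal(0.0, 0.05, 200))
      y = level + rng.normal(0.0, 0.3, 200)
      xhat = kalman_filter(y, q=0.05 ** 2, r=0.3 ** 2)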

  3. Data preparation for functional data analysis of PM10 in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Shaadan, Norshahida; Jemain, Abdul Aziz; Deni, Sayang Mohd

    2014-07-01

    The use of curves or functional data in statistical analysis is gaining momentum across various fields of research. The statistical method to analyze such data is known as functional data analysis (FDA). The first step in FDA is to convert the observed data points, which are repeatedly recorded over a period of time or space, into either a rough (raw) or smooth curve. In the case of the smooth curve, basis function expansion is one of the methods used for the data conversion. The data can be converted into a smooth curve either by using the regression smoothing or the roughness penalty smoothing approach. With the regression smoothing approach, the degree of the curve's smoothness depends on the number k of basis functions; meanwhile, for the roughness penalty approach, the smoothness depends on a roughness coefficient given by the parameter λ. Based on previous studies, researchers often used the rather time-consuming trial and error or cross validation method to estimate the appropriate number of basis functions. Thus, this paper proposes a statistical procedure to construct functional data or curves for hourly and daily recorded data. The Bayesian Information Criterion is used to determine the number of basis functions, while the Generalized Cross Validation criterion is used to identify the parameter λ. The proposed procedure is then applied to a ten-year (2001-2010) period of PM10 data from 30 air quality monitoring stations located in Peninsular Malaysia. It was found that the number of basis functions required for the construction of the PM10 daily curve in Peninsular Malaysia was in the interval between 14 and 20, with an average value of 17; the first percentile is 15 and the third percentile is 19. Meanwhile, the initial value of the roughness coefficient was in the interval between 10^-5 and 10^-7, and the mode was 10^-6. An example of the functional descriptive analysis is also shown.

  4. Simple data-smoothing and noise-suppression technique

    NASA Technical Reports Server (NTRS)

    Duty, R. L.

    1970-01-01

    An algorithm, based on the Borel method of summing divergent sequences, is used for smoothing noisy data where knowledge of the frequency content is not required. The technique's effectiveness is demonstrated by a series of graphs.

  5. Local denoising of digital speckle pattern interferometry fringes by multiplicative correlation and weighted smoothing splines.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2005-05-10

    We evaluate the use of smoothing splines with a weighted roughness measure for local denoising of the correlation fringes produced in digital speckle pattern interferometry. In particular, we also evaluate the performance of the multiplicative correlation operation between two speckle patterns that is proposed as an alternative procedure to generate the correlation fringes. It is shown that the application of a normalization algorithm to the smoothed correlation fringes reduces the excessive bias generated in the previous filtering stage. The evaluation is carried out by use of computer-simulated fringes that are generated for different average speckle sizes and intensities of the reference beam, including decorrelation effects. A comparison with filtering methods based on the continuous wavelet transform is also presented. Finally, the performance of the smoothing method in processing experimental data is illustrated.

  6. The Comparison Study of Quadratic Infinite Beam Program on Optimization Instensity Modulated Radiation Therapy Treatment Planning (IMRTP) between Threshold and Exponential Scatter Method with CERR® In The Case of Lung Cancer

    NASA Astrophysics Data System (ADS)

    Hardiyanti, Y.; Haekal, M.; Waris, A.; Haryanto, F.

    2016-08-01

    This research compares the quadratic optimization program for Intensity Modulated Radiation Therapy Treatment Planning (IMRTP) with the Computational Environment for Radiotherapy Research (CERR) software. Treatment plans with 9 and 13 beams were considered. The case used an energy of 6 MV with a Source Skin Distance (SSD) of 100 cm from the target volume. Dose calculation used the Quadratic Infinite Beam (QIB) method from CERR. CERR was used in the comparison study between the Gauss Primary threshold method and the Gauss Primary exponential scatter method. In the case of lung cancer, threshold values of 0.01 and 0.004 were used. The dose distributions were analysed in the form of DVHs from CERR. When the dose calculation used the exponential method with 9 beams, the maximum dose distributions were obtained on the Planning Target Volume (PTV), Clinical Target Volume (CTV), Gross Tumor Volume (GTV), liver, and skin. When the dose calculation used the threshold method with 13 beams, the maximum dose distributions were obtained on the PTV, GTV, heart, and skin.

  7. Targeting smooth emergence: the effect site concentration of remifentanil for preventing cough during emergence during propofol-remifentanil anaesthesia for thyroid surgery.

    PubMed

    Lee, B; Lee, J-R; Na, S

    2009-06-01

    The administration of short-acting opioids can be a reliable and safe method of preventing coughing during emergence from anaesthesia, but the proper dose or effect site concentration of remifentanil for this purpose has not been reported. We therefore investigated the effect site concentration (Ce) of remifentanil for preventing cough during emergence from anaesthesia with propofol-remifentanil target-controlled infusion. Twenty-three ASA I-II female patients, aged 23-66 yr, undergoing elective thyroidectomy were enrolled in this study. The EC(50) and EC(95) of remifentanil for preventing cough were determined using Dixon's up-and-down method and probit analysis. Propofol effect site concentration at extubation, mean arterial pressure, and heart rate (HR) were compared between patients with and without smooth emergence. Three of 11 patients with a remifentanil Ce of 1.5 ng ml(-1) and all seven patients with a Ce of 2.0 ng ml(-1) did not cough during emergence; the EC(50) of remifentanil that suppressed coughing was 1.46 ng ml(-1) by Dixon's up-and-down method, and the EC(95) was 2.14 ng ml(-1) by probit analysis. The effect site concentration of propofol at awakening was similar in patients with and without a smooth emergence, but HR and arterial pressure were higher in those who coughed during emergence. Clinically significant hypoventilation was not seen in any patient. We found that the EC(95) of the effect site concentration of remifentanil to suppress coughing at emergence from anaesthesia was 2.14 ng ml(-1). Maintaining an established Ce of remifentanil is a reliable method of abolishing cough and thereby targeting smooth emergence from anaesthesia.

  8. A comparison of spatial analysis methods for the construction of topographic maps of retinal cell density.

    PubMed

    Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P

    2014-01-01

    Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neuronal distributions, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. Interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps; it preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and Gaussian kernel methods produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
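
    The smoothing-versus-interpolation distinction discussed above is easy to reproduce; the snippet below applies Gaussian kernel smoothing (one of the four methods compared) to gridded cell counts. It is a generic sketch rather than the paper's R-script, and the grid size and kernel width are arbitrary choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Noisy gridded counts standing in for retinal sampling sites.
rng = np.random.default_rng(1)
density = rng.poisson(lam=50, size=(40, 40)).astype(float)

# Gaussian kernel smoothing: a larger sigma suppresses more sampling noise
# but also flattens genuine shallow density gradients.
smoothed = gaussian_filter(density, sigma=2.0)
```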

  9. MIND Demons for MR-to-CT Deformable Image Registration In Image-Guided Spine Surgery.

    PubMed

    Reaungamornrat, S; De Silva, T; Uneri, A; Wolinsky, J-P; Khanna, A J; Kleinszig, G; Vogt, S; Prince, J L; Siewerdsen, J H

    2016-02-27

    Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality-independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness of the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable for multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields as used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and in clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with the capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with a mean target registration error (TRE) of 1.5 mm, compared to 10.9, 2.3, and 4.6 mm for the MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. A modality-independent deformable registration method has thus been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT, with registration accuracy suitable for application in image-guided spine surgery across a broad range of anatomical sites and modes of deformation.

  10. Evaluation of the matrix exponential for use in ground-water-flow and solute-transport simulations; theoretical framework

    USGS Publications Warehouse

    Umari, A.M.; Gorelick, S.M.

    1986-01-01

    It is possible to obtain analytic solutions to the groundwater flow and solute transport equations if the space variables are discretized but time is left continuous. From these solutions, hydraulic head and concentration fields for any future time can be obtained without 'marching' through intermediate time steps. This analytical approach involves matrix exponentiation and is referred to as the Matrix Exponential Time Advancement (META) method. Two algorithms are presented for the META method, one for symmetric and the other for non-symmetric exponent matrices. A numerical accuracy indicator, referred to as the matrix condition number, is defined and used to determine the maximum number of significant figures that may be lost in the META method computations. The relative computational and storage requirements of the META method with respect to the time-marching method increase with the number of nodes in the discretized problem. The potentially greater accuracy of the META method, and the associated greater reliability through use of the matrix condition number, have to be weighed against these increased computational and storage requirements as the number of nodes becomes large. For a particular number of nodes, the META method may be computationally more efficient than the time-marching method, depending on the size of the time steps used in the latter. A numerical example illustrates application of the META method to a sample ground-water-flow problem. (Author's abstract)
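
    The essence of META can be sketched in a few lines (a schematic example with a toy 1-D diffusion operator, not the USGS implementation): once the spatial operator A of the semi-discretized system dh/dt = A h has been formed, the state at any future time follows from a single matrix exponential, with no intermediate time steps:

```python
import numpy as np
from scipy.linalg import expm

n = 50
# Toy 1-D diffusion stencil standing in for the discretized flow operator.
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
h0 = np.random.rand(n)          # initial hydraulic heads

t = 5.0
h_t = expm(A * t) @ h0          # heads at time t directly, no time marching
```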

  11. One-dimensional backreacting holographic superconductors with exponential nonlinear electrodynamics

    NASA Astrophysics Data System (ADS)

    Ghotbabadi, B. Binaei; Zangeneh, M. Kord; Sheykhi, A.

    2018-05-01

    In this paper, we investigate the effects of nonlinear exponential electrodynamics, as well as of backreaction, on the properties of one-dimensional s-wave holographic superconductors. We carry out our study both analytically and numerically: in the analytical study we employ the Sturm-Liouville method, while in the numerical approach we use the shooting method. We obtain a relation between the critical temperature and the chemical potential analytically, and our results show good agreement between the analytical and numerical methods. We observe that increasing the strength of either the nonlinearity or the backreaction parameter makes the formation of the condensate in the black hole background harder and lowers the critical temperature. These results are consistent with those obtained for two-dimensional s-wave holographic superconductors.

  12. A spatial scan statistic for survival data based on Weibull distribution.

    PubMed

    Bhatt, Vijaya; Tiwari, Neeraj

    2014-05-20

    The spatial scan statistic has been developed as a geographical cluster detection analysis tool for different types of data sets such as Bernoulli, Poisson, ordinal, normal and exponential. We propose a scan statistic for survival data based on the Weibull distribution. It may also be used for other survival distributions, such as the exponential, gamma, and log-normal. The proposed method is applied to the survival data of tuberculosis patients for the years 2004-2005 in the Nainital district of Uttarakhand, India. Simulation studies reveal that the proposed method performs well for different survival distribution functions. Copyright © 2013 John Wiley & Sons, Ltd.

  13. Stability in Cohen Grossberg-type bidirectional associative memory neural networks with time-varying delays

    NASA Astrophysics Data System (ADS)

    Cao, Jinde; Song, Qiankun

    2006-07-01

    In this paper, the exponential stability problem is investigated for a class of Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays. By using analysis methods, inequality techniques and the properties of an M-matrix, several novel sufficient conditions ensuring the existence, uniqueness and global exponential stability of the equilibrium point are derived, and the exponential convergence rate is estimated. The obtained results are less restrictive than those given in the earlier literature: the requirements of boundedness and differentiability of the activation functions and of differentiability of the time-varying delays are removed. Two examples with their simulations are given to show the effectiveness of the obtained results.

  14. Rotating flow of a nanofluid due to an exponentially stretching surface with suction

    NASA Astrophysics Data System (ADS)

    Salleh, Siti Nur Alwani; Bachok, Norfifah; Arifin, Norihan Md

    2017-08-01

    The rotating flow of a nanofluid past an exponentially stretching surface in the presence of suction is studied in this work. Three different types of nanoparticles, namely copper, titania and alumina, are considered. The governing partial differential equations are reduced to a system of ordinary differential equations using similarity transformations in exponential form, and this system is then solved numerically by a shooting method in Maple. The physical effects of the rotation, suction and nanoparticle volume fraction parameters on the flow and heat transfer are investigated and described in detail through graphs. Dual solutions are found to appear when the governing parameters reach a certain range.

  15. An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization

    DTIC Science & Technology

    2012-08-17

    Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not ...), significantly outperforming several state-of-the-art solvers on most tested problems. The resulting MATLAB solver, called TVAL3, has been posted online [23].

  16. Exponentially Stabilizing Robot Control Laws

    NASA Technical Reports Server (NTRS)

    Wen, John T.; Bayard, David S.

    1990-01-01

    New class of exponentially stabilizing control laws for joint-level control of robotic manipulators introduced. In case of set-point control, approach offers simplicity of proportional/derivative control architecture. In case of tracking control, approach provides several important alternatives to computed-torque method in terms of computational requirements and convergence. New control laws can be modified in simple fashion to obtain asymptotically stable adaptive control when robot model and/or payload mass properties are unknown.

  17. Numerically stable formulas for a particle-based explicit exponential integrator

    NASA Astrophysics Data System (ADS)

    Nadukandi, Prashanth

    2015-05-01

    Numerically stable formulas are presented for the closed-form analytical solution of the X-IVAS scheme in 3D. This scheme is a state-of-the-art particle-based explicit exponential integrator developed for the particle finite element method. Algebraically, this scheme involves two steps: (1) the solution of tangent curves for piecewise linear vector fields defined on simplicial meshes and (2) the solution of line integrals of piecewise linear vector-valued functions along these tangent curves. Hence, the stable formulas presented here have general applicability, e.g. exact integration of trajectories in particle-based (Lagrangian-type) methods, flow visualization and computer graphics. The Newton form of the polynomial interpolation definition is used to express exponential functions of matrices which appear in the analytical solution of the X-IVAS scheme. The divided difference coefficients in these expressions are defined in a piecewise manner, i.e. in a prescribed neighbourhood of removable singularities their series approximations are computed. An optimal series approximation of divided differences is presented which plays a critical role in this methodology. At least ten significant decimal digits in the formula computations are guaranteed to be exact using double-precision floating-point arithmetic. The worst case scenarios occur in the neighbourhood of removable singularities found in fourth-order divided differences of the exponential function.
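
    The flavour of the piecewise series treatment can be seen in a one-dimensional analogue (a sketch only; the paper handles higher-order divided differences with matrix arguments). The first divided difference of exp over {0, x} is (e^x - 1)/x, whose direct evaluation suffers catastrophic cancellation near the removable singularity at x = 0 and is therefore replaced by a truncated Taylor series there:

```python
import numpy as np

def phi1(x, tol=1e-2):
    """(exp(x) - 1) / x with a series branch near the removable singularity."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.empty_like(x)
    small = np.abs(x) < tol
    # The direct formula is fine away from zero ...
    out[~small] = (np.exp(x[~small]) - 1.0) / x[~small]
    # ... but cancels catastrophically near zero, where a short Taylor series
    # is accurate to roughly 13 digits for |x| < tol.
    xs = x[small]
    out[small] = 1.0 + xs / 2.0 + xs**2 / 6.0 + xs**3 / 24.0 + xs**4 / 120.0
    return out
```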

  18. Geographic analysis of road accident severity index in Nigeria.

    PubMed

    Iyanda, Ayodeji E

    2018-05-28

    Before 2030, deaths from road traffic accidents (RTAs) will surpass those from cerebrovascular disease, tuberculosis, and HIV/AIDS, yet there is little knowledge of the geographic distribution of RTA severity in Nigeria. The Accident Severity Index is the proportion of deaths that result from a road accident. This study analysed the geographic pattern of RTA severity based on data retrieved from the Federal Road Safety Corps (FRSC), and predicted two years of data from the historic road accident data using an exponential smoothing technique. To determine spatial autocorrelation, global and local indicators of spatial association were implemented in a geographic information system. Results show significant clusters of high RTA severity among states in the northeast and northwest of Nigeria. The findings are discussed from two perspectives: road traffic law compliance and poor emergency response. In conclusion, RTA severity is high in the northern states of Nigeria, and RTA remains a public health concern.
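
    A hedged sketch of the forecasting step is shown below. The FRSC series itself is not reproduced here, so the monthly counts are synthetic, and the additive Holt-Winters configuration is an assumption rather than the study's exact specification:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly accident counts with a mild seasonal cycle.
rng = np.random.default_rng(2)
idx = pd.date_range("2010-01-01", periods=72, freq="MS")
counts = pd.Series(200 + 30 * np.sin(np.arange(72) * 2 * np.pi / 12)
                   + rng.normal(0, 10, 72), index=idx)

fit = ExponentialSmoothing(counts, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
forecast = fit.forecast(24)     # a two-year horizon, as in the study
```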

  19. Dynamics of skirting droplets

    NASA Astrophysics Data System (ADS)

    Akers, Caleb; Hale, Jacob

    2014-11-01

    It has been observed that non-coalescence between a droplet and a pool of like fluid can be prolonged or inhibited by sustained relative motion between the two fluids. In this study, we quantitatively describe the motion of freely moving droplets that skirt across the surface of a still pool of like fluid. Droplets of different sizes and small Weber number were directed horizontally onto the pool surface. Once the droplet shape stabilized after impact, the droplets moved smoothly across the surface, slowing until coalescence. Using high-speed imaging, we recorded each droplet's trajectory from a top-down view as well as side views both slightly above and below the fluid surface. The droplets' speed is observed to decrease exponentially, with smaller droplets slowing down at a greater rate. Droplets infused with neutral-density micro beads showed that the droplet rolls along the surface of the pool. A qualitative model of this motion is presented.

  20. Local Stable and Unstable Manifolds and Their Control in Nonautonomous Finite-Time Flows

    NASA Astrophysics Data System (ADS)

    Balasuriya, Sanjeeva

    2016-08-01

    It is well known that stable and unstable manifolds strongly influence fluid motion in unsteady flows. These emanate from hyperbolic trajectories, with the structures moving nonautonomously in time. The local directions of emanation at each instant in time are the focus of this article. Within a nearly autonomous setting, it is shown that these time-varying directions can be characterised through the accumulated effect of velocity shear. Connections to Oseledets spaces and projection operators in exponential dichotomies are established. Availability of data for both infinite- and finite-time intervals is considered. With microfluidic flow control in mind, a methodology for manipulating these directions in any prescribed time-varying fashion by applying a local velocity shear is developed. The results are verified for both smoothly and discontinuously time-varying directions using finite-time Lyapunov exponent fields, and excellent agreement is obtained.

  1. Maximum initial growth-rate of strong-shock-driven Richtmyer-Meshkov instability

    NASA Astrophysics Data System (ADS)

    Abarzhi, Snezhana I.; Bhowmich, Aklant K.; Dell, Zachary R.; Pandian, Arun; Stanic, Milos; Stellingwerf, Robert F.; Swisher, Nora C.

    2017-10-01

    We focus on the classical problem of the dependence of the initial growth-rate of strong-shock-driven Richtmyer-Meshkov instability (RMI) on the initial conditions, by developing a novel empirical model and by employing rigorous theories and Smoothed Particle Hydrodynamics (SPH) simulations to describe the simulation data with statistical confidence in a broad parameter regime. For given values of the shock strength, the fluids' density ratio, and the wavelength of the initial perturbation of the fluid interface, we find the maximum value of the RMI initial growth-rate, the corresponding amplitude scale of the initial perturbation, and the maximum fraction of interfacial energy. This amplitude scale is independent of the shock strength and density ratio and is a characteristic quantity of RMI dynamics. We discover an exponential decay of the ratio of the initial and linear growth-rates of RMI with the initial perturbation amplitude, in excellent agreement with available data. National Science Foundation, USA.

  3. Modeling Day-to-day Flow Dynamics on Degradable Transport Network

    PubMed Central

    Gao, Bo; Zhang, Ronghui; Lou, Xiaoming

    2016-01-01

    Stochastic link capacity degradations are common phenomena in transport networks and can cause travel time variations, which in turn affect travelers' daily route choice behaviors. This paper formulates a deterministic dynamic model to capture the day-to-day (DTD) flow evolution process in the presence of link capacity degradations. The aggregated network flow dynamics are driven by travelers' learning of uncertain travel times and their choice of risky routes. The paper applies an exponential-smoothing filter to describe travelers' learning of travel time variations, and formulates a risk-attitude parameter updating equation to reflect travelers' endogenous risk-attitude evolution. In addition, theoretical analyses investigate several significant mathematical characteristics of the proposed DTD model, including fixed-point existence, uniqueness, stability and irreversibility. Numerical experiments demonstrate the effectiveness of the DTD model and verify some important dynamic system properties. PMID:27959903
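
    The learning filter at the heart of such models has a simple recursive form, shown here in a generic parameterization (the paper's exact equations, and its risk-attitude update, are richer than this sketch):

```python
def update_perceived_time(perceived_today, experienced_today, theta=0.3):
    """Exponential-smoothing filter: tomorrow's perceived travel time blends
    today's experienced time with today's perception (0 < theta <= 1)."""
    return theta * experienced_today + (1 - theta) * perceived_today

# Day-to-day evolution of one traveler's perceived time on a single route.
perceived = 30.0
for experienced in [32.0, 35.0, 29.0, 41.0, 30.0]:   # daily experienced times
    perceived = update_perceived_time(perceived, experienced)
```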

  4. Analytical investigation of the dynamics of tethered constellations in Earth orbit, phase 2

    NASA Technical Reports Server (NTRS)

    Lorenzini, Enrico C.

    1987-01-01

    A control law was developed to control the elevator during short-distance maneuvers along the tether of a 4-mass tethered system. This control law (called retarded exponential, or RE) was analyzed parametrically in order to assess which control parameters provide a good dynamic response and a smooth time history of the acceleration on board the elevator. The short-distance maneuver under investigation consists of a slow crawling of the elevator over a distance of 10 m, which represents a typical maneuver for fine-tuning the acceleration level on board the elevator. The contribution of aerodynamic and thermal perturbations to the acceleration levels was also evaluated, and acceleration levels obtained when such perturbations are taken into account were compared to those obtained by neglecting the thermal and aerodynamic forces. In addition, the preparation of a tether simulation questionnaire is illustrated. Analytic solutions to be compared to numerical cases and simulator test cases are also discussed.

  5. At a crossroads: reentry challenges and healthcare needs among homeless female ex-offenders.

    PubMed

    Salem, Benissa E; Nyamathi, Adeline; Idemundia, Faith; Slaughter, Regina; Ames, Masha

    2013-01-01

    The exponential increase in the number of women parolees and probationers in the last decade has made women the most rapidly growing group of offenders in the United States. The purpose of this descriptive, qualitative study is to understand the unique gendered experiences of homeless female ex-offenders in the context of healthcare needs, types of health services sought, and gaps in care, in order to help them achieve a smooth transition after prison release. Focus group qualitative methodology was utilized to engage 14 female ex-offenders enrolled in a residential drug treatment program in Southern California. The findings suggest that homeless female ex-offenders face a myriad of healthcare challenges, knowledge deficits, and barriers to moving forward in life, which necessitates strategies to prevent relapse. These findings support the development of gender-sensitive programs for preventing or reducing drug and alcohol use, recidivism, and sexually transmitted infections among this hard-to-reach population.

  6. Application of a Short Intracellular pH Method to Flow Cytometry for Determining Saccharomyces cerevisiae Vitality ▿

    PubMed Central

    Weigert, Claudia; Steffler, Fabian; Kurz, Tomas; Shellhammer, Thomas H.; Methner, Frank-Jürgen

    2009-01-01

    The measurement of yeast's intracellular pH (ICP) is a proven method for determining yeast vitality. Vitality describes the condition or health of viable cells, as opposed to viability, which distinguishes living from dead cells. In contrast to fluorescence photometric measurements, which show only average ICP values of a population, flow cytometry allows the presentation of an ICP distribution. By examining six repeated propagations with three separate growth phases (lag, exponential, and stationary), the ICP method previously established for photometry was transferred successfully to flow cytometry by using the pH-dependent fluorescent probe 5,6-carboxyfluorescein. The correlation between the two methods was good (r2 = 0.898, n = 18). With both methods it is possible to track the course of the growth phases. Although photometry did not yield significant differences between the exponential and stationary phases (P = 0.433), ICP via flow cytometry did (P = 0.012). Yeast in the exponential phase has a unimodal ICP distribution, reflective of a homogeneous population; however, yeast in the stationary phase displays a broader ICP distribution, and subpopulations could be defined by using the flow cytometry method. In conclusion, flow cytometry yielded specific evidence of the heterogeneity in vitality of a yeast population as measured via ICP. In contrast to photometry, flow cytometry increases information about the yeast population's vitality via a short measurement, which is suitable for routine analysis. PMID:19581482

  7. Investigation of noise in gear transmissions by the method of mathematical smoothing of experiments

    NASA Technical Reports Server (NTRS)

    Sheftel, B. T.; Lipskiy, G. K.; Ananov, P. P.; Chernenko, I. K.

    1973-01-01

    A rotatable central component smoothing method is used to analyze rotating gear noise spectra. A matrix is formulated in which the randomized rows correspond to various tests and the columns to factor values. Canonical analysis of the obtained regression equation permits the calculation of the optimal speed and load at a previously assigned noise level.

  8. A Nonlinear Framework of Delayed Particle Smoothing Method for Vehicle Localization under Non-Gaussian Environment

    PubMed Central

    Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong

    2016-01-01

    In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy, taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted in order to compute the probability density function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated on real-world data collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS significantly improves vehicle state accuracy and outperforms the existing filtering and smoothing methods. PMID:27187405

  9. Adaptation of a cubic smoothing spline algorithm for multi-channel data stitching at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C; Adcock, A; Azevedo, S

    2010-12-28

    Some diagnostics at the National Ignition Facility (NIF), including the Gamma Reaction History (GRH) diagnostic, require multiple channels of data to achieve the required dynamic range. These channels need to be stitched together into a single time series, and they may have non-uniform and redundant time samples. We chose to apply the popular cubic smoothing spline technique to our stitching problem because we needed a general non-parametric method. We adapted one of the algorithms in the literature, by Hutchinson and deHoog, to our needs. The modified algorithm and the resulting code perform a cubic smoothing spline fit to multiple data channels with redundant time samples and missing data points. The data channels can have different, time-varying, zero-mean white noise characteristics. The method we employ automatically determines an optimal smoothing level by minimizing the Generalized Cross Validation (GCV) score. In order to automatically validate the smoothing level selection, the Weighted Sum-Squared Residual (WSSR) and zero-mean tests are performed on the residuals. Further, confidence intervals, both analytical and Monte Carlo, are also calculated. In this paper, we describe the derivation of our cubic smoothing spline algorithm. We outline the algorithm and test it with simulated and experimental data.
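
    Readers who want to experiment with GCV-selected cubic smoothing splines can do so directly in SciPy, whose make_smoothing_spline chooses the smoothing parameter by GCV when lam is omitted. The pooled-channel example below is a simplified sketch of the stitching setting, not the Hutchinson-deHoog adaptation described in the paper; the channel noise levels and inverse-variance weighting are assumptions:

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

# Two overlapping channels of the same signal with different noise levels.
rng = np.random.default_rng(3)
t1, t2 = np.linspace(0.0, 6.0, 200), np.linspace(4.0, 10.0, 200)
y1 = np.sin(t1) + rng.normal(0, 0.05, t1.size)
y2 = np.sin(t2) + rng.normal(0, 0.15, t2.size)       # the noisier channel

# Pool the samples, weight by inverse noise variance, and sort in time.
t = np.concatenate([t1, t2])
y = np.concatenate([y1, y2])
w = np.concatenate([np.full(t1.size, 1 / 0.05**2),
                    np.full(t2.size, 1 / 0.15**2)])
order = np.argsort(t)   # abscissas must be strictly increasing; exactly
                        # coincident samples would first need to be averaged
spline = make_smoothing_spline(t[order], y[order], w=w[order])  # lam chosen by GCV
```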

  10. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2006-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.

  12. Surface smoothness: cartilage biomarkers for knee OA beyond the radiologist

    NASA Astrophysics Data System (ADS)

    Tummala, Sudhakar; Dam, Erik B.

    2010-03-01

    Fully automatic imaging biomarkers may allow quantification of patho-physiological processes that a radiologist would not be able to assess reliably. This can introduce new insight but is problematic to validate due to lack of meaningful ground truth expert measurements. Rather than quantification accuracy, such novel markers must therefore be validated against clinically meaningful end-goals such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method used on tibial and femoral cartilage compartments resulting from an automatic segmentation scheme. These smoothness estimates are validated for their ability to diagnose osteoarthritis and compared to smoothness estimates based on manual expert segmentations and to conventional cartilage volume quantification. We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations, and in addition provide a diagnostic marker superior to the evaluated semi-manual markers.

  13. Interpretation of changes in water level accompanying fault creep and implications for earthquake prediction.

    USGS Publications Warehouse

    Wesson, R.L.

    1981-01-01

    Quantitative calculations of the effect of a fault creep event on observations of changes in water level in wells provide an approach to the tectonic interpretation of these phenomena. For the pore pressure field associated with an idealized creep event having an exponential displacement-versus-time curve, an analytic expression has been obtained in terms of exponential-integral functions. The pore pressure versus time curves for observation points near the fault are pulselike; a sharp pressure increase (or decrease, depending on the direction of propagation) is followed by a more gradual decay to the normal level after the creep event. The time function of the water level change may be obtained by applying the filter - derived by A.G. Johnson and others to determine the influence of atmospheric pressure on water level - to the analytic pore pressure versus time curves. The resulting water level curves show a fairly rapid increase (or decrease) and then a very gradual return to normal. The results of this analytic model do not reproduce the steplike changes in water level observed by Johnson and others. If the procedure used to obtain the water level from the pore pressure is correct, these results suggest that steplike changes in water level are not produced by smoothly propagating creep events but by creep events that propagate discontinuously, by changes in the bulk properties of the region around the well, or by some other mechanism. (Author's abstract)

  14. The Koslowski-Sahlmann representation: quantum configuration space

    NASA Astrophysics Data System (ADS)

    Campiglia, Miguel; Varadarajan, Madhavan

    2014-09-01

    The Koslowski-Sahlmann (KS) representation is a generalization of the representation underlying the discrete spatial geometry of loop quantum gravity (LQG), to accommodate states labelled by smooth spatial geometries. As shown recently, the KS representation supports, in addition to the action of the holonomy and flux operators, the action of operators which are the quantum counterparts of certain connection dependent functions known as ‘background exponentials’. Here we show that the KS representation displays the following properties which are the exact counterparts of LQG ones: (i) the abelian * algebra of SU(2) holonomies and ‘U(1)’ background exponentials can be completed to a C* algebra, (ii) the space of semianalytic SU(2) connections is topologically dense in the spectrum of this algebra, (iii) there exists a measure on this spectrum for which the KS Hilbert space is realized as the space of square integrable functions on the spectrum, (iv) the spectrum admits a characterization as a projective limit of finite numbers of copies of SU(2) and U(1), (v) the algebra underlying the KS representation is constructed from cylindrical functions and their derivations in exactly the same way as the LQG (holonomy-flux) algebra except that the KS cylindrical functions depend on the holonomies and the background exponentials, this extra dependence being responsible for the differences between the KS and LQG algebras. While these results are obtained for compact spaces, they are expected to be of use for the construction of the KS representation in the asymptotically flat case.

  15. Terpolymer polyrotaxanes: a promising supramolecular system as electron-transporting materials for optoelectronics

    NASA Astrophysics Data System (ADS)

    Farcas, A.; Resmerita, A.-M.; Farcas, F.

    2016-12-01

    The optical, electrochemical and surface-morphological properties of three terpolymer polyrotaxanes (1a, 1b and 1c), composed of 2,7-dibromo-9,9-dicyanomethylenefluorene encapsulated into γ-cyclodextrin (γCD) or β- or γ-persilylated cyclodextrin (PS-βCD, PS-γCD) cavities (acceptor) and 4,4'-dibromo-4''-methyltriphenylamine (donor) randomly distributed along 9,9-dioctylfluorene conjugated chains, have been evaluated and compared to those of the reference 1. The role of the encapsulation in the thermal stability, solubility, film-forming ability and transparency was also investigated. The high fluorescence efficiency and almost identical normalized absorbance maxima in solution and in the solid state for 1a, 1b and 1c indicate a lower aggregation tendency. The fluorescence lifetimes (τ) of 1a, 1b and 1c follow a mono-exponential decay with τ = 1.11, 1.03 and 1.14 ns, respectively, whereas for the neat 1 a bi-exponential decay was identified. AFM studies reveal a smoother and more homogeneous surface morphology for the polyrotaxanes than for the reference. The electrochemical data showed that the investigated compounds exhibit n- and p-doping processes. The HOMO/LUMO energy levels of 1a, 1b, 1c and 1, in combination with the work function of anodic ITO glass substrates coated with poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) (-5.2 eV) and of cathodic Ca (-2.8 eV) or Al (-2.2 eV), indicate that the compounds are electrochemically accessible as electron-transporting materials.

  16. The temporal analysis of yeast exponential phase using shotgun proteomics as a fermentation monitoring technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Eric L.; Orsat, Valerie; Shah, Manesh B

    2012-01-01

    Systems biology and bioprocess technology can be better understood by using shotgun proteomics as a monitoring system during fermentation. We demonstrate a shotgun proteomic method to monitor the temporal yeast proteome in the early, middle and late exponential phases. Our study identified a total of 1389 proteins combining all 2D-LC-MS/MS runs. The temporal Saccharomyces cerevisiae proteome was enriched in proteolysis, radical detoxification, translation, one-carbon metabolism, glycolysis and the TCA cycle. Heat shock proteins and proteins associated with the oxidative stress response were found throughout the exponential phase. The most abundant proteins observed were translation elongation factors, ribosomal proteins, chaperones and glycolytic enzymes. The high abundance of the H-protein of the glycine decarboxylase complex (Gcv3p) indicated the availability of glycine in the environment. We observed differentially expressed proteins, and the proteins induced at mid-exponential phase were involved in ribosome biogenesis, mitochondrial DNA binding/replication and transcriptional activation. Induction of tryptophan synthase (Trp5p) indicated the abundance of tryptophan during the fermentation. As fermentation progressed toward the late exponential phase, a decrease in cell proliferation was implied by the repression of ribosomal proteins, transcription coactivators, methionine aminopeptidase and translation-associated proteins.

  17. Microscopic morphology evolution during ion beam smoothing of Zerodur® surfaces.

    PubMed

    Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin

    2014-01-13

    Ion sputtering of Zerodur material often results in the formation of nanoscale microstructures on the surface, which seriously degrade optical surface quality. In this paper, we describe the microscopic morphology evolution during ion sputtering of Zerodur surfaces through experimental research and theoretical analysis, which show that preferential sputtering, together with curvature-dependent sputtering, overcomes ion-induced smoothing mechanisms, leading to the formation of granular nanopatterns and the coarsening of the surface. Consequently, we propose a new method for ion beam smoothing (IBS) of Zerodur optics assisted by deterministic ion beam material adding (IBA) technology. With this method, Zerodur optics with surface roughness down to the 0.15 nm root mean square (RMS) level were obtained in our experimental investigation, demonstrating the feasibility of the proposed method.

  18. Robust Bayesian Fluorescence Lifetime Estimation, Decay Model Selection and Instrument Response Determination for Low-Intensity FLIM Imaging

    PubMed Central

    Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.

    2016-01-01

    We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modelling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models, such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
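
    For comparison with the decay models discussed above, a plain least-squares fit of a bi-exponential decay is sketched below. This is a baseline, not the Bayesian estimator developed in the paper, and the time axis, amplitudes and lifetimes are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Bi-exponential decay; setting a2 = 0 recovers the mono-exponential case."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0.0, 10.0, 256)         # hypothetical TCSPC time bins (ns)
rng = np.random.default_rng(4)
counts = biexp(t, 800, 0.8, 300, 3.0) + rng.normal(0, 10, t.size)

popt, pcov = curve_fit(biexp, t, counts, p0=[500, 0.5, 500, 2.0])
a1, tau1, a2, tau2 = popt               # recovered decay parameters
```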

  19. Determination of bulk and interface density of states in metal oxide semiconductor thin-film transistors by using capacitance-voltage characteristics

    NASA Astrophysics Data System (ADS)

    Wei, Xixiong; Deng, Wanling; Fang, Jielin; Ma, Xiaoyu; Huang, Junkai

    2017-10-01

    A physics-based, straightforward extraction technique for the interface and bulk density of states in metal oxide semiconductor thin-film transistors (TFTs) is proposed, based on capacitance-voltage (C-V) characteristics. The distribution of interface trap density with energy is extracted from analysis of the C-V characteristics, and the bulk trap density is then determined using the obtained interface state distribution. With this method it is found that, for the interface trap density, the deep-state density near mid-gap is approximately constant while the tail-state density increases exponentially with energy; the bulk trap density is a superposition of exponential deep states and exponential tail states. The validity of the extraction is verified by comparison with measured current-voltage (I-V) characteristics and with simulation results from a technology computer-aided design (TCAD) model. The extraction requires no numerical iteration and is simple, fast and accurate, making it very useful for TFT device characterization.

  20. A simulation based approach to optimize inventory replenishment with RAND algorithm: An extended study of corrected demand using Holt's method for textile industry

    NASA Astrophysics Data System (ADS)

    Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam

    2016-07-01

    Inventory has been a major concern in supply chains, and numerous studies have lately been done on inventory control, bringing forth a number of methods that efficiently manage inventory and related overheads by reducing the cost of replenishment. This research aims to provide a better replenishment policy for the multi-product, single-supplier situation of chemical raw materials in textile industries in Bangladesh, where industries are assumed to currently pursue individual replenishment. The purpose is to find the optimum ideal cycle time and the individual replenishment cycle time of each product that yield the lowest annual holding and ordering cost, and also to find the optimum ordering quantity. An indirect grouping strategy is used, as it is suggested that indirect grouping outperforms direct grouping when the major cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is exercised for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for each individual item, so the replenishment cycle time for each product is T×ki. Firstly, a data-based comparison between the currently prevailing (individual) process and RAND, using actual demands, shows a 49% improvement in the total cost of replenishment. Secondly, discrepancies in demand are corrected by using Holt's method; however, demand can only be forecasted one or two months into the future because of the demand pattern of the industry under consideration. Evidently, application of RAND with the corrected demand displays even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm and exponential smoothing models.
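
    Holt's method itself, used above for the demand-correction step, reduces to two coupled recursions; the sketch below uses illustrative smoothing constants and demand figures rather than those fitted in the study:

```python
def holt_forecast(y, alpha=0.5, beta=0.3, horizon=2):
    """Holt's linear (double) exponential smoothing with h-step-ahead forecasts."""
    level, trend = y[0], y[1] - y[0]                   # simple initialization
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

demand = [120, 135, 128, 150, 162, 158, 171]   # hypothetical monthly demand
print(holt_forecast(demand))                   # one- and two-month-ahead forecasts
```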

  1. Control Strategies for Smoothing of Output Power of Wind Energy Conversion Systems

    NASA Astrophysics Data System (ADS)

    Pratap, Alok; Urasaki, Naomitsu; Senju, Tomonobu

    2013-10-01

    This article presents a control method for smoothing the output power of a wind energy conversion system (WECS) with a permanent magnet synchronous generator (PMSG), using the inertia of the wind turbine and pitch control. The WECS used in this article adopts an AC-DC-AC converter system. The generator-side converter controls the torque of the PMSG, while the grid-side inverter controls the DC-link and grid voltages. For the generator-side converter, the torque command is determined by fuzzy logic whose inputs are the operating point of the rotational speed of the PMSG and the difference between the wind turbine torque and the generator torque. By means of the proposed method, the generator torque is smoothed, and the kinetic energy stored in the inertia of the wind turbine can be utilized to smooth the output power fluctuations of the PMSG. In addition, the wind turbine's shaft stress is mitigated compared to conventional maximum power point tracking control. The effectiveness of the proposed method is verified by numerical simulations.

  2. Error detection and data smoothing based on local procedures

    NASA Technical Reports Server (NTRS)

    Guerra, V. M.

    1974-01-01

    An algorithm is presented which is able to locate isolated bad points and correct them without contaminating the rest of the good data. This work has been greatly influenced and motivated by what is currently done in the manual loft. It is not within the scope of this work to handle the small random errors characteristic of a noisy system, and it is therefore assumed that the bad points are isolated and relatively few when compared with the total number of points. Motivated by the desire to imitate the loftsman, a visual experiment was conducted to determine what is considered smooth data. This criterion is used to determine how much the data should be smoothed and to prove that this method produces such data. The method ultimately converges to a set of points that lies on the polynomial interpolating the first and last points; however, convergence to such a set is definitely not the purpose of our algorithm. The proof of convergence is necessary to demonstrate that oscillation does not take place and that in a finite number of steps the method produces a set as smooth as desired.

  3. Inverse analysis and regularisation in conditional source-term estimation modelling

    NASA Astrophysics Data System (ADS)

    Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.

    2014-05-01

    Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter, and the uncertainty in the solution is determined from the width of the credible intervals. The width of the credible intervals is significantly reduced by the inclusion of a smoothing prior, and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step, and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals do not depend on a previous solution and better predict characteristics at higher mixture fraction values. These characteristics make spatial smoothing a promising alternative for regularising the CSE inversion process.
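
    Both regularisation variants reduce to augmented least-squares problems. The generic sketch below is not the CSE code (the operator sizes and test data are arbitrary); it shows zeroth-order Tikhonov regularisation via an identity operator and first-order spatial smoothing via a first-difference operator:

```python
import numpy as np

def tikhonov_solve(A, b, lam, L=None):
    """Minimize ||A x - b||^2 + lam^2 ||L x||^2 by stacking the two terms."""
    n = A.shape[1]
    if L is None:
        L = np.eye(n)                      # zeroth-order Tikhonov
    A_aug = np.vstack([A, lam * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# First-difference operator on an n-point mixture-fraction grid.
n = 20
D = np.diff(np.eye(n), axis=0)             # (n-1) x n first-difference matrix

rng = np.random.default_rng(5)
A = rng.random((30, n))
b = A @ np.ones(n) + 0.01 * rng.standard_normal(30)
x_zeroth = tikhonov_solve(A, b, lam=0.1)        # zeroth-order
x_first = tikhonov_solve(A, b, lam=0.1, L=D)    # first-order (spatial smoothing)
```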

  4. A Lyapunov and Sacker–Sell spectral stability theory for one-step methods

    DOE PAGES

    Steyer, Andrew J.; Van Vleck, Erik S.

    2018-04-13

    Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE is justified using real-valued, scalar, nonautonomous linear test equations. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and to establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed from the theoretical results.

  6. Hermite WENO limiting for multi-moment finite-volume methods using the ADER-DT time discretization for 1-D systems of conservation laws

    DOE PAGES

    Norman, Matthew R.

    2014-11-24

    New Hermite Weighted Essentially Non-Oscillatory (HWENO) interpolants are developed and investigated within the Multi-Moment Finite-Volume (MMFV) formulation using the ADER-DT time discretization. Whereas traditional WENO methods interpolate pointwise, function-based WENO methods explicitly form a non-oscillatory, high-order polynomial over the cell in question. This study chooses a function-based approach and details how fast convergence to optimal weights for smooth flow is ensured. Methods of sixth-, eighth-, and tenth-order accuracy are developed and compared against traditional single-moment WENO methods of fifth-, seventh-, ninth-, and eleventh-order accuracy, which are more familiar from the literature. The new HWENO methods improve upon existing HWENO methods (1) by giving a better resolution of unreinforced contact discontinuities and (2) by needing only a single HWENO polynomial to update both the cell mean value and the cell mean derivative. Test cases to validate and assess these methods include 1-D linear transport, the 1-D inviscid Burgers' equation, and the 1-D inviscid Euler equations. Smooth and non-smooth flows are used for evaluation. The HWENO methods performed better than comparable literature-standard WENO methods for all regimes of discontinuity and smoothness in all tests herein. They exhibit improved optimal accuracy due to the use of derivatives, and they collapse to solutions similar to typical WENO methods when limiting is required. The study concludes that the new HWENO methods are robust and effective when used in the ADER-DT MMFV framework; these results are intended to demonstrate capability rather than exhaust all possible implementations.

  7. Improved result on stability analysis of discrete stochastic neural networks with time delay

    NASA Astrophysics Data System (ADS)

    Wu, Zhengguang; Su, Hongye; Chu, Jian; Zhou, Wuneng

    2009-04-01

    This Letter investigates the problem of exponential stability for discrete stochastic time-delay neural networks. By defining a novel Lyapunov functional, an improved delay-dependent exponential stability criterion is established using the linear matrix inequality (LMI) approach. Meanwhile, the computational complexity of the newly established stability condition is reduced because fewer variables are involved. A numerical example is given to illustrate the effectiveness and the benefits of the proposed method.

  8. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

    The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many settings and has a simple statistical form; its characteristic feature is a constant hazard rate. The exponential distribution is a special case of the Weibull distribution (with shape parameter equal to one). In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function, then the posterior function, and derive point and interval estimates as well as the hazard function and reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
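
    A minimal sketch of the Bayesian machinery for a single exponential failure mode, assuming the common non-informative (Jeffreys) prior p(λ) ∝ 1/λ, under which the posterior for the rate given n failures with total observed time T is Gamma(n, T). The lifetimes below are hypothetical; the paper's competing-risks model would treat each independent cause analogously on its cause-specific failure data.

    ```python
    # Sketch: Bayesian estimation for one exponential failure mode.
    # With prior p(lambda) ~ 1/lambda, the posterior given n failures and
    # total observed time T is Gamma(n, rate=T). Data are hypothetical.
    import numpy as np
    from scipy import stats

    times = np.array([120., 340., 95., 410., 260.])   # hypothetical lifetimes (hours)
    n, T = len(times), times.sum()

    posterior = stats.gamma(a=n, scale=1.0 / T)       # Gamma(n, T) posterior for lambda
    lam_hat = posterior.mean()                        # point estimate of the constant hazard
    lo, hi = posterior.ppf([0.025, 0.975])            # 95% credible interval

    t = 200.0
    reliability = np.exp(-lam_hat * t)                # R(t) = exp(-lambda * t)
    print(f"lambda: {lam_hat:.4f}  95% CI: ({lo:.4f}, {hi:.4f})  R({t:.0f}h): {reliability:.3f}")
    ```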

  9. Enhancement of surface definition and gridding in the EAGLE code

    NASA Technical Reports Server (NTRS)

    Thompson, Joe F.

    1991-01-01

    Algorithms for smoothing of curves and surfaces for the EAGLE grid generation program are presented. The method uses an existing automated technique which detects undesirable geometric characteristics by using a local fairness criterion. The geometry entity is then smoothed by repeated removal and insertion of spline knots in the vicinity of the geometric irregularity. The smoothing algorithm is formulated for use with curves in Beta spline form and tensor product B-spline surfaces.

  10. GEE-Smoothing Spline in Semiparametric Model with Correlated Nominal Data

    NASA Astrophysics Data System (ADS)

    Ibrahim, Noor Akma; Suliadi

    2010-11-01

    In this paper we propose a GEE-smoothing spline for the estimation of semiparametric models with correlated nominal data. The method can be seen as an extension of the parametric generalized estimating equation to semiparametric models. The nonparametric component is estimated using a smoothing spline, specifically the natural cubic spline. We use a profile algorithm in the estimation of both parametric and nonparametric components. The properties of the estimators are evaluated using simulation studies.

  11. Partitioning of monomethylmercury between freshwater algae and water.

    PubMed

    Miles, C J; Moye, H A; Phlips, E J; Sargent, B

    2001-11-01

    Phytoplankton-water monomethylmercury (MeHg) partition constants (KpI) have been determined in the laboratory for two green algae, Selenastrum capricornutum and Cosmarium botrytis, the blue-green alga Schizothrix calcicola, and the diatom Thalassiosira spp., algal species that are commonly found in natural surface waters. Two methods were used to determine KpI: the Freundlich isotherm method and the flow-through/dialysis bag method. Both methods yielded KpI values of about 10^6.6 for S. capricornutum and were not significantly different. The KpI values for the four algae studied were similar except for Schizothrix, which was significantly lower than S. capricornutum. The KpI for MeHg and S. capricornutum (exponential growth) was not significantly different in systems with predominantly MeHgOH or MeHgCl species. This is consistent with other studies showing that metal speciation controls uptake kinetics, while reactivity with intracellular components controls steady-state concentrations. Partition constants determined with exponential- and stationary-phase S. capricornutum cells under the same conditions were not significantly different, while the partition constant for exponential-phase, phosphorus-limited cells was significantly lower, suggesting that P-limitation alters the ecophysiology of S. capricornutum sufficiently to impact partitioning, which may then ultimately affect mercury levels in higher trophic species.

  12. Exponential blocking-temperature distribution in ferritin extracted from magnetization measurements

    NASA Astrophysics Data System (ADS)

    Lee, T. H.; Choi, K.-Y.; Kim, G.-H.; Suh, B. J.; Jang, Z. H.

    2014-11-01

    We developed a direct method to extract the zero-field zero-temperature anisotropy energy barrier distribution of magnetic particles in the form of a blocking-temperature distribution. The key idea is to modify measurement procedures slightly to make nonequilibrium magnetization calculations (including the time evolution of magnetization) easier. We applied this method to the biomagnetic molecule ferritin and successfully reproduced field-cool magnetization by using the extracted distribution. We find that the resulting distribution is more like an exponential type and that the distribution cannot be correlated simply to the widely known log-normal particle-size distribution. The method also allows us to determine the values of the zero-temperature coercivity and Bloch coefficient, which are in good agreement with those determined from other techniques.

  13. New exponential stability criteria for stochastic BAM neural networks with impulses

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Samidurai, R.; Anthoni, S. M.

    2010-10-01

    In this paper, we study the global exponential stability of time-delayed stochastic bidirectional associative memory neural networks with impulses and Markovian jumping parameters. A generalized activation function is considered, and traditional assumptions on the boundedness, monotonicity and differentiability of activation functions are removed. We obtain a new set of sufficient conditions in terms of linear matrix inequalities, which ensures the global exponential stability of the unique equilibrium point for stochastic BAM neural networks with impulses. The Lyapunov function method with the Itô differential rule is employed to achieve the required result. Moreover, a numerical example is provided to show that the proposed result improves the allowable upper bound of delays over some existing results in the literature.

  14. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  15. Accurate interlaminar stress recovery from finite element analysis

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Riggs, H. Ronald

    1994-01-01

    The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing strains and first strain gradients of superior accuracy. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply-supported rectangular plate under a doubly sinusoidal load; it has an exact analytic solution which serves as a measure of the goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.

  16. Analysis of the incomplete Galerkin method for modelling of smoothly-irregular transition between planar waveguides

    NASA Astrophysics Data System (ADS)

    Divakov, D.; Sevastianov, L.; Nikolaev, N.

    2017-01-01

    The paper deals with a numerical solution of the problem of waveguide propagation of polarized light in a smoothly-irregular transition between closed regular waveguides using the incomplete Galerkin method. This method consists in replacing variables to reduce the Helmholtz equation to a system of differential equations by the Kantorovich method, and in formulating the boundary conditions for the resulting system. The boundary problem for the ODE system is formulated in the computer algebra system Maple. The stated boundary problem is solved using Maple's libraries of numerical methods.

  17. Prediction of Unsteady Aerodynamic Coefficients at High Angles of Attack

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Murphy, Patrick C.; Klein, Vladislav; Brandon, Jay M.

    2001-01-01

    The nonlinear indicial response method is used to model the unsteady aerodynamic coefficients in the low-speed longitudinal oscillatory wind tunnel test data of the 0.1-scale model of the F-16XL aircraft. Exponential functions are used to approximate the deficiency function in the indicial response. Using one set of oscillatory wind tunnel data and a parameter identification method, the unknown parameters in the exponential functions are estimated. A genetic algorithm is used as the least-squares minimization algorithm. The assumed model structures and parameter estimates are validated by comparing the predictions with other sets of available oscillatory wind tunnel test data.
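
    As an illustration of the fitting step, the sketch below uses SciPy's differential_evolution, an evolutionary global optimizer in the same family as the genetic algorithm described above, to least-squares fit a deficiency function modeled as a sum of two exponentials. The data, model order and parameter bounds are hypothetical.

    ```python
    # Sketch: least-squares fit of a deficiency function
    # F(t) = a1*exp(-b1*t) + a2*exp(-b2*t) with an evolutionary optimizer.
    import numpy as np
    from scipy.optimize import differential_evolution

    t = np.linspace(0.0, 5.0, 100)
    observed = 0.6 * np.exp(-1.5 * t) + 0.3 * np.exp(-0.4 * t)   # synthetic response
    observed += np.random.default_rng(0).normal(0.0, 0.01, t.size)

    def sse(p):
        a1, b1, a2, b2 = p
        model = a1 * np.exp(-b1 * t) + a2 * np.exp(-b2 * t)
        return np.sum((observed - model) ** 2)                   # least-squares objective

    bounds = [(0, 2), (0, 5), (0, 2), (0, 5)]                    # hypothetical bounds
    result = differential_evolution(sse, bounds, seed=1)
    print("estimated (a1, b1, a2, b2):", np.round(result.x, 3))
    ```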

  18. Using neural networks to represent potential surfaces as sums of products.

    PubMed

    Manzhos, Sergei; Carrington, Tucker

    2006-11-21

    By using exponential activation functions with a neural network (NN) method we show that it is possible to fit potentials to a sum-of-products form. The sum-of-products form is desirable because it reduces the cost of doing the quadratures required for quantum dynamics calculations. It also greatly facilitates the use of the multiconfiguration time-dependent Hartree method. Unlike the potfit product representation algorithm, the new NN approach does not require using a grid of points. It also produces sum-of-products potentials with fewer terms. As the number of dimensions is increased, we expect the advantages of the exponential NN idea to become more significant.
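
    The identity behind this approach can be checked in a few lines: the exponential of a weighted sum factorizes into a product of one-dimensional factors, so a single hidden layer of exponential units is automatically in sum-of-products form. The weights below are arbitrary illustrative values, not a fitted potential.

    ```python
    # Numeric check: exp(w.x + b) = e^b * prod_j exp(w_j * x_j), so
    # V(x) = sum_k c_k * exp(w_k . x + b_k) is a sum of products of 1-D functions.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)                    # a point in a 3-D configuration space
    w, b, c = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=4)

    nn_form = np.sum(c * np.exp(w @ x + b))                            # NN evaluation
    sop_form = np.sum(c * np.exp(b) * np.prod(np.exp(w * x), axis=1))  # sum of products
    assert np.isclose(nn_form, sop_form)
    print(nn_form, sop_form)
    ```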

  19. A nonperturbative light-front coupled-cluster method

    NASA Astrophysics Data System (ADS)

    Hiller, J. R.

    2012-10-01

    The nonperturbative Hamiltonian eigenvalue problem for bound states of a quantum field theory is formulated in terms of Dirac's light-front coordinates and then approximated by the exponential-operator technique of the many-body coupled-cluster method. This approximation eliminates any need for the usual approximation of Fock-space truncation. Instead, the exponentiated operator is truncated, and the terms retained are determined by a set of nonlinear integral equations. These equations are solved simultaneously with an effective eigenvalue problem in the valence sector, where the number of constituents is small. Matrix elements can be calculated, with extensions of techniques from standard coupled-cluster theory, to obtain form factors and other observables.

  20. Characterization of x-ray framing cameras for the National Ignition Facility using single photon pulse height analysis.

    PubMed

    Holder, J P; Benedetti, L R; Bradley, D K

    2016-11-01

    Single hit pulse height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard-DC calibration methods follow the expected exponential distribution at the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.
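
    A hedged sketch of the distributional claim: a gated pulse-height sample approximated as a weighted sum of two exponentials, fitted by maximum likelihood. The data are synthetic stand-ins for measured single-hit pulse heights, and the two-component model order is an assumption.

    ```python
    # Sketch: fit p(h) = w*Expon(s1) + (1-w)*Expon(s2) by maximum likelihood.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import expon

    rng = np.random.default_rng(0)
    heights = np.concatenate([expon(scale=1.0).rvs(700, random_state=rng),
                              expon(scale=3.0).rvs(300, random_state=rng)])

    def nll(p):
        logit_w, log_s1, log_s2 = p
        w = 1.0 / (1.0 + np.exp(-logit_w))        # keep the weight in (0, 1)
        s1, s2 = np.exp(log_s1), np.exp(log_s2)   # keep the scales positive
        pdf = w * expon(scale=s1).pdf(heights) + (1.0 - w) * expon(scale=s2).pdf(heights)
        return -np.sum(np.log(pdf))               # negative log-likelihood

    fit = minimize(nll, x0=[0.0, 0.0, 1.0], method="Nelder-Mead")
    w_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
    print("weight:", round(w_hat, 3), "scales:", np.exp(fit.x[1:]).round(3))
    ```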

  1. A strategy to couple the material point method (MPM) and smoothed particle hydrodynamics (SPH) computational techniques

    NASA Astrophysics Data System (ADS)

    Raymond, Samuel J.; Jones, Bruce; Williams, John R.

    2018-01-01

    A strategy is introduced to allow coupling of the material point method (MPM) and smoothed particle hydrodynamics (SPH) for numerical simulations. This new strategy partitions the domain into SPH and MPM regions; particles carry all state variables, and as such no special treatment is required for the transition between regions. The aim of this work is to derive and validate the coupling methodology between MPM and SPH. Such coupling allows general boundary conditions to be used in an SPH simulation without further augmentation. Additionally, SPH is a purely particle-based method, whereas MPM combines particles with a mesh. The coupling therefore also permits a smooth transition from particle methods to mesh methods, where further coupling to mesh methods could in future provide an effective far-field boundary treatment for the SPH method. The coupling technique is introduced and described alongside a number of simulations in 1D and 2D to validate and contextualize the potential of using these two methods in a single simulation. The strategy shown here is capable of fully coupling the two methods without any complicated algorithms to transfer information from one method to another.

  2. A robust method of thin plate spline and its application to DEM construction

    NASA Astrophysics Data System (ADS)

    Chen, Chuanfa; Li, Yanyan

    2012-11-01

    In order to avoid the ill-conditioning problem of the thin plate spline (TPS), the orthogonal least squares (OLS) method was introduced and a modified OLS (MOLS) was developed. The MOLS version of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also easily compute the weights of the knots by back-substitution. For interpolating large numbers of sampling points, we developed a local TPS-M, in which some neighboring sampling points around the point being estimated are selected for computation. Numerical tests indicate that, irrespective of sampling noise level, the average performance of TPS-M compares favorably with that of smoothing TPS. At the same simulation accuracy, the computational time of TPS-M decreases as the number of sampling points increases. The smooth fitting results on lidar-derived noisy data indicate that TPS-M has an obvious smoothing effect, on par with smoothing TPS. The example of constructing a series of large-scale DEMs, located in Shandong province, China, was employed to comparatively analyze the estimation accuracies of the two versions of TPS and the classical interpolation methods, including inverse distance weighting (IDW), ordinary kriging (OK) and universal kriging with a second-order drift function (UK). Results show that, regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except for smoothing TPS at the finest sampling interval of 20 m and the two versions of kriging at a spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is considered a robust method for DEM construction.
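
    For orientation, the sketch below fits an ordinary smoothing thin plate spline with SciPy's RBF interpolator; it illustrates the baseline TPS that TPS-M improves upon, not the paper's MOLS knot-selection algorithm. The terrain samples are synthetic.

    ```python
    # Sketch: smoothing thin plate spline surface fit on noisy scattered samples.
    import numpy as np
    from scipy.interpolate import Rbf

    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 100, 200), rng.uniform(0, 100, 200)
    z = np.sin(x / 15.0) * np.cos(y / 20.0) + rng.normal(0, 0.05, 200)  # noisy "terrain"

    tps = Rbf(x, y, z, function="thin_plate", smooth=0.1)  # smooth>0 relaxes exact interpolation

    xi, yi = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
    dem = tps(xi, yi)                                      # evaluate the surface on a DEM grid
    print(dem.shape)
    ```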

  3. Equipercentile Test Equating: The Effects of Presmoothing and Postsmoothing on the Magnitude of Sample-Dependent Errors.

    DTIC Science & Technology

    1985-04-01

    AFHRL-TR-84-64 (Air Force). Equipercentile test equating: the effects of presmoothing and … combined or compound presmoother and a presmoothing method based on a particular model of test scores. Of the seven methods of presmoothing the score … unsmoothed distributions, the smoothing of that sequence of differences by the same compound method, and, finally, adding the smoothed differences back …

  4. Smooth Sensor Motion Planning for Robotic Cyber Physical Social Sensing (CPSS)

    PubMed Central

    Tang, Hong; Li, Liangzhi; Xiao, Nanfeng

    2017-01-01

    Although many researchers have begun to study the area of Cyber Physical Social Sensing (CPSS), few have focused on robotic sensors. We successfully utilize robots in CPSS and propose a sensor trajectory planning method in this paper. Trajectory planning is a fundamental problem in mobile robotics; however, traditional methods are not suited to robotic sensors because of their low efficiency, instability, and the non-smooth paths they generate. This paper adopts an optimizing function to generate several intermediate points and regresses these discrete points onto a quintic polynomial, which outputs a smooth trajectory for the robotic sensor. Simulations demonstrate that our approach is robust and efficient, and can be well applied in the CPSS field. PMID:28218649
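
    The regression step lends itself to a compact sketch: hypothetical intermediate waypoints are least-squares fitted by a degree-five polynomial, whose evaluation gives a smooth trajectory component and, by differentiation, a smooth velocity profile. A planar robot would fit x(t) and y(t) in the same way.

    ```python
    # Sketch: regress discrete waypoints onto a quintic polynomial.
    import numpy as np

    t_way = np.linspace(0.0, 1.0, 8)                             # waypoint times
    x_way = np.array([0.0, 0.3, 0.9, 1.8, 2.6, 3.1, 3.4, 3.5])   # hypothetical waypoints

    coeffs = np.polyfit(t_way, x_way, deg=5)                     # least-squares quintic fit
    t = np.linspace(0.0, 1.0, 200)
    x_smooth = np.polyval(coeffs, t)                             # smooth trajectory samples

    velocity = np.polyval(np.polyder(coeffs), t)                 # analytic first derivative
    print(x_smooth[:3], velocity[:3])
    ```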

  5. Vibration Sensor-Based Bearing Fault Diagnosis Using Ellipsoid-ARTMAP and Differential Evolution Algorithms

    PubMed Central

    Liu, Chang; Wang, Guofeng; Xie, Qinglu; Zhang, Yanchao

    2014-01-01

    Effective fault classification of rolling element bearings provides an important basis for ensuring safe operation of rotating machinery. In this paper, a novel vibration sensor-based fault diagnosis method using an Ellipsoid-ARTMAP network (EAM) and a differential evolution (DE) algorithm is proposed. The original features are firstly extracted from vibration signals based on wavelet packet decomposition. Then, a minimum-redundancy maximum-relevancy algorithm is introduced to select the most prominent features so as to decrease feature dimensions. Finally, a DE-based EAM (DE-EAM) classifier is constructed to realize the fault diagnosis. The major characteristic of EAM is that the sample distribution of each category is realized by using a hyper-ellipsoid node and smoothing operation algorithm. Therefore, it can depict the decision boundary of disperse samples accurately and effectively avoid over-fitting phenomena. To optimize EAM network parameters, the DE algorithm is presented and two objectives, including both classification accuracy and nodes number, are simultaneously introduced as the fitness functions. Meanwhile, an exponential criterion is proposed to realize final selection of the optimal parameters. To prove the effectiveness of the proposed method, the vibration signals of four types of rolling element bearings under different loads were collected. Moreover, to improve the robustness of the classifier evaluation, a two-fold cross validation scheme is adopted and the order of feature samples is randomly arranged ten times within each fold. The results show that DE-EAM classifier can recognize the fault categories of the rolling element bearings reliably and accurately. PMID:24936949

  6. Tunable Coarse Graining for Monte Carlo Simulations of Proteins via Smoothed Energy Tables: Direct and Exchange Simulations

    PubMed Central

    2015-01-01

    Many commonly used coarse-grained models for proteins are based on simplified interaction sites and consequently may suffer from significant limitations, such as the inability to properly model protein secondary structure without the addition of restraints. Recent work on a benzene fluid (Lettieri, S.; Zuckerman, D. M. J. Comput. Chem. 2012, 33, 268−275) suggested an alternative strategy of tabulating and smoothing fully atomistic orientation-dependent interactions among rigid molecules or fragments. Here we report our initial efforts to apply this approach to the polar and covalent interactions intrinsic to polypeptides. We divide proteins into nearly rigid fragments, construct distance- and orientation-dependent tables of the atomistic interaction energies between those fragments, and apply potential energy smoothing techniques to those tables. The amount of smoothing can be adjusted to give coarse-grained models that range from the underlying atomistic force field all the way to a bead-like coarse-grained model. For a moderate amount of smoothing, the method is able to preserve about 70–90% of the α-helical structure while providing a factor of 3–10 improvement in sampling per unit computation time (depending on how sampling is measured). For a greater amount of smoothing, multiple folding–unfolding transitions of the peptide were observed, along with a factor of 10–100 improvement in sampling per unit computation time, although the time spent in the unfolded state was increased compared with less-smoothed simulations. For a β hairpin, secondary structure is also preserved, albeit for a narrower range of the smoothing parameter and, consequently, for a more modest improvement in sampling. We have also applied the new method in a “resolution exchange” setting, in which each replica runs a Monte Carlo simulation with a different degree of smoothing. We obtain exchange rates that compare favorably to our previous efforts at resolution exchange (Lyman, E.; Zuckerman, D. M. J. Chem. Theory Comput. 2006, 2, 656−666). PMID:25400525

  7. Statistical properties of links of network: A survey on the shipping lines of Worldwide Marine Transport Network

    NASA Astrophysics Data System (ADS)

    Zhang, Wenjun; Deng, Weibing; Li, Wei

    2018-07-01

    Node properties and the identification of important nodes in networks have been studied extensively in recent decades. In this work, by contrast, we analyze the properties of links, taking the Worldwide Marine Transport Network (WMTN) as an example; that is, statistical properties of the shipping lines of the WMTN are investigated in various respects. Firstly, we study the feature of loops in the shipping lines by defining a line saturability. It is found that line saturability decays exponentially with increasing line length. Secondly, to detect the geographical community structure of shipping lines, the Label Propagation Algorithm with compression of Flow (LPAF) and the Multi-Dimensional Scaling (MDS) method are employed, which yield rather consistent communities. Lastly, to analyze the redundancy of shipping lines across marine companies, multilayer networks are constructed by aggregating the shipping lines of different companies. It is observed that topological quantities, such as the average degree and average clustering coefficient, increase smoothly when marine companies are merged at random (two companies are chosen at random and their shipping lines merged), while the relative entropy decreases when the merging sequence is determined by the Jensen-Shannon distance (the two companies with the smallest Jensen-Shannon distance are merged first). This indicates low redundancy of shipping lines among different marine companies.

  8. Red blood cell use in Switzerland: trends and demographic challenges

    PubMed Central

    Volken, Thomas; Buser, Andreas; Castelli, Damiano; Fontana, Stefano; Frey, Beat M.; Rüsges-Wolter, Ilka; Sarraj, Amira; Sigle, Jörg; Thierbach, Jutta; Weingand, Tina; Taleghani, Behrouz Mansouri

    2018-01-01

    Background Several studies have raised concerns that future demand for blood products may not be met. The ageing of the general population and the fact that a large proportion of blood products is transfused to elderly patients has been identified as an important driver of blood shortages. The aim of this study was to collect, for the first time, nationally representative data regarding blood donors and transfusion recipients in order to predict the future evolution of blood donations and red blood cell (RBC) use in Switzerland between 2014 and 2035. Materials and methods Blood donor and transfusion recipient data, subdivided by the subjects’ age and gender were obtained from Regional Blood Services and nine large, acute-care hospitals in various regions of Switzerland. Generalised additive regression models and time-series models with exponential smoothing were employed to estimate trends of whole blood donations and RBC transfusions. Results The trend models employed suggested that RBC demand could equal supply by 2018 and could eventually cause an increasing shortfall of up to 77,000 RBC units by 2035. Discussion Our study highlights the need for continuous monitoring of trends of blood donations and blood transfusions in order to take proactive measures aimed at preventing blood shortages in Switzerland. Measures should be taken to improve donor retention in order to prevent a further erosion of the blood donor base. PMID:27723455
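
    A minimal sketch of the kind of exponential-smoothing trend model used above, via statsmodels' Holt-Winters implementation on a synthetic monthly RBC-demand series; the study's actual Swiss donor and recipient data are not reproduced here.

    ```python
    # Sketch: Holt-Winters exponential smoothing forecast of monthly RBC demand.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(0)
    months = pd.date_range("2005-01", periods=120, freq="MS")
    demand = (2000 - 2.0 * np.arange(120)                        # slow downward trend
              + 100 * np.sin(2 * np.pi * np.arange(120) / 12)    # annual seasonality
              + rng.normal(0, 40, 120))
    series = pd.Series(demand, index=months)

    model = ExponentialSmoothing(series, trend="add", seasonal="add",
                                 seasonal_periods=12).fit()
    forecast = model.forecast(24)                                # two-year-ahead forecast
    print(forecast.head())
    ```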

  9. Syndromic surveillance using veterinary laboratory data: algorithm combination and customization of alerts.

    PubMed

    Dórea, Fernanda C; McEwen, Beverly J; McNab, W Bruce; Sanchez, Javier; Revie, Crawford W

    2013-01-01

    Syndromic surveillance research has focused on two main themes: the search for data sources that can provide early disease detection; and the development of efficient algorithms that can detect potential outbreak signals. This work combines three algorithms that have demonstrated solid performance in detecting simulated outbreak signals of varying shapes in time series of laboratory submissions counts. These are: the Shewhart control charts designed to detect sudden spikes in counts; the EWMA control charts developed to detect slow increasing outbreaks; and the Holt-Winters exponential smoothing, which can explicitly account for temporal effects in the data stream monitored. A scoring system to detect and report alarms using these algorithms in a complementary way is proposed. The use of multiple algorithms in parallel resulted in increased system sensitivity. Specificity was decreased in simulated data, but the number of false alarms per year when the approach was applied to real data was considered manageable (between 1 and 3 per year for each of ten syndromic groups monitored). The automated implementation of this approach, including a method for on-line filtering of potential outbreak signals is described. The developed system provides high sensitivity for detection of potential outbreak signals while also providing robustness and flexibility in establishing what signals constitute an alarm. This flexibility allows an analyst to customize the system for different syndromes.
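
    A hedged sketch of two of the three detectors run in parallel: a Shewhart chart for sudden spikes and an EWMA chart for slow increases, with an alarm raised when either exceeds its control limit. Baselines, thresholds and the injected outbreak are illustrative, not the paper's tuning.

    ```python
    # Sketch: Shewhart and EWMA detectors combined on a daily count series.
    import numpy as np

    rng = np.random.default_rng(0)
    counts = rng.poisson(10, 120).astype(float)
    counts[100:110] += np.linspace(1, 8, 10)                # injected slow-rising outbreak

    mu, sigma = counts[:90].mean(), counts[:90].std()       # baseline from historical data

    shewhart_alarm = counts > mu + 3 * sigma                # sudden-spike detector

    lam = 0.3                                               # EWMA smoothing constant
    ewma = np.zeros_like(counts); ewma[0] = mu
    for t in range(1, counts.size):
        ewma[t] = lam * counts[t] + (1 - lam) * ewma[t - 1]
    ewma_limit = mu + 3 * sigma * np.sqrt(lam / (2 - lam))  # asymptotic EWMA control limit
    ewma_alarm = ewma > ewma_limit                          # slow-increase detector

    alarms = np.flatnonzero(shewhart_alarm | ewma_alarm)    # either detector triggers
    print("alarm days:", alarms)
    ```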

  10. Role of exponential apparent diffusion coefficient in characterizing breast lesions by 3.0 Tesla diffusion-weighted magnetic resonance imaging

    PubMed Central

    Kothari, Shweta; Singh, Archana; Das, Utpalendu; Sarkar, Diptendra K; Datta, Chhanda; Hazra, Avijit

    2017-01-01

    Objective: To evaluate the role of exponential apparent diffusion coefficient (ADC) as a tool for differentiating benign and malignant breast lesions. Patients and Methods: This prospective observational study included 88 breast lesions in 77 patients (between 18 and 85 years of age) who underwent 3T breast magnetic resonance imaging (MRI) including diffusion-weighted imaging (DWI) using b-values of 0 and 800 s/mm2 before biopsy. Mean exponential ADC and ADC of benign and malignant lesions obtained from DWI were compared. Receiver operating characteristics (ROC) curve analysis was undertaken to identify any cut-off for exponential ADC and ADC to predict malignancy. P value of <0.05 was considered statistically significant. Histopathology was taken as the gold standard. Results: According to histopathology, 65 lesions were malignant and 23 were benign. The mean ADC and exponential ADC values of malignant lesions were 0.9526 ± 0.203 × 10−3 mm2/s and 0.4774 ± 0.071, respectively, and for benign lesions were 1.48 ± 0.4903 × 10−3 mm2/s and 0.317 ± 0.1152, respectively. For both the parameters, differences were highly significant (P < 0.001). Cut-off value of ≤0.0011 mm2/s (P < 0.0001) for ADC provided 92.3% sensitivity and 73.9% specificity, whereas with an exponential ADC cut-off value of >0.4 (P < 0.0001) for malignant lesions, 93.9% sensitivity and 82.6% specificity was obtained. The performance of ADC and exponential ADC in distinguishing benign and malignant breast lesions based on respective cut-offs was comparable (P = 0.109). Conclusion: Exponential ADC can be used as a quantitative adjunct tool for characterizing breast lesions with comparable sensitivity and specificity as that of ADC. PMID:28744085
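
    The exponential ADC map is conventionally computed as exp(−b·ADC) using the high b-value. As a consistency check, plugging the mean ADCs reported above into that formula with b = 800 s/mm² roughly reproduces the reported mean exponential ADC values:

    ```python
    # Sketch: exponential ADC as eADC = exp(-b * ADC) with the high b-value.
    import numpy as np

    b = 800.0                              # s/mm^2
    adc_malignant = 0.9526e-3              # mm^2/s (mean, from the abstract)
    adc_benign = 1.48e-3

    eadc_m = np.exp(-b * adc_malignant)    # ~0.47 vs. reported 0.4774
    eadc_b = np.exp(-b * adc_benign)       # ~0.31 vs. reported 0.317
    print(eadc_m, eadc_b)

    # Classification rule using the abstract's exponential ADC cut-off:
    print("malignant-suspicious:", eadc_m > 0.4)
    ```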

  11. Mitigating Short-Term Variations of Photovoltaic Generation Using Energy Storage with VOLTTRON

    NASA Astrophysics Data System (ADS)

    Morrissey, Kevin

    A smart-building communications system performs smoothing on photovoltaic (PV) power generation using a battery energy storage system (BESS). The system runs on VOLTTRON™, a multi-agent Python-based software platform dedicated to power systems. The VOLTTRON™ system designed for this project runs synergistically with the larger University of Washington VOLTTRON™ environment, which operates UW device communications and databases as well as performing real-time operations for research. One such research algorithm that operates simultaneously with this PV smoothing system is an energy cost optimization system, which optimizes net demand and associated cost throughout a day using the BESS. The PV smoothing system features an active low-pass filter with an adaptable time constant, as well as adjustable limits on the output power and accumulated battery energy of the BESS contribution. The system was analyzed using 26 days of PV generation at 1-second resolution. PV smoothing was studied with unconstrained BESS contribution as well as under a broad range of BESS constraints analogous to variable-sized storage. It was determined that a large inverter output power was more important for PV smoothing than a large battery energy capacity. Two methods of selecting the time constant in real time, static and adaptive, were studied for their impact on system performance. It was found that both provide a high level of PV smoothing performance, within 8% of the ideal case in which the best time constant is known ahead of time. The system was run in real time using VOLTTRON™ with BESS limits of 5 kW/6.5 kWh and an adaptive update period of 7 days. The system behaved as expected given the BESS parameters and time constant selection methods, smoothing the PV generation and updating the time constant periodically using the adaptive selection method.
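
    A minimal sketch of the filtering idea, assuming a first-order (exponential) low-pass discretization with time constant τ, the battery supplying the difference between raw and smoothed PV power up to the 5 kW limit quoted above. The PV trace and discretization details are illustrative.

    ```python
    # Sketch: exponential low-pass smoothing of PV output with a clipped BESS.
    import numpy as np

    dt, tau = 1.0, 60.0                          # 1 s samples, 60 s time constant
    alpha = dt / (tau + dt)                      # discrete low-pass coefficient

    rng = np.random.default_rng(0)
    pv = 20.0 + np.cumsum(rng.normal(0, 0.5, 600)).clip(-10, 10)  # noisy PV power (kW)

    smoothed = np.empty_like(pv); smoothed[0] = pv[0]
    for t in range(1, pv.size):
        smoothed[t] = smoothed[t - 1] + alpha * (pv[t] - smoothed[t - 1])

    battery = np.clip(smoothed - pv, -5.0, 5.0)  # BESS absorbs/supplies the difference
    grid = pv + battery                          # power delivered after smoothing
    print(grid[:5])
    ```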

  12. Exponential asymptotics of homoclinic snaking

    NASA Astrophysics Data System (ADS)

    Dean, A. D.; Matthews, P. C.; Cox, S. M.; King, J. R.

    2011-12-01

    We study homoclinic snaking in the cubic-quintic Swift-Hohenberg equation (SHE) close to the onset of a subcritical pattern-forming instability. Application of the usual multiple-scales method produces a leading-order stationary front solution, connecting the trivial solution to the patterned state. A localized pattern may therefore be constructed by matching between two distant fronts placed back-to-back. However, the asymptotic expansion of the front is divergent, and hence should be truncated. By truncating optimally, such that the resultant remainder is exponentially small, an exponentially small parameter range is derived within which stationary fronts exist. This is shown to be a direct result of the 'locking' between the phase of the underlying pattern and its slowly varying envelope. The locking mechanism remains unobservable at any algebraic order, and can only be derived by explicitly considering beyond-all-orders effects in the tail of the asymptotic expansion, following the method of Kozyreff and Chapman as applied to the quadratic-cubic SHE (Chapman and Kozyreff 2009 Physica D 238 319-54, Kozyreff and Chapman 2006 Phys. Rev. Lett. 97 44502). Exponentially small, but exponentially growing, contributions appear in the tail of the expansion, which must be included when constructing localized patterns in order to reproduce the full snaking diagram. Implicit within the bifurcation equations is an analytical formula for the width of the snaking region. Due to the linear nature of the beyond-all-orders calculation, the bifurcation equations contain an analytically indeterminable constant, estimated in the previous work by Chapman and Kozyreff using a best fit approximation. A more accurate estimate of the equivalent constant in the cubic-quintic case is calculated from the iteration of a recurrence relation, and the subsequent analytical bifurcation diagram compared with numerical simulations, with good agreement.

  13. Feedback control policies employed by people using intracortical brain-computer interfaces.

    PubMed

    Willett, Francis R; Pandarinath, Chethan; Jarosiewicz, Beata; Murphy, Brian A; Memberg, William D; Blabe, Christine H; Saab, Jad; Walter, Benjamin L; Sweet, Jennifer A; Miller, Jonathan P; Henderson, Jaimie M; Shenoy, Krishna V; Simeral, John D; Hochberg, Leigh R; Kirsch, Robert F; Ajiboye, A Bolu

    2017-02-01

    When using an intracortical BCI (iBCI), users modulate their neural population activity to move an effector towards a target, stop accurately, and correct for movement errors. We call the rules that govern this modulation a 'feedback control policy'. A better understanding of these policies may inform the design of higher-performing neural decoders. We studied how three participants in the BrainGate2 pilot clinical trial used an iBCI to control a cursor in a 2D target acquisition task. Participants used a velocity decoder with exponential smoothing dynamics. Through offline analyses, we characterized the users' feedback control policies by modeling their neural activity as a function of cursor state and target position. We also tested whether users could adapt their policy to different decoder dynamics by varying the gain (speed scaling) and temporal smoothing parameters of the iBCI. We demonstrate that control policy assumptions made in previous studies do not fully describe the policies of our participants. To account for these discrepancies, we propose a new model that captures (1) how the user's neural population activity gradually declines as the cursor approaches the target from afar, then decreases more sharply as the cursor comes into contact with the target, (2) how the user makes constant feedback corrections even when the cursor is on top of the target, and (3) how the user actively accounts for the cursor's current velocity to avoid overshooting the target. Further, we show that users can adapt their control policy to decoder dynamics by attenuating neural modulation when the cursor gain is high and by damping the cursor velocity more strongly when the smoothing dynamics are high. Our control policy model may help to build better decoders, understand how neural activity varies during active iBCI control, and produce better simulations of closed-loop iBCI movements.
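
    A hedged sketch of a velocity decoder with exponential smoothing dynamics, v[t] = β·v[t−1] + (1−β)·g·u[t], where u[t] is the instantaneous neural velocity decode, β the smoothing parameter and g the gain; these are the two quantities varied in the experiments, though the exact BrainGate2 parameterization is not reproduced here.

    ```python
    # Sketch: exponentially smoothed velocity decoder driving a 2D cursor.
    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.normal(0, 1, size=(200, 2))          # stand-in decoded velocity commands

    beta, gain, dt = 0.9, 1.5, 0.02              # smoothing, speed scaling, time step
    v = np.zeros((200, 2)); pos = np.zeros((200, 2))
    for t in range(1, 200):
        v[t] = beta * v[t - 1] + (1 - beta) * gain * u[t]   # smoothed velocity
        pos[t] = pos[t - 1] + dt * v[t]                     # integrate to cursor position
    print(pos[-1])
    ```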

  14. Feedback control policies employed by people using intracortical brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Willett, Francis R.; Pandarinath, Chethan; Jarosiewicz, Beata; Murphy, Brian A.; Memberg, William D.; Blabe, Christine H.; Saab, Jad; Walter, Benjamin L.; Sweet, Jennifer A.; Miller, Jonathan P.; Henderson, Jaimie M.; Shenoy, Krishna V.; Simeral, John D.; Hochberg, Leigh R.; Kirsch, Robert F.; Bolu Ajiboye, A.

    2017-02-01

    Objective. When using an intracortical BCI (iBCI), users modulate their neural population activity to move an effector towards a target, stop accurately, and correct for movement errors. We call the rules that govern this modulation a ‘feedback control policy’. A better understanding of these policies may inform the design of higher-performing neural decoders. Approach. We studied how three participants in the BrainGate2 pilot clinical trial used an iBCI to control a cursor in a 2D target acquisition task. Participants used a velocity decoder with exponential smoothing dynamics. Through offline analyses, we characterized the users’ feedback control policies by modeling their neural activity as a function of cursor state and target position. We also tested whether users could adapt their policy to different decoder dynamics by varying the gain (speed scaling) and temporal smoothing parameters of the iBCI. Main results. We demonstrate that control policy assumptions made in previous studies do not fully describe the policies of our participants. To account for these discrepancies, we propose a new model that captures (1) how the user’s neural population activity gradually declines as the cursor approaches the target from afar, then decreases more sharply as the cursor comes into contact with the target, (2) how the user makes constant feedback corrections even when the cursor is on top of the target, and (3) how the user actively accounts for the cursor’s current velocity to avoid overshooting the target. Further, we show that users can adapt their control policy to decoder dynamics by attenuating neural modulation when the cursor gain is high and by damping the cursor velocity more strongly when the smoothing dynamics are high. Significance. Our control policy model may help to build better decoders, understand how neural activity varies during active iBCI control, and produce better simulations of closed-loop iBCI movements.

  15. [Hyperspectral Remote Sensing Estimation Models for Pasture Quality].

    PubMed

    Ma, Wei-wei; Gong, Cai-lan; Hu, Yong; Wei, Yong-lin; Li, Long; Liu, Feng-yi; Meng, Peng

    2015-10-01

    Crude protein (CP), crude fat (CFA) and crude fiber (CFI) are key indicators for evaluating the quality and feeding value of pasture. Hence, identification of these biological contents is an essential practice for animal husbandry. As current approaches to pasture quality estimation are time-consuming and costly, and even generate hazardous waste, a real-time and non-destructive method is developed in this study using pasture canopy hyperspectral data. A field campaign was carried out in August 2013 around Qinghai Lake in order to obtain field spectral properties of 19 types of natural pasture using the ASD FieldSpec 3, a field spectrometer that works in the optical region (350-2500 nm) of the electromagnetic spectrum. In addition to the spectral data, pasture samples were collected from the field and examined in the laboratory to measure the relative concentrations of CP (%), CFA (%) and CFI (%). After spectral denoising and smoothing, the relationships of the pasture quality parameters with the reflectance spectrum, the first derivatives of reflectance (FDR), band ratios and wavelet coefficients (WCs) were analyzed. The concentrations of CP, CFA and CFI were found to be closely correlated with the FDR at wavebands centered at 424, 1668 and 918 nm, as well as with the low-scale (scale = 2, 4) Morlet, Coiflets and Gaussian WCs. Accordingly, linear, exponential and polynomial equations between each pasture variable and the FDR or WCs were developed. Validation of the developed equations indicated that the polynomial model with an independent variable of Coiflets WCs (scale = 4, wavelength = 1209 nm), the polynomial model with an independent variable of FDR, and the exponential model with an independent variable of FDR were the optimal models for predicting the concentrations of CP, CFA and CFI, respectively. The R2 of the pasture quality estimation models was between 0.646 and 0.762 at the 0.01 significance level. Results suggest that the first derivatives or wavelet coefficients of hyperspectral reflectance in the visible and near-infrared regions can be used for pasture quality estimation, providing a basis for real-time prediction of pasture quality using remote sensing techniques.
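
    A brief sketch of the spectral pre-processing described above: smooth the reflectance spectrum, then take the first derivative of reflectance (FDR) with respect to wavelength. A Savitzky-Golay filter stands in for the unspecified denoising step, and the spectrum is synthetic.

    ```python
    # Sketch: smoothing followed by first derivative of reflectance (FDR).
    import numpy as np
    from scipy.signal import savgol_filter

    wavelengths = np.arange(350, 2501)           # nm, the ASD spectral range
    rng = np.random.default_rng(0)
    reflectance = 0.4 + 0.2 * np.sin(wavelengths / 300.0) + rng.normal(0, 0.002, wavelengths.size)

    smoothed = savgol_filter(reflectance, window_length=21, polyorder=3)
    fdr = np.gradient(smoothed, wavelengths)     # first derivative of reflectance

    band_424 = fdr[wavelengths == 424][0]        # FDR at one diagnostic waveband
    print(band_424)
    ```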

  16. Auxiliary Parameter MCMC for Exponential Random Graph Models

    NASA Astrophysics Data System (ADS)

    Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro

    2016-11-01

    Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.

  17. Ex vivo pharmacology of surgical samples of the uterosacral ligament. Part I: Effects of carbachol and oxytocin on smooth muscle.

    PubMed

    Drews, Ulrich; Renz, Matthias; Busch, Christian; Reisenauer, Christl

    2012-11-01

    In a previous study we observed impaired smooth muscle in the uterosacral ligament (USL) of patients with pelvic organ prolapse. The aims of this study were to describe the novel microperfusion system and to determine the normal function and pharmacology of smooth muscle in the USL. Samples from the USL were obtained during hysterectomy for benign indications. Small stretches of connective tissue were mounted in a perfusion chamber under the stereomicroscope. Isotonic contractions of smooth muscle were monitored by digital time-lapse video and quantified by image processing. Constant perfusion with carbachol elicited tonic contractions, and pulse stimulation with carbachol and oxytocin elicited rhythmic contractions, of smooth muscle in the ground reticulum. Under constant perfusion with relaxin, the tonic contraction after carbachol was abolished. With the novel microperfusion system, isotonic contractions of smooth muscle in the USL can be recorded and quantified in the tissue microenvironment at the microscopic level. USL smooth muscle is cholinergic, is stimulated by oxytocin and is modulated by relaxin. Copyright © 2012 Wiley Periodicals, Inc.

  18. Steady-state shear flows via nonequilibrium molecular dynamics and smooth-particle applied mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Posch, H.A.; Hoover, W.G.; Kum, O.

    1995-08-01

    We simulate both microscopic and macroscopic shear flows in two space dimensions using nonequilibrium molecular dynamics and smooth-particle applied mechanics. The time-reversible microscopic equations of motion are isomorphic to the smooth-particle description of inviscid macroscopic continuum mechanics. The corresponding microscopic particle interactions are relatively weak and long ranged. Though conventional Green-Kubo theory suggests instability or divergence in two-dimensional flows, we successfully define and measure a finite shear viscosity coefficient by simulating stationary plane Couette flow. The special nature of the weak long-ranged smooth-particle functions corresponds to an unusual kind of microscopic transport. This microscopic analog is mainly kinetic, even at high density. For the soft Lucy potential which we use in the present work, nearly all the system energy is potential, but the resulting shear viscosity is nearly all kinetic. We show that the measured shear viscosities can be understood in terms of a simple weak-scattering model, and that this understanding is useful in assessing the usefulness of continuum simulations using the smooth-particle method. We apply that method to the Rayleigh-Bénard problem of thermally driven convection in a gravitational field.

  19. Low-loss integrated electrical surface plasmon source with ultra-smooth metal film fabricated by polymethyl methacrylate 'bond and peel' method.

    PubMed

    Liu, Wenjie; Hu, Xiaolong; Zou, Qiushun; Wu, Shaoying; Jin, Chongjun

    2018-06-15

    External light sources are mostly employed to functionalize plasmonic components, resulting in a bulky footprint. Electrically driven integrated plasmonic devices, combining ultra-compact critical feature sizes with extremely high transmission speeds and low power consumption, can link plasmonics with the present-day electronic world. To achieve this prospect, suppressing the losses in plasmonic devices becomes a pressing issue. In this work, we developed a novel polymethyl methacrylate 'bond and peel' method to fabricate metal films with sub-nanometer smooth surfaces on semiconductor wafers. Based on this method, we further fabricated a compact plasmonic source containing a metal-insulator-metal (MIM) waveguide with an ultra-smooth metal surface on a GaAs-based light-emitting diode wafer. An increase in the propagation length of the SPP mode by a factor of 2.95 was achieved compared with the conventional device containing a relatively rough metal surface. Numerical calculations further confirmed that the propagation length is comparable to the theoretical prediction for the MIM waveguide with perfectly smooth metal surfaces. This method facilitates low-loss, highly integrated, electrically driven plasmonic devices and thus provides an immediate opportunity for the practical application of on-chip integrated plasmonic circuits.

  20. Low-loss integrated electrical surface plasmon source with ultra-smooth metal film fabricated by polymethyl methacrylate ‘bond and peel’ method

    NASA Astrophysics Data System (ADS)

    Liu, Wenjie; Hu, Xiaolong; Zou, Qiushun; Wu, Shaoying; Jin, Chongjun

    2018-06-01

    External light sources are mostly employed to functionalize plasmonic components, resulting in a bulky footprint. Electrically driven integrated plasmonic devices, combining ultra-compact critical feature sizes with extremely high transmission speeds and low power consumption, can link plasmonics with the present-day electronic world. To achieve this prospect, suppressing the losses in plasmonic devices becomes a pressing issue. In this work, we developed a novel polymethyl methacrylate 'bond and peel' method to fabricate metal films with sub-nanometer smooth surfaces on semiconductor wafers. Based on this method, we further fabricated a compact plasmonic source containing a metal-insulator-metal (MIM) waveguide with an ultra-smooth metal surface on a GaAs-based light-emitting diode wafer. An increase in the propagation length of the SPP mode by a factor of 2.95 was achieved compared with the conventional device containing a relatively rough metal surface. Numerical calculations further confirmed that the propagation length is comparable to the theoretical prediction for the MIM waveguide with perfectly smooth metal surfaces. This method facilitates low-loss, highly integrated, electrically driven plasmonic devices and thus provides an immediate opportunity for the practical application of on-chip integrated plasmonic circuits.

  1. Exponential Megapriming PCR (EMP) Cloning—Seamless DNA Insertion into Any Target Plasmid without Sequence Constraints

    PubMed Central

    Ulrich, Alexander; Andersen, Kasper R.; Schwartz, Thomas U.

    2012-01-01

    We present a fast, reliable and inexpensive restriction-free cloning method for seamless DNA insertion into any plasmid without sequence limitation. Exponential megapriming PCR (EMP) cloning requires two consecutive PCR steps and can be carried out in one day. We show that EMP cloning has a higher efficiency than restriction-free (RF) cloning, especially for long inserts above 2.5 kb. EMP further enables simultaneous cloning of multiple inserts. PMID:23300917

  2. Exponential megapriming PCR (EMP) cloning--seamless DNA insertion into any target plasmid without sequence constraints.

    PubMed

    Ulrich, Alexander; Andersen, Kasper R; Schwartz, Thomas U

    2012-01-01

    We present a fast, reliable and inexpensive restriction-free cloning method for seamless DNA insertion into any plasmid without sequence limitation. Exponential megapriming PCR (EMP) cloning requires two consecutive PCR steps and can be carried out in one day. We show that EMP cloning has a higher efficiency than restriction-free (RF) cloning, especially for long inserts above 2.5 kb. EMP further enables simultaneous cloning of multiple inserts.

  3. A generalized exponential link function to map a conflict indicator into severity index within safety continuum framework.

    PubMed

    Zheng, Lai; Ismail, Karim

    2017-05-01

    Traffic conflict indicators measure the temporal and spatial proximity of conflict-involved road users. These indicators can reflect the severity of traffic conflicts to a reliable extent. Instead of using the indicator value directly as a severity index, many link functions have been developed to map the conflict indicator to a severity index. However, little information is available about the choice of a particular link function. To guard against link misspecification or subjectivity, a generalized exponential link function was developed. The severity index generated by this link was introduced into a parametric safety continuum model which objectively models the centre and tail regions. An empirical method, together with a full Bayesian estimation method, was adopted to estimate model parameters. The safety implication of the return level was calculated based on the model parameters. The proposed approach was applied to conflict and crash data collected from 21 segments of three freeways located in Guangdong province, China. Pearson's correlation test between return levels and observed crashes showed that a θ value of 1.2 was the best choice of the generalized parameter for the current data set. This provides statistical support for using the generalized exponential link function. With the generalized exponential link function so determined, the visualization of the parametric safety continuum was found to be a gyroscope-shaped hierarchy. Copyright © 2017 Elsevier Ltd. All rights reserved.
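
    The abstract does not spell out the link's functional form, so the sketch below assumes a plausible generalized exponential link, S(x) = exp(−(x/β)^θ), mapping a proximity indicator x (e.g. time-to-collision, smaller meaning more severe) onto a severity index in (0, 1]; θ = 1.2 is the value the study found best, while β is purely hypothetical.

    ```python
    # Sketch: an assumed generalized exponential link from a proximity
    # indicator to a severity index; the functional form and beta are guesses.
    import numpy as np

    def severity(x, theta=1.2, beta=1.5):
        return np.exp(-(np.asarray(x) / beta) ** theta)

    ttc = np.array([0.5, 1.0, 2.0, 4.0])   # time-to-collision values (s)
    print(severity(ttc))                   # decays toward 0 for safer encounters
    ```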

  4. A Numerical Method for Calculating Stellar Occultation Light Curves from an Arbitrary Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Chamberlain, D. M.; Elliot, J. L.

    1997-01-01

    We present a method for speeding up numerical calculations of a light curve for a stellar occultation by a planetary atmosphere with an arbitrary atmospheric model that has spherical symmetry. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of the model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^-4 of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to values of parameters with fractional errors of no more than 10^-4, with the largest errors occurring in small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.

  5. Modelling wildland fire propagation by tracking random fronts

    NASA Astrophysics Data System (ADS)

    Pagnini, G.; Mentrelli, A.

    2014-08-01

    Wildland fire propagation is studied in the literature by two alternative approaches, namely the reaction-diffusion equation and the level-set method. These two approaches are usually considered alternatives because the solution of the reaction-diffusion equation is generally a continuous smooth function with an exponential decay that is nonzero over an infinite domain, while the level-set method, a front-tracking technique, generates a sharp function that is nonzero only inside a compact domain. However, the two approaches can in fact be considered complementary and reconciled. Turbulent hot-air transport and fire spotting are phenomena of a random nature and are extremely important in wildland fire propagation. Consequently, the fire front acquires a random character, too; hence, a tracking method for random fronts is needed. In particular, the level-set contour is randomised here according to the probability density function of the interface particle displacement. When the level-set method is developed to track a front interface with random motion, the resulting averaged process turns out to be governed by an evolution equation of reaction-diffusion type. In this reconciled approach, the rate of spread of the fire keeps the same key, characterising role that it has in the level-set approach. The resulting model is suitable for simulating effects due to turbulent convection, such as fire flank and backing fire, faster fire spread caused by hot-air pre-heating and ember landing, and fire overcoming a fire-break zone, a case not resolved by models based on the level-set method. Moreover, a correction to the rate-of-spread formula follows from the proposed formulation, accounting for the mean jump length of firebrands in the downwind direction for the leeward sector of the fireline contour. The presented study constitutes a proof of concept and requires future validation.

  6. A Comparison of Spatial Analysis Methods for the Construction of Topographic Maps of Retinal Cell Density

    PubMed Central

    Garza-Gisholt, Eduardo; Hemmi, Jan M.; Hart, Nathan S.; Collin, Shaun P.

    2014-01-01

Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed ‘by eye’. With the use of a stereological approach to counting neuronal distributions, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation ‘respects’ the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but consequently includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the ‘noise’ caused by artefacts and permits a clearer representation of the dominant, ‘real’ distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome. PMID:24747568
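
    The contrast between interpolation and kernel smoothing drawn above can be reproduced with standard tools; here is a minimal Python sketch (the authors' R script is not reproduced; the sample sites, synthetic density and kernel width are assumptions):

      # Interpolation respects the noisy counts; Gaussian kernel smoothing
      # suppresses the noise at the cost of fidelity to individual samples.
      import numpy as np
      from scipy.interpolate import griddata
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(0)
      pts = rng.uniform(0, 1, size=(300, 2))         # sampling sites
      true = np.exp(-((pts[:, 0] - 0.5) ** 2 + (pts[:, 1] - 0.5) ** 2) / 0.05)
      counts = true + rng.normal(0, 0.1, 300)        # noisy cell counts

      gx, gy = np.mgrid[0:1:100j, 0:1:100j]
      interp = griddata(pts, counts, (gx, gy), method='linear')
      smooth = gaussian_filter(np.nan_to_num(interp), sigma=4)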

  7. Stationarity and periodicities of linear speed of coronal mass ejection: a statistical signal processing approach

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Anirban; Khondekar, Mofazzal Hossain; Bhattacharjee, Anup Kumar

    2017-09-01

In this paper we search for the periodicities of the linear speed of coronal mass ejections (CMEs) in solar cycle 23. Double exponential smoothing and the Discrete Wavelet Transform are used for detrending and filtering the CME linear-speed time series. To choose the appropriate statistical methodology, the Smoothed Pseudo Wigner-Ville distribution (SPWVD) is used beforehand to confirm the non-stationarity of the time series. Time-frequency representation tools, namely the Hilbert-Huang Transform and Empirical Mode Decomposition, are implemented to reveal the underlying periodicities in the non-stationary time series of the CME linear speed. Of all the periodicities with more than 95% confidence level, the relevant ones are segregated using an integral peak detection algorithm. The observed periodicities are of low scale, ranging from 2 to 159 days, with relevant periods at 4 days, 10 days, 11 days, 12 days, 13.7 days, 14.5 days and 21.6 days. These short-range periodicities indicate that the probable origin of the CMEs is the active longitudes and the magnetic flux network of the Sun. The results also suggest a probable mutual influence and causality with other solar activities (such as solar radio emission, the Ap index and solar wind speed), owing to the similarity between their periods and the CME linear-speed periods. The periodicities of 4 days and 10 days indicate the possible existence of Rossby-type (planetary) waves in the Sun.
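
    As a concrete illustration of the detrending step, here is a minimal Python sketch of double exponential smoothing (Brown's variant) applied to a synthetic speed series; the smoothing constant and the synthetic data are assumptions, not values from the study:

      import numpy as np

      def double_exp_smooth(x, alpha=0.1):
          s1 = np.empty_like(x); s2 = np.empty_like(x)
          s1[0] = s2[0] = x[0]
          for t in range(1, len(x)):
              s1[t] = alpha * x[t] + (1 - alpha) * s1[t - 1]   # first pass
              s2[t] = alpha * s1[t] + (1 - alpha) * s2[t - 1]  # second pass
          return 2 * s1 - s2                                   # trend estimate

      rng = np.random.default_rng(1)
      days = np.arange(2000)
      speed = (400 + 0.05 * days + 50 * np.sin(2 * np.pi * days / 27)
               + rng.normal(0, 30, days.size))        # synthetic CME speeds
      residual = speed - double_exp_smooth(speed)     # series for spectral analysis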

  8. Adaptive-weighted Total Variation Minimization for Sparse Data toward Low-dose X-ray Computed Tomography Image Reconstruction

    PubMed Central

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-01-01

Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, a piecewise-smooth X-ray computed tomography (CT) image can be reconstructed from sparse-view projection data without introducing noticeable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several noticeable gains, in terms of noise-resolution tradeoff plots and full width at half maximum values, as compared to the corresponding conventional TV-POCS algorithm. PMID:23154621
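
    The edge-preserving weighting described above takes one line per direction; a minimal Python sketch of an AwTV-style objective under stated assumptions (the edge-sensitivity parameter delta is illustrative, and the data-constraint and POCS machinery are omitted):

      import numpy as np

      def awtv(f, delta=0.01):
          # anisotropic neighbor differences on a cropped grid
          dx = f[1:, :-1] - f[:-1, :-1]
          dy = f[:-1, 1:] - f[:-1, :-1]
          wx = np.exp(-(dx / delta) ** 2)   # weight falls off across edges
          wy = np.exp(-(dy / delta) ** 2)
          return np.sum(np.sqrt(wx * dx ** 2 + wy * dy ** 2))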

  9. Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction.

    PubMed

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-12-07

Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, a piecewise-smooth x-ray computed tomography (CT) image can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.

  10. Past and projected trends of body mass index and weight status in South Australia: 2003 to 2019.

    PubMed

    Hendrie, Gilly A; Ullah, Shahid; Scott, Jane A; Gray, John; Berry, Narelle; Booth, Sue; Carter, Patricia; Cobiac, Lynne; Coveney, John

    2015-12-01

Functional data analysis (FDA) is a forecasting approach that, to date, has not been applied to obesity, and that may provide more accurate forecasting analysis to manage uncertainty in public health. This paper uses FDA to provide projections of Body Mass Index (BMI), overweight and obesity in an Australian population through to 2019. Data from the South Australian Monitoring and Surveillance System (January 2003 to December 2012, n=51,618 adults) were collected via telephone interview survey. FDA was conducted in four steps: 1) age-gender specific BMIs for each year were smoothed using a weighted regression; 2) the functional principal components decomposition was applied to estimate the basis functions; 3) an exponential smoothing state space model was used for forecasting the coefficient series; and 4) the forecast coefficients were combined with the basis functions. The forecast models suggest that between 2012 and 2019 average BMI will increase from 27.2 kg/m^2 to 28.0 kg/m^2 in males and from 26.4 kg/m^2 to 27.6 kg/m^2 in females. The prevalence of obesity is forecast to increase by 6-7 percentage points by 2019 (to 28.7% in males and 29.2% in females). The projections identify age-gender groups at greatest risk of obesity over time. This novel approach will be useful for facilitating more accurate planning and policy development. © 2015 Public Health Association of Australia.
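
    The four FDA steps map directly onto standard numerical tools; a minimal Python sketch on synthetic data (array shapes, the component count, the additive-trend specification and the availability of statsmodels are assumptions, not the paper's settings):

      import numpy as np
      from statsmodels.tsa.holtwinters import ExponentialSmoothing

      rng = np.random.default_rng(2)
      years, ages = 10, 60
      bmi = (25 + 0.05 * np.arange(years)[:, None]
             + rng.normal(0, 0.2, (years, ages)))     # smoothed age profiles

      mean = bmi.mean(axis=0)
      U, s, Vt = np.linalg.svd(bmi - mean, full_matrices=False)  # step 2
      scores = U * s                                   # coefficient series

      horizon = 7
      fc = np.column_stack([
          ExponentialSmoothing(scores[:, k], trend='add').fit().forecast(horizon)
          for k in range(2)])                          # step 3, per component
      bmi_fc = mean + fc @ Vt[:2]                      # step 4: recombine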

  11. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-D trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers that are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method, which smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malashko, Ya I; Khabibulin, V M

    We have derived analytical expressions, verified by the methods of numerical simulation, to evaluate the angular divergence of nondiffractive laser beams containing smooth aberrations, i.e., spherical defocusing, astigmatism and toroid. Using these expressions we have formulated the criteria for admissible values of smooth aberrations. (laser applications and other topics in quantum electronics)

  13. Exponentially fitted symplectic Runge-Kutta-Nyström methods derived by partitioned Runge-Kutta methods

    NASA Astrophysics Data System (ADS)

    Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.

    2013-10-01

In this work we derive symplectic EF/TF RKN methods from symplectic EF/TF PRK methods. EF/TF symplectic RKN methods are also constructed directly from classical symplectic RKN methods. Several numerical examples are given in order to determine the most favourable implementation.

  14. Functional overestimation due to spatial smoothing of fMRI data.

    PubMed

    Liu, Peng; Calhoun, Vince; Chen, Zikuan

    2017-11-01

Pearson correlation (simply, correlation) is a basic technique for neuroimage functional analysis. It has been observed that spatial smoothing may cause functional overestimation, a phenomenon that remains incompletely understood. Herein, we present a theoretical explanation from the perspective of correlation scale invariance. For a task-evoked spatiotemporal functional dataset, we can extract the functional spatial map by calculating the temporal correlations (tcorr) of voxel timecourses against the task timecourse. From the relationship between the image noise level (changed through spatial smoothing) and the tcorr map calculation, we show that spatial smoothing causes a noise reduction, which in turn smooths the tcorr map and leads to a spatial expansion of the estimated neuroactivity blob. Through numerical simulations and subject experiments, we show that spatial smoothing of fMRI data may overestimate activation spots in the correlation functional map. Our results suggest applying only a small spatial smoothing (with a kernel whose full width at half maximum (FWHM) is no more than two voxels) in fMRI data processing for correlation-based functional mapping. Comparison with existing methods: in the extreme noiseless case, the scale-invariance property of correlation defines a meaningless binary tcorr map. In reality, a functional activity blob in a tcorr map is shaped by the effect of image noise on correlative responses. We may reduce the data noise level by smoothing, which in turn poses a smoothing effect on correlation. This logic allows us to understand the noise dependence and the smoothing effect of correlation-based fMRI data analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
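
    The overestimation effect is easy to reproduce; a minimal Python sketch (the synthetic task, blob location and smoothing kernel are assumptions) that counts supra-threshold voxels in the tcorr map before and after smoothing:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(3)
      T, n = 100, 64
      task = np.sin(np.linspace(0, 8 * np.pi, T))
      data = rng.normal(0, 1, (T, n, n))
      data[:, 28:36, 28:36] += task[:, None, None]     # small active blob

      def tcorr(d):
          dc = d - d.mean(0); tc = task - task.mean()
          return (dc * tc[:, None, None]).sum(0) / np.sqrt(
              (dc ** 2).sum(0) * (tc ** 2).sum())

      raw = tcorr(data)
      smoothed = tcorr(gaussian_filter(data, sigma=(0, 2, 2)))
      print((raw > 0.5).sum(), (smoothed > 0.5).sum())  # blob grows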

  15. A Fast Smoothing Algorithm for Post-Processing of Surface Reflectance Spectra Retrieved from Airborne Imaging Spectrometer Data

    PubMed Central

    Gao, Bo-Cai; Liu, Ming

    2013-01-01

Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which contains the minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
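
    A minimal Python sketch of the common-gain idea (the synthetic spectra, the shared multiplicative ripple and the spline smoothing factor are assumptions; the operational algorithm differs in detail):

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      wl = np.linspace(0.4, 2.5, 200)              # wavelength (micron)
      ripple = 1 + 0.02 * np.sin(60 * wl)          # shared calibration artifact
      spectra = np.array([(0.3 + 0.1 * np.sin(i + 3 * wl)) * ripple
                          for i in range(50)])     # retrieved spectra

      ratios = [UnivariateSpline(wl, sp, s=0.01)(wl) / sp for sp in spectra]
      gain = np.mean(ratios, axis=0)               # common gain curve
      corrected = spectra * gain                   # applied scene-wide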

  16. Cystic Fibrosis Transmembrane Conductance Regulator in Sarcoplasmic Reticulum of Airway Smooth Muscle. Implications for Airway Contractility

    PubMed Central

    Cook, Daniel P.; Rector, Michael V.; Bouzek, Drake C.; Michalski, Andrew S.; Gansemer, Nicholas D.; Reznikov, Leah R.; Li, Xiaopeng; Stroik, Mallory R.; Ostedgaard, Lynda S.; Abou Alaiwa, Mahmoud H.; Thompson, Michael A.; Prakash, Y. S.; Krishnan, Ramaswamy; Meyerholz, David K.; Seow, Chun Y.

    2016-01-01

    Rationale: An asthma-like airway phenotype has been described in people with cystic fibrosis (CF). Whether these findings are directly caused by loss of CF transmembrane conductance regulator (CFTR) function or secondary to chronic airway infection and/or inflammation has been difficult to determine. Objectives: Airway contractility is primarily determined by airway smooth muscle. We tested the hypothesis that CFTR is expressed in airway smooth muscle and directly affects airway smooth muscle contractility. Methods: Newborn pigs, both wild type and with CF (before the onset of airway infection and inflammation), were used in this study. High-resolution immunofluorescence was used to identify the subcellular localization of CFTR in airway smooth muscle. Airway smooth muscle function was determined with tissue myography, intracellular calcium measurements, and regulatory myosin light chain phosphorylation status. Precision-cut lung slices were used to investigate the therapeutic potential of CFTR modulation on airway reactivity. Measurements and Main Results: We found that CFTR localizes to the sarcoplasmic reticulum compartment of airway smooth muscle and regulates airway smooth muscle tone. Loss of CFTR function led to delayed calcium reuptake following cholinergic stimulation and increased myosin light chain phosphorylation. CFTR potentiation with ivacaftor decreased airway reactivity in precision-cut lung slices following cholinergic stimulation. Conclusions: Loss of CFTR alters porcine airway smooth muscle function and may contribute to the airflow obstruction phenotype observed in human CF. Airway smooth muscle CFTR may represent a therapeutic target in CF and other diseases of airway narrowing. PMID:26488271

  17. Design and simulation of origami structures with smooth folds

    PubMed Central

    Peraza Hernandez, E. A.; Lagoudas, D. C.

    2017-01-01

    Origami has enabled new approaches to the fabrication and functionality of multiple structures. Current methods for origami design are restricted to the idealization of folds as creases of zeroth-order geometric continuity. Such an idealization is not proper for origami structures of non-negligible fold thickness or maximum curvature at the folds restricted by material limitations. For such structures, folds are not properly represented as creases but rather as bent regions of higher-order geometric continuity. Such fold regions of arbitrary order of continuity are termed as smooth folds. This paper presents a method for solving the following origami design problem: given a goal shape represented as a polygonal mesh (termed as the goal mesh), find the geometry of a single planar sheet, its pattern of smooth folds, and the history of folding motion allowing the sheet to approximate the goal mesh. The parametrization of the planar sheet and the constraints that allow for a valid pattern of smooth folds are presented. The method is tested against various goal meshes having diverse geometries. The results show that every determined sheet approximates its corresponding goal mesh in a known folded configuration having fold angles obtained from the geometry of the goal mesh. PMID:28484322

  18. Design and simulation of origami structures with smooth folds.

    PubMed

    Peraza Hernandez, E A; Hartl, D J; Lagoudas, D C

    2017-04-01

Origami has enabled new approaches to the fabrication and functionality of multiple structures. Current methods for origami design are restricted to the idealization of folds as creases of zeroth-order geometric continuity. Such an idealization is not proper for origami structures of non-negligible fold thickness or maximum curvature at the folds restricted by material limitations. For such structures, folds are not properly represented as creases but rather as bent regions of higher-order geometric continuity. Such fold regions of arbitrary order of continuity are termed as smooth folds. This paper presents a method for solving the following origami design problem: given a goal shape represented as a polygonal mesh (termed as the goal mesh), find the geometry of a single planar sheet, its pattern of smooth folds, and the history of folding motion allowing the sheet to approximate the goal mesh. The parametrization of the planar sheet and the constraints that allow for a valid pattern of smooth folds are presented. The method is tested against various goal meshes having diverse geometries. The results show that every determined sheet approximates its corresponding goal mesh in a known folded configuration having fold angles obtained from the geometry of the goal mesh.

  19. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.

    PubMed

    Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C

    2017-06-01

The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution are illustrated with an uncensored data set, and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistic show that the EETE distribution provides a more reasonable fit than the other competing distributions.

  20. Scalar-fluid interacting dark energy: Cosmological dynamics beyond the exponential potential

    NASA Astrophysics Data System (ADS)

    Dutta, Jibitesh; Khyllep, Wompherdeiki; Tamanini, Nicola

    2017-01-01

    We extend the dynamical systems analysis of scalar-fluid interacting dark energy models performed in C. G. Boehmer et al., Phys. Rev. D 91, 123002 (2015), 10.1103/PhysRevD.91.123002 by considering scalar field potentials beyond the exponential type. The properties and stability of critical points are examined using a combination of linear analysis, computational methods and advanced mathematical techniques, such as center manifold theory. We show that the interesting results obtained with an exponential potential can generally be recovered also for more complicated scalar field potentials. In particular, employing power law and hyperbolic potentials as examples, we find late time accelerated attractors, transitions from dark matter to dark energy domination with specific distinguishing features, and accelerated scaling solutions capable of solving the cosmic coincidence problem.

  1. Restoring a smooth function from its noisy integrals

    NASA Astrophysics Data System (ADS)

    Goulko, Olga; Prokof'ev, Nikolay; Svistunov, Boris

    2018-05-01

    Numerical (and experimental) data analysis often requires the restoration of a smooth function from a set of sampled integrals over finite bins. We present the bin hierarchy method that efficiently computes the maximally smooth function from the sampled integrals using essentially all the information contained in the data. We perform extensive tests with different classes of functions and levels of data quality, including Monte Carlo data suffering from a severe sign problem and physical data for the Green's function of the Fröhlich polaron.
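
    A minimal Python sketch of the underlying least-squares problem (expanding the unknown function in Legendre polynomials and matching analytic bin integrals; the basis size and binning are assumptions, and the paper's hierarchical bin construction is not reproduced):

      import numpy as np
      from numpy.polynomial import legendre as L

      edges = np.linspace(-1, 1, 21)                  # 20 bins on [-1, 1]
      f = lambda x: np.exp(-4 * x ** 2)
      y = np.array([np.trapz(f(np.linspace(a, b, 200)), np.linspace(a, b, 200))
                    for a, b in zip(edges[:-1], edges[1:])])  # sampled integrals

      K = 8                                           # basis size (assumed)
      A = np.empty((len(y), K))
      for k in range(K):
          c = np.zeros(K); c[k] = 1
          anti = L.legint(c)                          # antiderivative coeffs
          vals = L.legval(edges, anti)
          A[:, k] = vals[1:] - vals[:-1]              # exact bin integrals
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      f_rec = lambda x: L.legval(x, coef)             # restored smooth function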

  2. Evaluating an alternative method for rapid urinary creatinine determination

    EPA Science Inventory

Creatinine (CR) is an endogenously produced chemical routinely assayed in urine specimens to assess kidney function and sample dilution. The industry-standard method for CR determination, known as the kinetic Jaffe (KJ) method, relies on an exponential rate of a colorimetric change,...

  3. Smoothed Particle Hydrodynamics and its applications for multiphase flow and reactive transport in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Alexandre M.; Trask, Nathaniel; Pan, K.

    2016-03-11

    Smoothed Particle Hydrodynamics (SPH) is a Lagrangian method based on a meshless discretization of partial differential equations. In this review, we present SPH discretization of the Navier-Stokes and Advection-Diffusion-Reaction equations, implementation of various boundary conditions, and time integration of the SPH equations, and we discuss applications of the SPH method for modeling pore-scale multiphase flows and reactive transport in porous and fractured media.

  4. Responses of enzymatically isolated mammalian vascular smooth muscle cells to pharmacological and electrical stimuli.

    PubMed

    DeFeo, T T; Morgan, K G

    1985-05-01

    A modified method for enzymatically isolating mammalian vascular smooth muscle cells has been developed and tested for ferret portal vein smooth muscle. This method produces a high proportion of fully relaxed cells and these cells appear to have normal pharmacological responsiveness. The ED50 values for both alpha stimulation and potassium depolarization are not significantly different in the isolated cells from those obtained from intact strips of ferret portal vein, suggesting that the enzymatic treatment does not destroy receptors or alter the electrical responsiveness of the cells. It was also possible to demonstrate a vasodilatory action of papaverine, nitroprusside and adenosine directly on the isolated cells indicating that the pathways involved are intact in the isolated cells. This method should be of considerable usefulness, particularly in combination with the new fluorescent indicators and cell sorter techniques which require isolated cells.

  5. Periodic bidirectional associative memory neural networks with distributed delays

    NASA Astrophysics Data System (ADS)

    Chen, Anping; Huang, Lihong; Liu, Zhigang; Cao, Jinde

    2006-05-01

Some sufficient conditions are obtained for the existence and global exponential stability of a periodic solution to general bidirectional associative memory (BAM) neural networks with distributed delays by using the continuation theorem of Mawhin's coincidence degree theory, the Lyapunov functional method and Young's inequality. These results are helpful for designing globally exponentially stable, periodically oscillatory BAM neural networks, and the conditions can be easily verified and applied in practice. An example is also given to illustrate the results.

  6. Global exponential stability of BAM neural networks with time-varying delays: The discrete-time case

    NASA Astrophysics Data System (ADS)

    Raja, R.; Marshal Anthoni, S.

    2011-02-01

This paper deals with the problem of stability analysis for a class of discrete-time bidirectional associative memory (BAM) neural networks with time-varying delays. By employing a Lyapunov functional and the linear matrix inequality (LMI) approach, a new sufficient condition is proposed for the global exponential stability of discrete-time BAM neural networks. The proposed LMI-based results can be easily checked with the LMI control toolbox. Moreover, an example is provided to demonstrate the effectiveness of the proposed method.

  7. s-Ordered Exponential of Quadratic Forms Gained via IWSOP Technique

    NASA Astrophysics Data System (ADS)

    Bazrafkan, M. R.; Shähandeh, F.; Nahvifard, E.

    2012-11-01

Using the generalized $\bar{s}$-ordered Wigner operator, in which $\bar{s}$ is a vector over the field of complex numbers, the technique of integration within an s-ordered product of operators (IWSOP) has been extended to the multimode case. Applying this method, we derive the $\bar{s}$-ordered form of the widely applicable multimode exponential of a quadratic form, $\exp\{\sum_{i,j=1}^{n} a_i^{\dagger} \Lambda_{ij} a_j\}$, each mode being in some particular order $s_i$.

  8. Human Motion Perception and Smooth Eye Movements Show Similar Directional Biases for Elongated Apertures

    NASA Technical Reports Server (NTRS)

    Beutter, Brent R.; Stone, Leland S.

    1997-01-01

Although numerous studies have examined the relationship between smooth-pursuit eye movements and motion perception, it remains unresolved whether a common motion-processing system subserves both perception and pursuit. To address this question, we simultaneously recorded perceptual direction judgments and the concomitant smooth eye movement response to a plaid stimulus that we have previously shown generates systematic perceptual errors. We measured the perceptual direction biases psychophysically and the smooth eye-movement direction biases using two methods (standard averaging and oculometric analysis). We found that the perceptual and oculomotor biases were nearly identical, suggesting that pursuit and perception share a critical motion processing stage, perhaps in area MT or MST of extrastriate visual cortex.

  9. Human motion perception and smooth eye movements show similar directional biases for elongated apertures

    NASA Technical Reports Server (NTRS)

    Beutter, B. R.; Stone, L. S.

    1998-01-01

    Although numerous studies have examined the relationship between smooth-pursuit eye movements and motion perception, it remains unresolved whether a common motion-processing system subserves both perception and pursuit. To address this question, we simultaneously recorded perceptual direction judgments and the concomitant smooth eye-movement response to a plaid stimulus that we have previously shown generates systematic perceptual errors. We measured the perceptual direction biases psychophysically and the smooth eye-movement direction biases using two methods (standard averaging and oculometric analysis). We found that the perceptual and oculomotor biases were nearly identical, suggesting that pursuit and perception share a critical motion processing stage, perhaps in area MT or MST of extrastriate visual cortex.

  10. Isentropic compressive wave generator impact pillow and method of making same

    DOEpatents

    Barker, Lynn M.

    1985-01-01

An isentropic compressive wave generator and method of making same. The wave generator comprises a disk or flat "pillow" member having component materials of different shock impedances formed in a configuration resulting in a smooth shock impedance gradient over its thickness, for interpositioning between an impactor member and a target specimen to produce a shock wave with a smooth, predictable rise time. The method of making the pillow member comprises reducing the component materials to powder form and forming the pillow member by sedimentation and compressive techniques.

  11. Isentropic compressive wave generator and method of making same

    DOEpatents

    Barker, L.M.

    An isentropic compressive wave generator and method of making same are disclosed. The wave generator comprises a disk or flat pillow member having component materials of different shock impedances formed in a configuration resulting in a smooth shock impedance gradient over the thickness thereof for interpositioning between an impactor member and a target specimen for producing a shock wave of a smooth predictable rise time. The method of making the pillow member comprises the reduction of the component materials to a powder form and forming the pillow member by sedimentation and compressive techniques.

  12. Method for smoothing the surface of a protective coating

    DOEpatents

    Sangeeta, D.; Johnson, Curtis Alan; Nelson, Warren Arthur

    2001-01-01

    A method for smoothing the surface of a ceramic-based protective coating which exhibits roughness is disclosed. The method includes the steps of applying a ceramic-based slurry or gel coating to the protective coating surface; heating the slurry/gel coating to remove volatile material; and then further heating the slurry/gel coating to cure the coating and bond it to the underlying protective coating. The slurry/gel coating is often based on yttria-stabilized zirconia, and precursors of an oxide matrix. Related articles of manufacture are also described.

  13. Development of methods and specifications for the use of inertial profilers and the international roughness index for newly constructed pavement.

    DOT National Transportation Integrated Search

    2013-06-01

The Indiana Department of Transportation (INDOT) is currently utilizing a profilograph and the profile index for measuring smoothness assurance for newly constructed pavements. However, there are benefits to implementing a new IRI-based smoothness ...

  14. Exponentially Enhanced Light-Matter Interaction, Cooperativities, and Steady-State Entanglement Using Parametric Amplification

    NASA Astrophysics Data System (ADS)

    Qin, Wei; Miranowicz, Adam; Li, Peng-Bo; Lü, Xin-You; You, J. Q.; Nori, Franco

    2018-03-01

We propose an experimentally feasible method for enhancing the atom-field coupling as well as the ratio between this coupling and dissipation (i.e., the cooperativity) in an optical cavity. It exploits optical parametric amplification to exponentially enhance the atom-cavity interaction and, hence, the cooperativity of the system, with the squeezing-induced noise being completely eliminated. Consequently, the atom-cavity system can be driven from the weak-coupling regime to the strong-coupling regime for modest squeezing parameters, and can even achieve an effective cooperativity much larger than 100. Based on this, we further demonstrate the generation of steady-state, nearly maximal quantum entanglement. The resulting entanglement infidelity (which quantifies the deviation of the actual state from a maximally entangled state) is exponentially smaller than the lower bound on the infidelities obtained in other dissipative entanglement preparations without squeezing. In principle, an arbitrarily small infidelity can be achieved. Our generic method for enhancing atom-cavity interaction and cooperativities can be implemented in a wide range of physical systems, and can provide diverse applications for quantum information processing.

  15. a Fast Segmentation Algorithm for C-V Model Based on Exponential Image Sequence Generation

    NASA Astrophysics Data System (ADS)

    Hu, J.; Lu, L.; Xu, J.; Zhang, J.

    2017-09-01

For island coastline segmentation, a fast segmentation algorithm for the C-V model based on exponential image sequence generation is proposed in this paper. An exponential multi-scale C-V model with level-set inheritance and boundary inheritance is developed. The main research contributions are as follows: 1) the problems of "holes" and "gaps" in the extracted coastline are solved through small-scale shrinkage, low-pass filtering and area sorting of regions; 2) the initial values of the SDF (Signed Distance Function) and the level set are given by Otsu segmentation, based on the difference in SAR reflection between land and sea, and lie close to the coastline; 3) the computational complexity of the continuous transition between different scales is successfully reduced by the SDF and level-set inheritance. Experimental results show that the method accelerates the formation of the initial level set, shortens the coastline extraction time and, at the same time, removes non-coastline bodies and improves the identification precision of the main coastline, automating the process of coastline segmentation.

  16. Inhibition of Smooth Muscle Proliferation by Urea-Based Alkanoic Acids via Peroxisome Proliferator-Activated Receptor α–Dependent Repression of Cyclin D1

    PubMed Central

    Ng, Valerie Y.; Morisseau, Christophe; Falck, John R.; Hammock, Bruce D.; Kroetz, Deanna L.

    2007-01-01

    Objective Proliferation of smooth muscle cells is implicated in cardiovascular complications. Previously, a urea-based soluble epoxide hydrolase inhibitor was shown to attenuate smooth muscle cell proliferation. We examined the possibility that urea-based alkanoic acids activate the nuclear receptor peroxisome proliferator-activated receptor α (PPARα) and the role of PPARα in smooth muscle cell proliferation. Methods and Results Alkanoic acids transactivated PPARα, induced binding of PPARα to its response element, and significantly induced the expression of PPARα-responsive genes, showing their function as PPARα agonists. Furthermore, the alkanoic acids attenuated platelet-derived growth factor–induced smooth muscle cell proliferation via repression of cyclin D1 expression. Using small interfering RNA to decrease endogenous PPARα expression, it was determined that PPARα was partially involved in the cyclin D1 repression. The antiproliferative effects of alkanoic acids may also be attributed to their inhibitory effects on soluble epoxide hydrolase, because epoxyeicosatrienoic acids alone inhibited smooth muscle cell proliferation. Conclusions These results show that attenuation of smooth muscle cell proliferation by urea-based alkanoic acids is mediated, in part, by the activation of PPARα. These acids may be useful for designing therapeutics to treat diseases characterized by excessive smooth muscle cell proliferation. PMID:16917105

  17. Nonmuscle myosin is regulated during smooth muscle contraction.

    PubMed

    Yuen, Samantha L; Ogut, Ozgur; Brozovich, Frank V

    2009-07-01

    The participation of nonmuscle myosin in force maintenance is controversial. Furthermore, its regulation is difficult to examine in a cellular context, as the light chains of smooth muscle and nonmuscle myosin comigrate under native and denaturing electrophoresis techniques. Therefore, the regulatory light chains of smooth muscle myosin (SM-RLC) and nonmuscle myosin (NM-RLC) were purified, and these proteins were resolved by isoelectric focusing. Using this method, intact mouse aortic smooth muscle homogenates demonstrated four distinct RLC isoelectric variants. These spots were identified as phosphorylated NM-RLC (most acidic), nonphosphorylated NM-RLC, phosphorylated SM-RLC, and nonphosphorylated SM-RLC (most basic). During smooth muscle activation, NM-RLC phosphorylation increased. During depolarization, the increase in NM-RLC phosphorylation was unaffected by inhibition of either Rho kinase or PKC. However, inhibition of Rho kinase blocked the angiotensin II-induced increase in NM-RLC phosphorylation. Additionally, force for angiotensin II stimulation of aortic smooth muscle from heterozygous nonmuscle myosin IIB knockout mice was significantly less than that of wild-type littermates, suggesting that, in smooth muscle, activation of nonmuscle myosin is important for force maintenance. The data also demonstrate that, in smooth muscle, the activation of nonmuscle myosin is regulated by Ca(2+)-calmodulin-activated myosin light chain kinase during depolarization and a Rho kinase-dependent pathway during agonist stimulation.

  18. Linear prediction and single-channel recording.

    PubMed

    Carter, A A; Oswald, R E

    1995-08-01

    The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
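
    A minimal Python sketch of the linear-prediction/SVD idea on a noiseless two-exponential density (the model order, sampling and time constants are assumptions; the full method also estimates the amplitudes and infers the order from the singular values):

      import numpy as np

      dt = 0.1
      t = np.arange(0, 20, dt)
      y = 0.7 * np.exp(-t / 1.0) + 0.3 * np.exp(-t / 5.0)   # dwell-time pdf

      p, N = 2, len(y)                      # assumed number of exponentials
      # linear prediction: y[n] = a1*y[n-1] + ... + ap*y[n-p]
      A = np.column_stack([y[p - k - 1:N - k - 1] for k in range(p)])
      a, *_ = np.linalg.lstsq(A, y[p:], rcond=None)   # SVD-based solve
      z = np.roots(np.r_[1, -a])            # z_k = exp(-dt / tau_k)
      print(np.sort(-dt / np.log(z.real)))  # recovers tau = 1.0 and 5.0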

  19. Modeling of single event transients with dual double-exponential current sources: Implications for logic cell characterization

    DOE PAGES

    Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...

    2015-08-07

Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. An accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
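
    A minimal Python sketch of the waveform such a model produces (all magnitudes and time constants are illustrative assumptions, not extracted cell parameters):

      import numpy as np

      def dbl_exp(t, i0, tau_r, tau_f):
          # conventional double-exponential current pulse
          return i0 * (np.exp(-t / tau_f) - np.exp(-t / tau_r))

      t = np.linspace(0, 500e-12, 1000)                   # 0-500 ps
      i_fast = dbl_exp(t, 1.2e-3, 5e-12, 40e-12)          # prompt component
      i_slow = dbl_exp(t, 0.3e-3, 50e-12, 200e-12)        # diffusion tail
      i_set = i_fast + i_slow                             # dual-source model
      print(np.trapz(i_set, t))                           # collected charge (C)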

  20. Operational modal analysis applied to the concert harp

    NASA Astrophysics Data System (ADS)

    Chomette, B.; Le Carrou, J.-L.

    2015-05-01

Operational modal analysis (OMA) methods are useful for extracting the modal parameters of operating systems. These methods are particularly attractive for investigating the modal basis of string instruments during operation, as they avoid certain disadvantages of conventional methods. However, the excitation in the case of string instruments is not optimal for OMA, due to the presence of damped harmonic components and low noise in the disturbance signal. Therefore, the present study investigates the least-squares complex exponential (LSCE) method and a modified least-squares complex exponential method in the case of a string instrument, in order to identify the modal parameters of the instrument while it is played. The efficiency of the approach is demonstrated experimentally on a concert harp excited by some of its strings, and the two methods are compared to a conventional modal analysis. The results show that OMA allows us to identify the modes that are particularly present in the instrument's response with a good estimation, especially if they are close to the excitation frequency, with the modified LSCE method.

  1. Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping

    PubMed Central

    2011-01-01

    Background Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. Results In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Conclusions Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset. PMID:21978359

  2. Adjusting for sampling variability in sparse data: geostatistical approaches to disease mapping.

    PubMed

    Hampton, Kristen H; Serre, Marc L; Gesink, Dionne C; Pilcher, Christopher D; Miller, William C

    2011-10-06

    Disease maps of crude rates from routinely collected health data indexed at a small geographical resolution pose specific statistical problems due to the sparse nature of the data. Spatial smoothers allow areas to borrow strength from neighboring regions to produce a more stable estimate of the areal value. Geostatistical smoothers are able to quantify the uncertainty in smoothed rate estimates without a high computational burden. In this paper, we introduce a uniform model extension of Bayesian Maximum Entropy (UMBME) and compare its performance to that of Poisson kriging in measures of smoothing strength and estimation accuracy as applied to simulated data and the real data example of HIV infection in North Carolina. The aim is to produce more reliable maps of disease rates in small areas to improve identification of spatial trends at the local level. In all data environments, Poisson kriging exhibited greater smoothing strength than UMBME. With the simulated data where the true latent rate of infection was known, Poisson kriging resulted in greater estimation accuracy with data that displayed low spatial autocorrelation, while UMBME provided more accurate estimators with data that displayed higher spatial autocorrelation. With the HIV data, UMBME performed slightly better than Poisson kriging in cross-validatory predictive checks, with both models performing better than the observed data model with no smoothing. Smoothing methods have different advantages depending upon both internal model assumptions that affect smoothing strength and external data environments, such as spatial correlation of the observed data. Further model comparisons in different data environments are required to provide public health practitioners with guidelines needed in choosing the most appropriate smoothing method for their particular health dataset.

  3. A statistical study of decaying kink oscillations detected using SDO/AIA

    NASA Astrophysics Data System (ADS)

    Goddard, C. R.; Nisticò, G.; Nakariakov, V. M.; Zimovets, I. V.

    2016-01-01

Context. Despite intensive studies of kink oscillations of coronal loops in the last decade, a large-scale statistically significant investigation of the oscillation parameters has not been made using data from the Solar Dynamics Observatory (SDO). Aims: We carry out a statistical study of kink oscillations using extreme ultraviolet imaging data from a previously compiled catalogue. Methods: We analysed 58 kink oscillation events observed by the Atmospheric Imaging Assembly (AIA) on board SDO during its first four years of operation (2010-2014). Parameters of the oscillations, including the initial apparent amplitude, period, length of the oscillating loop, and damping are studied for 120 individual loop oscillations. Results: Analysis of the initial loop displacement and oscillation amplitude leads to the conclusion that the initial loop displacement prescribes the initial amplitude of oscillation in general. The period is found to scale with the loop length, and a linear fit of the data cloud gives a kink speed of Ck = (1330 ± 50) km s^-1. The main body of the data corresponds to kink speeds in the range Ck = (800-3300) km s^-1. Measurements of 52 exponential damping times were made, and it was noted that at least 21 of the damping profiles may be better approximated by a combination of non-exponential and exponential profiles rather than a purely exponential damping envelope. There are nine additional cases where the profile appears to be purely non-exponential and no damping time was measured. A scaling of the exponential damping time with the period is found, following the previously established linear scaling between these two parameters.
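
    A minimal Python sketch of the exponential damping-time measurement (a damped sinusoid fitted by nonlinear least squares; the synthetic displacement series and starting values are assumptions):

      import numpy as np
      from scipy.optimize import curve_fit

      def kink(t, A, tau, P, phi):
          return A * np.exp(-t / tau) * np.sin(2 * np.pi * t / P + phi)

      rng = np.random.default_rng(4)
      t = np.linspace(0, 30, 120)                  # minutes
      y = kink(t, 3.0, 12.0, 5.0, 0.3) + rng.normal(0, 0.1, t.size)

      popt, pcov = curve_fit(kink, t, y, p0=[2.5, 10.0, 5.0, 0.0])
      print(popt)   # amplitude, damping time, period, phase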

  4. Over-fitting Time Series Models of Air Pollution Health Effects: Smoothing Tends to Bias Non-Null Associations Towards the Null.

    EPA Science Inventory

    Background: Simulation studies have previously demonstrated that time-series analyses using smoothing splines correctly model null health-air pollution associations. Methods: We repeatedly simulated season, meteorology and air quality for the metropolitan area of Atlanta from cyc...

  5. A method for the estimation of dual transmissivities from slug tests

    NASA Astrophysics Data System (ADS)

    Wolny, Filip; Marciniak, Marek; Kaczmarek, Mariusz

    2018-03-01

Aquifer homogeneity is usually assumed when interpreting the results of pumping and slug tests, although aquifers are essentially heterogeneous. The aim of this study is to present a method for determining the transmissivities of dual-permeability water-bearing formations based on slug tests such as the pressure-induced permeability test. A bi-exponential rate-of-rise curve is typically observed during many of these tests conducted in heterogeneous formations. The work involved analyzing curves deviating from the exponential rise recorded at the Belchatow Lignite Mine in central Poland, where a significant number of permeability tests have been conducted. In most cases, bi-exponential movement was observed in piezometers with a screen installed in layered sediments, each layer with a different hydraulic conductivity, or in fissured rock. The possibility of identifying the flow properties of these geological formations was analyzed. For each piezometer installed in such formations, a set of two transmissivity values was calculated piecewise based on the interpretation algorithm of the pressure-induced permeability test: one value for the first (steeper) part of the obtained rate-of-rise curve, and a second value for the latter part of the curve. The results of the transmissivity estimation for each piezometer are shown. The discussion presents the limitations of the interpretational method and suggests future modeling plans.
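
    The piecewise interpretation can be imitated by the classical "peeling" of a bi-exponential curve; a minimal Python sketch under assumed time constants and segment boundaries (the mine data and the actual interpretation algorithm are not reproduced):

      import numpy as np

      t = np.linspace(0.0, 600.0, 301)                    # seconds
      h = 0.6 * np.exp(-t / 30) + 0.4 * np.exp(-t / 250)  # synthetic recovery

      late = t > 300                     # flat limb: slow component dominates
      s2, b2 = np.polyfit(t[late], np.log(h[late]), 1)
      tau2, A2 = -1 / s2, np.exp(b2)

      resid = h - A2 * np.exp(-t / tau2) # peel off the slow component
      early = t < 60
      s1, _ = np.polyfit(t[early], np.log(resid[early]), 1)
      print(-1 / s1, tau2)               # ~30 s and ~250 s, one per zone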

  6. Coupled large eddy simulation and discrete element model of bedload motion

    NASA Astrophysics Data System (ADS)

    Furbish, D.; Schmeeckle, M. W.

    2011-12-01

We couple a three-dimensional large eddy simulation of turbulence with a three-dimensional discrete element model of bedload motion. The large eddy simulation of the turbulent fluid is extended into the bed composed of non-moving particles by adding resistance terms to the Navier-Stokes equations in accordance with the Darcy-Forchheimer law. This allows the turbulent velocity and pressure fluctuations to penetrate the bed of discrete particles, and this addition of a porous zone results in turbulence structures above the bed that are similar to previous experimental and numerical results for hydraulically rough beds. For example, we reproduce low-speed streaks that are less coherent than those over smooth beds due to the episodic outflow of fluid from the bed. Local resistance terms are also added to the Navier-Stokes equations to account for the drag of individual moving particles. The interaction of the spherical particles utilizes a standard DEM soft-sphere Hertz model. We use only a simple drag model to calculate the fluid forces on the particles. The model reproduces an exponential distribution of bedload particle velocities that we have found experimentally using high-speed video of a flat bed of moving sand in a recirculating water flume. The exponential distribution of velocity results from the motion of many particles that are nearly constantly in contact with other bed particles and come to rest after short distances, in combination with relatively few particles that are entrained further above the bed and have velocities approaching that of the fluid. Entrainment and motion "hot spots" are evident that are not perfectly correlated with the local, instantaneous fluid velocity. Zones of the bed that have recently experienced motion are more susceptible to motion because of the local configuration of particle contacts. The paradigm of a characteristic saltation hop length in riverine bedload transport has infused many aspects of geomorphic thought, including even bedrock erosion. In light of our theoretical, experimental, and numerical findings supporting the exponential distribution of bedload particle motion, the idea of a characteristic saltation hop should be scrapped or substantially modified.

  7. Algebraic approach to electronic spectroscopy and dynamics.

    PubMed

    Toutounji, Mohamad

    2008-04-28

Lie algebra, Zassenhaus, and parameter differentiation techniques are utilized to break up the exponential of a bilinear Hamiltonian operator into a product of noncommuting exponential operators by virtue of the theory of Wei and Norman [J. Math. Phys. 4, 575 (1963); Proc. Am. Math. Soc. 15, 327 (1964)]. There are three different ways to find the Zassenhaus exponents, namely, binomial expansion, the Suzuki formula, and the q-exponential transformation. A fourth, and most reliable, method is provided. Since the linearly displaced and distorted (curvature change upon excitation/emission) Hamiltonian and the spin-boson Hamiltonian may be classified as bilinear Hamiltonians, the presented algebraic algorithm (exponential operator disentanglement exploiting the six-dimensional Lie algebra case) should be useful in spin-boson problems. Only the linearly displaced and distorted Hamiltonian exponential is treated here. While the spin-boson model is used here only as a demonstration of the idea, the approach is more general and powerful than the specific example treated. The optical linear dipole moment correlation function is algebraically derived using the above-mentioned methods and coherent states. Coherent states are eigenvectors of the bosonic lowering operator a and not of the raising operator a†. While exp(a†) translates coherent states, the operation of exp(a†a†) on coherent states has always been a challenge, as a† has no eigenvectors. Three approaches to, and the results of, that operation are provided. Linear absorption spectra are derived, calculated, and discussed. The linear dipole moment correlation function for the pure quadratic coupling case is expressed in terms of Legendre polynomials to better show the even vibronic transitions in the absorption spectrum. Comparison of the present line shapes to those calculated by other methods is provided. Franck-Condon factors for both linear and quadratic couplings are exactly accounted for by the calculated linear absorption spectra. This new methodology should easily pave the way to calculating the four-point correlation function, F(τ1, τ2, τ3, τ4), from which the optical nonlinear response function may be procured, as evaluating F(τ1, τ2, τ3, τ4) amounts to evaluating the optical linear dipole moment correlation function iteratively over different time intervals, which should allow calculating various optical nonlinear temporal/spectral signals.
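
    For reference, the leading terms of the Zassenhaus product referred to above are (a standard identity, quoted here for orientation rather than taken from the paper):

      \[
        e^{t(A+B)} \;=\; e^{tA}\, e^{tB}\, e^{-\frac{t^{2}}{2}[A,B]}\,
        e^{\frac{t^{3}}{6}\left(2[B,[A,B]] + [A,[A,B]]\right)} \cdots
      \]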

  8. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES

    Turner, A. J.; Jacob, D. J.

    2015-06-30

Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  9. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-01-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  10. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-06-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  11. Penalized spline estimation for functional coefficient regression models.

    PubMed

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan

    2010-04-01

    The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection in a mixed-model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different degrees of smoothness for different functional coefficients, achieved by assigning each coefficient its own penalty λ. We demonstrate the proposed approach with both simulation examples and a real data application.
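
    A minimal sketch of the ridge-type penalized-spline idea with GCV selection of λ; it uses a cubic truncated-power basis as a stand-in for the paper's B-spline basis, and the data are synthetic:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        x = np.sort(rng.uniform(0, 1, n))
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

        # Cubic truncated-power basis with interior knots.
        knots = np.linspace(0.05, 0.95, 20)
        B = np.column_stack([np.ones(n), x, x**2, x**3] +
                            [np.clip(x - k, 0, None)**3 for k in knots])
        # Ridge-type penalty on the knot coefficients only (global shrinkage).
        D = np.diag([0.0] * 4 + [1.0] * knots.size)

        def gcv(lam):
            """Generalized cross-validation score for penalty lam."""
            H = B @ np.linalg.solve(B.T @ B + lam * D, B.T)   # hat matrix
            resid = y - H @ y
            return n * (resid @ resid) / (n - np.trace(H))**2

        lam_best = min(10.0 ** np.arange(-6, 3), key=gcv)
        beta = np.linalg.solve(B.T @ B + lam_best * D, B.T @ y)
        print(lam_best)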

  12. In vivo chlorine and sodium MRI of rat brain at 21.1 T

    PubMed Central

    Elumalai, Malathy; Kitchen, Jason A.; Qian, Chunqi; Gor’kov, Peter L.; Brey, William W.

    2017-01-01

    Object MR imaging of low-gamma nuclei at the ultrahigh magnetic field of 21.1 T provides a new opportunity for understanding a variety of biological processes. Among these, chlorine and sodium are attracting attention for their involvement in brain function and cancer development. Materials and methods MRI of 35Cl and 23Na were performed and relaxation times were measured in vivo in normal rats (n = 3) and in rats with glioma (n = 3) at 21.1 T. The concentrations of both nuclei were evaluated using the center-out back-projection method. Results The T1 relaxation curve of chlorine in the normal rat head was fitted by a bi-exponential function (T1a = 4.8 ms (0.7), T1b = 24.4 ± 7 ms (0.3)) and compared with sodium (T1 = 41.4 ms). Free induction decays (FIDs) of chlorine and sodium in vivo were bi-exponential, with similar rapidly decaying components of T2a∗=0.4 ms and T2a∗=0.53 ms, respectively. The effects of a small acquisition matrix and of the bi-exponential FIDs were assessed for the quantification of chlorine (33.2 mM) and sodium (44.4 mM) in rat brain. Conclusion The study modeled a dramatic effect of the bi-exponential decay on MRI results. The observed increase in chlorine concentration in glioma (~1.5 times) relative to normal brain is consistent with the hypothesis that chlorine is important for tumor progression. PMID:23748497
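
    A hedged sketch of the kind of bi-exponential relaxation fit reported above; the decay constants, fractions and noise level are invented for illustration, not taken from the study:

        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(t, a, t2a, t2b):
            """Bi-exponential FID magnitude: fast fraction a, slow fraction 1 - a."""
            return a * np.exp(-t / t2a) + (1 - a) * np.exp(-t / t2b)

        t = np.linspace(0.01, 10, 80)                        # ms
        rng = np.random.default_rng(1)
        signal = biexp(t, 0.6, 0.4, 5.0) + rng.normal(0, 0.01, t.size)

        (a, t2a, t2b), _ = curve_fit(biexp, t, signal, p0=(0.5, 0.5, 4.0),
                                     bounds=([0, 0.01, 0.5], [1, 2, 50]))
        print(f"fast fraction {a:.2f}, T2a* = {t2a:.2f} ms, T2b* = {t2b:.2f} ms")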

  13. Quantitation of bacteria through adsorption of intracellular biomolecules on carbon paste and screen-printed carbon electrodes and voltammetry of redox-active probes.

    PubMed

    Obuchowska, Agnes

    2008-03-01

    A new electrochemical method for the quantitation of bacteria that is rapid, inexpensive, and amenable to miniaturization is reported. Cyclic voltammetry was used to quantitate M. luteus, C. sporogenes, and E. coli JM105 in exponential and stationary phases, following exposure of screen-printed carbon working electrodes (SPCEs) to lysed culture samples. Ferricyanide was used as a probe. The detection limits (3s) were calculated, and the dynamic ranges for E. coli (exponential and stationary phases), M. luteus (exponential and stationary phases), and C. sporogenes (exponential phase) lysed by lysozyme were 3 x 10(4) to 5 x 10(6) colony-forming units (CFU) mL(-1), 5 x 10(6) to 2 x 10(8) CFU mL(-1) and 3 x 10(3) to 3 x 10(5) CFU mL(-1), respectively. Good overlap was obtained between the calibration curves when the electrochemical signal was plotted against either the dry bacterial weight or the protein concentration in the bacterial lysate. In contrast, unlysed bacteria did not change the electrochemical signal of ferricyanide. The results indicate that the reduction of the electrochemical signal in the presence of the lysate is mainly due to fouling of the electrode by proteins. Similar results were obtained with carbon-paste electrodes, although detection limits were better with SPCEs. The method described herein was applied to the quantitation of bacteria in a cooling tower water sample.

  14. Stochastic exponential synchronization of memristive neural networks with time-varying delays via quantized control.

    PubMed

    Zhang, Wanli; Yang, Shiju; Li, Chuandong; Zhang, Wei; Yang, Xinsong

    2018-08-01

    This paper focuses on the stochastic exponential synchronization of delayed memristive neural networks (MNNs), with the aid of systems with interval parameters established using the concept of the Filippov solution. A new intermittent controller and an adaptive controller with logarithmic quantization are constructed to deal simultaneously with the difficulties induced by time-varying delays, interval parameters, and stochastic perturbations. Moreover, these controllers not only reduce control cost but also save communication channels and bandwidth. Based on novel Lyapunov functions and new analytical methods, several synchronization criteria are established that realize the exponential synchronization of MNNs with stochastic perturbations via intermittent control and adaptive control, with or without logarithmic quantization. Finally, numerical simulations are offered to substantiate our theoretical results. Copyright © 2018 Elsevier Ltd. All rights reserved.
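
    The paper's controllers are not reproduced in the abstract, but a standard logarithmic quantizer of the kind such quantized-control schemes rely on can be sketched as follows (u0 and rho are illustrative parameters):

        import numpy as np

        def log_quantize(v, u0=1.0, rho=0.5):
            """Map v to the nearest quantization level u_i = rho**i * u0 on a log
            scale; the relative error is then bounded by a constant depending only
            on rho, which is the property quantized controllers exploit."""
            if v == 0.0:
                return 0.0
            i = np.round(np.log(abs(v) / u0) / np.log(rho))
            return np.sign(v) * u0 * rho ** i

        # The controller transmits q(u) instead of u, saving bandwidth.
        for u in [0.03, -0.4, 2.7]:
            print(u, "->", log_quantize(u))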

  15. The effect of convective boundary condition on MHD mixed convection boundary layer flow over an exponentially stretching vertical sheet

    NASA Astrophysics Data System (ADS)

    Isa, Siti Suzilliana Putri Mohamed; Arifin, Norihan Md.; Nazar, Roslinda; Bachok, Norfifah; Ali, Fadzilah Md

    2017-12-01

    A theoretical study that describes the magnetohydrodynamic mixed convection boundary layer flow with heat transfer over an exponentially stretching sheet with an exponential temperature distribution has been presented herein. This study is conducted in the presence of convective heat exchange at the surface and its surroundings. The system is controlled by viscous dissipation and internal heat generation effects. The governing nonlinear partial differential equations are converted into ordinary differential equations by a similarity transformation. The converted equations are then solved numerically using the shooting method. The results related to skin friction coefficient, local Nusselt number, velocity and temperature profiles are presented for several sets of values of the parameters. The effects of the governing parameters on the features of the flow and heat transfer are examined in detail in this study.
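
    A minimal sketch of the shooting method on the classical Blasius boundary-layer problem, used here as a stand-in for the paper's similarity equations (which the abstract does not reproduce):

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        # Blasius equation: f''' + 0.5*f*f'' = 0, f(0) = f'(0) = 0, f'(inf) = 1.
        def rhs(eta, y):                       # y = [f, f', f'']
            return [y[1], y[2], -0.5 * y[0] * y[2]]

        def residual(fpp0, eta_max=10.0):
            """Far-field mismatch for a guessed wall value f''(0)."""
            sol = solve_ivp(rhs, (0, eta_max), [0.0, 0.0, fpp0], rtol=1e-8)
            return sol.y[1, -1] - 1.0          # f'(eta_max) should equal 1

        fpp0 = brentq(residual, 0.1, 1.0)      # root-find on the shooting residual
        print(f"f''(0) = {fpp0:.5f}")          # classical value ~ 0.33206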

  16. Performance of statistical process control methods for regional surgical site infection surveillance: a 10-year multicentre pilot study.

    PubMed

    Baker, Arthur W; Haridy, Salah; Salem, Joseph; Ilieş, Iulian; Ergai, Awatef O; Samareh, Aven; Andrianas, Nicholas; Benneyan, James C; Sexton, Daniel J; Anderson, Deverick J

    2017-11-24

    Traditional strategies for surveillance of surgical site infections (SSI) have multiple limitations, including delayed and incomplete outbreak detection. Statistical process control (SPC) methods address these deficiencies by combining longitudinal analysis with graphical presentation of data. We performed a pilot study within a large network of community hospitals to evaluate the performance of SPC methods for detecting SSI outbreaks. We applied conventional Shewhart and exponentially weighted moving average (EWMA) SPC charts to 10 previously investigated SSI outbreaks that occurred from 2003 to 2013. We compared the results of SPC surveillance to the results of traditional SSI surveillance methods. Then, we analysed the performance of modified SPC charts constructed with different outbreak detection rules, EWMA smoothing factors and baseline SSI rate calculations. Conventional Shewhart and EWMA SPC charts both detected 8 of the 10 SSI outbreaks analysed, in each case prior to the date of traditional detection. Among detected outbreaks, conventional Shewhart chart detection occurred a median of 12 months prior to outbreak onset and 22 months prior to traditional detection. Conventional EWMA chart detection occurred a median of 7 months prior to outbreak onset and 14 months prior to traditional detection. Modified Shewhart and EWMA charts additionally detected several outbreaks earlier than the conventional SPC charts. Shewhart and EWMA charts had low false-positive rates when used to analyse separate control-hospital SSI data. Our findings illustrate the potential usefulness and feasibility of real-time SPC surveillance of SSI to rapidly identify outbreaks and improve patient safety. Further study is needed to optimise SPC chart selection and calculation, statistical outbreak detection rules and the process for reacting to signals of potential outbreaks. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
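
    A minimal one-sided EWMA chart in the spirit described above; the smoothing factor, control-limit width, baseline rate and monthly rates are illustrative, not the study's values:

        import numpy as np

        def ewma_signals(rates, baseline, sigma, lam=0.2, L=2.7):
            """EWMA chart: z_t = lam*x_t + (1-lam)*z_{t-1}; signal when z_t
            exceeds the time-varying upper control limit around the baseline."""
            z, out = baseline, []
            for t, x in enumerate(rates, start=1):
                z = lam * x + (1 - lam) * z
                half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
                out.append(bool(z > baseline + half))   # one-sided: SSI increase
            return out

        # Hypothetical monthly SSI rates per 100 procedures.
        rates = [1.1, 0.9, 1.3, 1.0, 1.2, 2.4, 2.8, 3.1, 2.9, 1.0]
        print(ewma_signals(rates, baseline=1.1, sigma=0.4))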

  17. Numerical study of a multigrid method with four smoothing methods for the incompressible Navier-Stokes equations in general coordinates

    NASA Technical Reports Server (NTRS)

    Zeng, S.; Wesseling, P.

    1993-01-01

    The performance of a linear multigrid method using four smoothing methods, called SCGS (Symmetrical Coupled Gauss-Seidel), CLGS (Collective Line Gauss-Seidel), SILU (Scalar ILU), and CILU (Collective ILU), is investigated for the incompressible Navier-Stokes equations in general coordinates, in association with Galerkin coarse grid approximation. Robustness and efficiency are measured and compared by application to test problems. The numerical results show that CILU is the most robust, SILU the least, with CLGS and SCGS in between. CLGS is the best in efficiency; SCGS and CILU follow, and SILU is the worst.
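
    For orientation, a few Gauss-Seidel sweeps on a 1-D Poisson model problem illustrate the kind of cheap smoother a multigrid cycle applies on each level (a scalar toy, not the coupled Navier-Stokes smoothers compared in the paper):

        import numpy as np

        def gauss_seidel_smooth(u, f, sweeps=2):
            """In-place Gauss-Seidel sweeps for -u'' = f on a uniform grid
            with homogeneous Dirichlet boundary values."""
            h2 = 1.0 / (u.size - 1) ** 2
            for _ in range(sweeps):
                for i in range(1, u.size - 1):
                    u[i] = 0.5 * (u[i - 1] + u[i + 1] + h2 * f[i])
            return u

        n = 65
        u = np.random.default_rng(4).random(n)
        u[0] = u[-1] = 0.0                       # boundary conditions
        u = gauss_seidel_smooth(u, np.ones(n))
        print(round(float(np.abs(u).max()), 4))  # high-frequency error is damped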

  18. Investigation on filter method for smoothing spiral phase plate

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanhang; Wen, Shenglin; Luo, Zijian; Tang, Caixue; Yan, Hao; Yang, Chunlin; Liu, Mincai; Zhang, Qinghua; Wang, Jian

    2018-03-01

    A spiral phase plate (SPP) for generating vortex hollow beams offers high efficiency in various applications. However, it is difficult to fabricate an ideal spiral phase plate because of its continuously varying helical phase and its discontinuous phase step. This paper describes the production of a continuous spiral phase plate using filtering methods. Numerical simulations indicate that different filtering methods, including spatial-domain and frequency-domain filters, have distinct impacts on the surface topography of the SPP and on the characteristics of the optical vortex. The experimental results reveal that the spatial Gaussian filter method for smoothing the SPP is suitable for the Computer Controlled Optical Surfacing (CCOS) technique and yields good optical properties.
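
    A sketch of spatial-domain Gaussian filtering applied to an idealized SPP height map; the grid size and sigma are illustrative, and the real process smooths a machined surface rather than a synthetic phase ramp:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Idealized spiral phase plate height map: one helical 0..2*pi ramp.
        n = 256
        yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        height = np.mod(np.arctan2(yy, xx), 2 * np.pi)

        # Spatial Gaussian filter; sigma (in pixels) plays the role of the
        # tool influence function width in the CCOS polishing step.
        smoothed = gaussian_filter(height, sigma=3.0, mode="wrap")
        print(float(np.abs(smoothed - height).max()))  # largest change at the step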

  19. Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments.

    PubMed

    Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke E

    2018-03-01

    Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode, to explain the maximum variance of the data. Functional PARAFAC permits the entities in different modes to be smooth functions or curves, varying over a continuum, rather than a collection of unconnected responses. The existing functional PARAFAC methods handle functions of a one-dimensional argument (e.g., time) only. In this paper, we propose a new extension of functional PARAFAC for handling three-way data whose responses are sequenced along both a two-dimensional domain (e.g., a plane with x- and y-axis coordinates) and a one-dimensional argument. Technically, the proposed method combines PARAFAC with basis function expansion approximations, using a set of piecewise quadratic finite element basis functions for estimating two-dimensional smooth functions and a set of one-dimensional basis functions for estimating one-dimensional smooth functions. In a simulation study, the proposed method appeared to outperform the conventional PARAFAC. We apply the method to EEG data to demonstrate its empirical usefulness.

  20. Pressure resistance of cold-shocked Escherichia coli O157:H7 in ground beef, beef gravy and peptone water.

    PubMed

    Baccus-Taylor, G S H; Falloon, O C; Henry, N

    2015-06-01

    (i) To study the effects of cold shock on Escherichia coli O157:H7 cells. (ii) To determine if cold-shocked E. coli O157:H7 cells at stationary and exponential phases are more pressure-resistant than their non-cold-shocked counterparts. (iii) To investigate the baro-protective role of growth media (0·1% peptone water, beef gravy and ground beef). Quantitative estimates of lethality and sublethal injury were made using the differential plating method. There were no significant differences (P > 0·05) in the number of cells killed; cold-shocked or non-cold-shocked. Cells grown in ground beef (stationary and exponential phases) experienced lowest death compared with peptone water and beef gravy. Cold-shock treatment increased the sublethal injury to cells cultured in peptone water (stationary and exponential phases) and ground beef (exponential phase), but decreased the sublethal injury to cells in beef gravy (stationary phase). Cold shock did not confer greater resistance to stationary or exponential phase cells pressurized in peptone water, beef gravy or ground beef. Ground beef had the greatest baro-protective effect. Real food systems should be used in establishing food safety parameters for high-pressure treatments; micro-organisms are less resistant in model food systems, the use of which may underestimate the organisms' resistance. © 2015 The Society for Applied Microbiology.

  1. Modeling Electrokinetic Flows by the Smoothed Profile Method

    PubMed Central

    Luo, Xian; Beskok, Ali; Karniadakis, George Em

    2010-01-01

    We propose an efficient modeling method for electrokinetic flows based on the Smoothed Profile Method (SPM) [1–4] and spectral element discretizations. The new method allows for arbitrary differences in the electrical conductivities between the charged surfaces and the surrounding electrolyte solution. The electrokinetic forces are included into the flow equations so that the Poisson-Boltzmann and electric charge continuity equations are cast into forms suitable for SPM. The method is validated by benchmark problems of electroosmotic flow in straight channels and electrophoresis of charged cylinders. We also present simulation results of electrophoresis of charged microtubules, and show that the simulated electrophoretic mobility and anisotropy agree with the experimental values. PMID:20352076

  2. Modal smoothing for analysis of room reflections measured with spherical microphone and loudspeaker arrays.

    PubMed

    Morgenstern, Hai; Rafaely, Boaz

    2018-02-01

    Spatial analysis of room acoustics is an ongoing research topic. Microphone arrays have been employed for spatial analyses with an important objective being the estimation of the direction-of-arrival (DOA) of direct sound and early room reflections using room impulse responses (RIRs). An optimal method for DOA estimation is the multiple signal classification algorithm. When RIRs are considered, this method typically fails due to the correlation of room reflections, which leads to rank deficiency of the cross-spectrum matrix. Preprocessing methods for rank restoration, which may involve averaging over frequency, for example, have been proposed exclusively for spherical arrays. However, these methods fail in the case of reflections with equal time delays, which may arise in practice and could be of interest. In this paper, a method is proposed for systems that combine a spherical microphone array and a spherical loudspeaker array, referred to as multiple-input multiple-output systems. This method, referred to as modal smoothing, exploits the additional spatial diversity for rank restoration and succeeds where previous methods fail, as demonstrated in a simulation study. Finally, combining modal smoothing with a preprocessing method is proposed in order to increase the number of DOAs that can be estimated using low-order spherical loudspeaker arrays.

  3. Linearized radiative transfer models for retrieval of cloud parameters from EPIC/DSCOVR measurements

    NASA Astrophysics Data System (ADS)

    Molina García, Víctor; Sasi, Sruthy; Efremenko, Dmitry S.; Doicu, Adrian; Loyola, Diego

    2018-07-01

    In this paper, we describe several linearized radiative transfer models which can be used for the retrieval of cloud parameters from EPIC (Earth Polychromatic Imaging Camera) measurements. The approaches under examination are (1) the linearized forward approach, represented in this paper by the linearized discrete ordinate and matrix operator methods with matrix exponential, and (2) the forward-adjoint approach based on the discrete ordinate method with matrix exponential. To enhance the performance of the radiative transfer computations, the correlated k-distribution method and the Principal Component Analysis (PCA) technique are used. We provide a compact description of the proposed methods, as well as a numerical analysis of their accuracy and efficiency when simulating EPIC measurements in the oxygen A-band channel at 764 nm. We found that the computation time of the forward-adjoint approach using the correlated k-distribution method in conjunction with PCA is approximately 13 s for simultaneously computing the derivatives with respect to cloud optical thickness and cloud top height.

  4. A fast quadrature-based numerical method for the continuous spectrum biphasic poroviscoelastic model of articular cartilage.

    PubMed

    Stuebner, Michael; Haider, Mansoor A

    2010-06-18

    A new and efficient method for numerical solution of the continuous spectrum biphasic poroviscoelastic (BPVE) model of articular cartilage is presented. Development of the method is based on a composite Gauss-Legendre quadrature approximation of the continuous spectrum relaxation function that leads to an exponential series representation. The separability property of the exponential terms in the series is exploited to develop a numerical scheme that can be reduced to an update rule requiring retention of the strain history at only the previous time step. The cost of the resulting temporal discretization scheme is O(N) for N time steps. Application and calibration of the method is illustrated in the context of a finite difference solution of the one-dimensional confined compression BPVE stress-relaxation problem. Accuracy of the numerical method is demonstrated by comparison to a theoretical Laplace transform solution for a range of viscoelastic relaxation times that are representative of articular cartilage. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
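
    The separability trick can be sketched in one dimension: each exponential term of the relaxation kernel is updated recursively from its value at the previous time step only, so the whole strain history never has to be stored. The spectrum nodes, weights and loading below are invented for illustration:

        import numpy as np

        # Relaxation kernel G(t) = g_inf + sum_k g_k * exp(-t / tau_k).
        tau = np.array([0.01, 0.1, 1.0, 10.0])   # spectrum nodes (s)
        g = np.array([0.4, 0.3, 0.2, 0.1])       # quadrature weights
        g_inf = 1.0

        dt, n_steps, rate = 1e-3, 5000, 0.01     # constant strain-rate ramp
        decay = np.exp(-dt / tau)
        I = np.zeros_like(tau)                   # one history variable per term
        strain, stress = 0.0, 0.0
        for _ in range(n_steps):
            # Exact update for constant strain rate over one step; total cost is
            # O(N) in the number of time steps since only the previous I is kept.
            I = decay * I + g * tau * (1 - decay) * rate
            strain += rate * dt
            stress = g_inf * strain + I.sum()
        print(round(stress, 5))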

  5. [A study of different polishing techniques for amalgams and glass-cermet cement by scanning electron microscope (SEM)].

    PubMed

    Kakaboura, A; Vougiouklakis, G; Argiri, G

    1989-01-01

    Finishing and polishing an amalgam restoration is considered an important and necessary step of the restorative procedure, and various polishing techniques have been recommended to achieve a smooth amalgam surface. The aim of this study was to investigate the influence of three different polishing treatments on the marginal integrity and surface smoothness of restorations made of three commercially available amalgams and a glass-cermet cement. The materials used were the amalgams Amalcap (Vivadent), Dispersalloy (Johnson and Johnson) and Duralloy (Degussa), and the glass-cermet Katac-Silver (ESPE). The occlusal surfaces of the restorations were polished by the following methods: I) No. 4 round bur, then rubber cup with zinc oxide paste on a small brush; II) No. 4 round bur, then brown, green and super-green (Shofu) polishing cups and points in succession; and III) 12-blade amalgam polishing bur, then smooth amalgam polishing bur. Photographs of the unpolished and polished surfaces of the restorations were taken with a scanning electron microscope to evaluate the polishing techniques. An improvement in the marginal integrity and surface smoothness of all amalgam restorations was observed after the specimens had been polished with the three techniques. Method II, which included the Shofu polishers, gave the best results in comparison with methods I and III. Polishing of the glass-cermet cement was impossible with the examined techniques.

  6. Manufacture of patient-specific vascular replicas for endovascular simulation using fast, low-cost method

    NASA Astrophysics Data System (ADS)

    Kaneko, Naoki; Mashiko, Toshihiro; Ohnishi, Taihei; Ohta, Makoto; Namba, Katsunari; Watanabe, Eiju; Kawai, Kensuke

    2016-12-01

    Patient-specific vascular replicas are essential for simulating endovascular treatment and for vascular research. The inside of a silicone replica must be smooth so that interventional devices can be manipulated without resistance. In this report, we demonstrate the fabrication of patient-specific silicone vessels with a low-cost desktop 3D printer. We show that the surface of an acrylonitrile butadiene styrene (ABS) model printed by the 3D printer can be smoothed by a single dipping in ABS solvent in a time-dependent manner, where a short dip has less effect on the shape of the model. The vascular mold is coated with transparent silicone, and the ABS mold is dissolved after the silicone is cured. Interventional devices can pass through the inside of the smoothed silicone vessel with a lower pushing force than through a vessel without smoothing. The material cost and time required to fabricate the silicone vessel are about USD 2 and 24 h, respectively, much lower than for current fabrication methods. This fast and low-cost method offers the possibility of testing strategies before attempting particularly difficult cases, while improving training in endovascular therapy, enabling the trialing of new devices, and broadening the scope of vascular research.

  7. Boron hydride polymer coated substrates

    DOEpatents

    Pearson, R.K.; Bystroff, R.I.; Miller, D.E.

    1986-08-27

    A method is disclosed for coating a substrate with a uniformly smooth layer of a boron hydride polymer. The method comprises providing a reaction chamber which contains the substrate and the boron hydride plasma. A boron hydride feed stock is introduced into the chamber simultaneously with the generation of a plasma discharge within the chamber. A boron hydride plasma of ions, electrons and free radicals which is generated by the plasma discharge interacts to form a uniformly smooth boron hydride polymer which is deposited on the substrate.

  8. Boron hydride polymer coated substrates

    DOEpatents

    Pearson, Richard K.; Bystroff, Roman I.; Miller, Dale E.

    1987-01-01

    A method is disclosed for coating a substrate with a uniformly smooth layer of a boron hydride polymer. The method comprises providing a reaction chamber which contains the substrate and the boron hydride plasma. A boron hydride feed stock is introduced into the chamber simultaneously with the generation of a plasma discharge within the chamber. A boron hydride plasma of ions, electrons and free radicals which is generated by the plasma discharge interacts to form a uniformly smooth boron hydride polymer which is deposited on the substrate.

  9. Method For Identifying Sedimentary Bodies From Images And Its Application To Mineral Exploration

    NASA Technical Reports Server (NTRS)

    Wilkinson, Murray Justin (Inventor)

    2006-01-01

    A method is disclosed for identifying a sediment accumulation from an image of a part of the earth's surface. The method includes identifying a topographic discontinuity from the image. A river which crosses the discontinuity is identified from the image. From the image, paleocourses of the river are identified which diverge from a point where the river crosses the discontinuity. The paleocourses are disposed on a topographically low side of the discontinuity. A smooth surface which emanates from the point is identified. The smooth surface is also disposed on the topographically low side of the point.

  10. Smoothed Residual Plots for Generalized Linear Models. Technical Report #450.

    ERIC Educational Resources Information Center

    Brant, Rollin

    Methods for examining the viability of assumptions underlying generalized linear models are considered. By appealing to the likelihood, a natural generalization of the raw residual plot for normal theory models is derived and is applied to investigating potential misspecification of the linear predictor. A smooth version of the plot is also…

  11. Polarization Smoothing Generalized MUSIC Algorithm with Polarization Sensitive Array for Low Angle Estimation.

    PubMed

    Tan, Jun; Nie, Zaiping

    2018-05-12

    Direction of Arrival (DOA) estimation of low-altitude targets is difficult due to multipath coherent interference from the ground-reflection image of the targets, especially for very high frequency (VHF) radars, whose antennae are severely restricted in aperture and height. The polarization smoothing generalized multiple signal classification (MUSIC) algorithm, which combines polarization smoothing with the generalized MUSIC algorithm for polarization-sensitive arrays (PSAs), is proposed in this paper to solve this problem. Firstly, polarization smoothing pre-processing is exploited to eliminate the coherence between the direct and specular signals. Secondly, we construct the generalized MUSIC algorithm for low-angle estimation. Finally, based on the geometry of the symmetric multipath model, the proposed algorithm converts the two-dimensional search into a one-dimensional search, thus reducing the computational burden. Numerical results are provided to verify the effectiveness of the proposed method, showing that the proposed algorithm significantly improves angle-estimation performance in the low-angle region compared with the available methods, especially when the grazing angle is near zero.

  12. High-order boundary integral equation solution of high frequency wave scattering from obstacles in an unbounded linearly stratified medium

    NASA Astrophysics Data System (ADS)

    Barnett, Alex H.; Nelson, Bradley J.; Mahoney, J. Matthew

    2015-09-01

    We apply boundary integral equations for the first time to the two-dimensional scattering of time-harmonic waves from a smooth obstacle embedded in a continuously-graded unbounded medium. In the case we solve, the square of the wavenumber (refractive index) varies linearly in one coordinate, i.e. (Δ + E + x₂)u(x₁, x₂) = 0 where E is a constant; this models quantum particles of fixed energy in a uniform gravitational field, and has broader applications to stratified media in acoustics, optics and seismology. We evaluate the fundamental solution efficiently with exponential accuracy via numerical saddle-point integration, using the truncated trapezoid rule with typically 10² nodes, with an effort that is independent of the frequency parameter E. By combining with a high-order Nyström quadrature, we are able to solve the scattering from obstacles 50 wavelengths across to 11 digits of accuracy in under a minute on a desktop or laptop.
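
    The exponential accuracy of the truncated trapezoid rule on rapidly decaying analytic integrands, which the saddle-point evaluation above relies on, can be seen on a simple Gaussian integral:

        import numpy as np

        # The integral of exp(-x^2) over R equals sqrt(pi); the truncated trapezoid
        # rule converges exponentially, so ~10^2 nodes reach machine precision.
        for n in [20, 40, 80, 160]:
            x, h = np.linspace(-8.0, 8.0, n, retstep=True)
            f = np.exp(-x ** 2)
            approx = h * (f.sum() - 0.5 * (f[0] + f[-1]))
            print(n, abs(approx - np.sqrt(np.pi)))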

  13. On Energy Inequality for the Problem on the Evolution of Two Fluids of Different Types Without Surface Tension

    NASA Astrophysics Data System (ADS)

    Denisova, Irina Vlad.

    2015-03-01

    The paper deals with the motion of two immiscible viscous fluids in a container, one of the fluids being compressible and the other incompressible. The interface between the fluids is an unknown closed surface where surface tension is neglected. We assume the compressible fluid to be barotropic, the pressure being given by an arbitrary smooth increasing function. This problem is considered in anisotropic Sobolev-Slobodetskiǐ spaces. We show that the L²-norms of the velocity and of the deviation of the compressible fluid density from its mean value decay exponentially with respect to time. The proof is based on a local existence theorem (Denisova, Interfaces Free Bound 2:283-312, 2000) and on the idea of constructing a generalized energy function, proposed by Padula (J Math Fluid Mech 1:62-77, 1999). In addition, we eliminate the restrictions on the viscosities which appeared in Denisova (Interfaces Free Bound 2:283-312, 2000).

  14. Cast aluminium single crystals cross the threshold from bulk to size-dependent stochastic plasticity

    NASA Astrophysics Data System (ADS)

    Krebs, J.; Rao, S. I.; Verheyden, S.; Miko, C.; Goodall, R.; Curtin, W. A.; Mortensen, A.

    2017-07-01

    Metals are known to exhibit mechanical behaviour at the nanoscale different to bulk samples. This transition typically initiates at the micrometre scale, yet existing techniques to produce micrometre-sized samples often introduce artefacts that can influence deformation mechanisms. Here, we demonstrate the casting of micrometre-scale aluminium single-crystal wires by infiltration of a salt mould. Samples have millimetre lengths, smooth surfaces, a range of crystallographic orientations, and a diameter D as small as 6 μm. The wires deform in bursts, at a stress that increases with decreasing D. Bursts greater than 200 nm account for roughly 50% of wire deformation and have exponentially distributed intensities. Dislocation dynamics simulations show that single-arm sources that produce large displacement bursts halted by stochastic cross-slip and lock formation explain microcast wire behaviour. This microcasting technique may be extended to several other metals or alloys and offers the possibility of exploring mechanical behaviour spanning the micrometre scale.

  15. Artificial Neural Network versus Linear Models Forecasting Doha Stock Market

    NASA Astrophysics Data System (ADS)

    Yousif, Adil; Elfaki, Faiz

    2017-12-01

    The purpose of this study is to examine the instability of the Doha stock market and to develop forecasting models. Linear time series models are used and compared with a nonlinear Artificial Neural Network (ANN), namely the Multilayer Perceptron (MLP) technique. The study aims to establish the most useful model based on daily and monthly data collected from the Qatar exchange for the period from January 2007 to January 2015. Models are proposed for the general index of the Qatar stock exchange and also for use in several other sectors. With the help of these models, the Doha stock market index and various sectors were predicted. Various time series techniques were used to study and analyze the data trends and produce appropriate results. After applying several models, such as the quadratic trend model, the double exponential smoothing model, and ARIMA, it was concluded that ARIMA(2,2) was the most suitable linear model for the daily general index. However, the ANN model was found to be more accurate than the time series models.
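
    For reference, Holt's double exponential smoothing, one of the linear models compared above, can be sketched as follows (the index values are made up, not Qatar-exchange data, and alpha/beta are illustrative):

        def holt_forecast(series, alpha=0.5, beta=0.3, horizon=5):
            """Holt's double exponential smoothing: level plus trend components."""
            level, trend = series[0], series[1] - series[0]
            for x in series[1:]:
                prev = level
                level = alpha * x + (1 - alpha) * (level + trend)
                trend = beta * (level - prev) + (1 - beta) * trend
            return [level + (h + 1) * trend for h in range(horizon)]

        index = [9650, 9710, 9705, 9820, 9900, 9875, 9960, 10040]
        print(holt_forecast(index))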

  16. Random Walks on Homeo(S¹)

    NASA Astrophysics Data System (ADS)

    Malicet, Dominique

    2017-12-01

    In this paper, we study random walks g_n = f_{n-1} ⋯ f_0 on the group Homeo(S¹) of homeomorphisms of the circle, where the homeomorphisms f_k are chosen randomly and independently with respect to the same probability measure ν. We prove that, under the sole condition that there is no probability measure invariant under ν-almost every homeomorphism, the random walk almost surely contracts small intervals. This generalizes what has been known on the subject until now, since various conditions on ν had previously been imposed in order to obtain the contraction phenomenon. Moreover, we obtain the surprising fact that the rate of contraction is exponential, even in the absence of smoothness assumptions on the f_k's. We deduce various dynamical consequences for the random walk (g_n): finiteness of ergodic stationary measures, distribution of the trajectories, asymptotic law of the evaluations, etc. The proof of the main result is based on a modification of the Ávila-Viana invariance principle, which works for continuous cocycles on a space fibred in circles.
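
    The contraction phenomenon can be watched numerically: compose random circle homeomorphisms (a rotation plus a smooth bump, chosen so that no common invariant measure is expected) and track the length of a small interval. This is a hedged toy, not the paper's construction:

        import numpy as np

        rng = np.random.default_rng(2)

        def random_circle_map(x):
            """One random homeomorphism of R/Z: rotation plus a smooth bump
            (|eps| < 1 keeps the lift strictly increasing, hence a homeomorphism)."""
            a = rng.uniform(0.0, 1.0)
            eps = rng.uniform(-0.9, 0.9)
            return np.mod(x + a + eps * np.sin(2 * np.pi * x) / (2 * np.pi), 1.0)

        x = np.array([0.0, 0.01])           # endpoints of a small interval
        for n in range(1, 201):
            x = random_circle_map(x)        # the same map hits both endpoints
            if n % 50 == 0:
                d = np.mod(x[1] - x[0], 1.0)
                print(n, min(d, 1.0 - d))   # typically decays roughly exponentially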

  17. New results for global exponential synchronization in neural networks via functional differential inclusions.

    PubMed

    Wang, Dongshu; Huang, Lihong; Tang, Longkun

    2015-08-01

    This paper is concerned with the synchronization dynamics of a class of delayed neural networks with discontinuous neuron activations. Continuous and discontinuous state-feedback controllers are designed such that the neural network model can realize exponential complete synchronization, in view of functional differential inclusion theory, the Lyapunov functional method, and inequality techniques. The results proposed here are easy to verify and are also applicable to neural networks with continuous activations. Finally, some numerical examples show the applicability and effectiveness of our main results.

  18. Statistical methods to estimate treatment effects from multichannel electroencephalography (EEG) data in clinical trials.

    PubMed

    Ma, Junshui; Wang, Shubing; Raubertas, Richard; Svetnik, Vladimir

    2010-07-15

    With the increasing popularity of using electroencephalography (EEG) to reveal treatment effects in drug-development clinical trials, the vast volume and complex nature of EEG data make this an intriguing but challenging topic. In this paper the statistical analysis methods recommended by the EEG community, along with methods frequently used in the published literature, are first reviewed. A straightforward adjustment of the existing methods to handle multichannel EEG data is then introduced. In addition, based on the spatial smoothness property of EEG data, a new category of statistical methods is proposed. The new methods use a linear combination of low-degree spherical harmonic (SPHARM) basis functions to represent a spatially smoothed version of the EEG data on the scalp, which is close to a sphere in shape. In total, seven statistical methods, including both the existing and the newly proposed methods, are applied to two clinical datasets to compare their power to detect a drug effect. Contrary to the EEG community's recommendation, our results suggest that (1) the nonparametric method does not outperform its parametric counterpart; and (2) including baseline data in the analysis does not always improve statistical power. In addition, our results recommend that (3) simple paired statistical tests should be avoided due to their poor power; and (4) the proposed spatially smoothed methods perform better than their unsmoothed versions. Copyright 2010 Elsevier B.V. All rights reserved.
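
    A sketch of the proposed smoothing idea: project noisy channel values onto a low-degree real spherical-harmonic basis by least squares. Electrode positions and data are synthetic, and the exact basis and degree used in the paper are assumptions here:

        import numpy as np
        from scipy.special import sph_harm

        rng = np.random.default_rng(5)
        n_ch = 64                                   # channels on the upper sphere
        theta = rng.uniform(0, 2 * np.pi, n_ch)     # azimuth
        phi = rng.uniform(0, 0.5 * np.pi, n_ch)     # polar angle
        values = np.cos(phi) + 0.3 * rng.normal(size=n_ch)   # noisy channel data

        # Real-valued SPHARM design matrix, degrees 0..3.
        cols = []
        for deg in range(4):
            for m in range(-deg, deg + 1):
                y = sph_harm(abs(m), deg, theta, phi)
                cols.append(np.real(y) if m >= 0 else np.imag(y))
        B = np.column_stack(cols)

        coef, *_ = np.linalg.lstsq(B, values, rcond=None)
        smoothed = B @ coef                  # spatially smoothed channel values
        print(float(np.std(values - smoothed)))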

  19. A multi-sensor RSS spatial sensing-based robust stochastic optimization algorithm for enhanced wireless tethering.

    PubMed

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-12-12

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the "server-relay-client" framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions.
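
    A toy version of the RSS-balancing loop: an exponential moving average pre-smooths the noisy RSS samples, and a stochastic-gradient step moves the relay toward equal smoothed RSS on its two links. The path-loss model, gains and geometry are invented, not the paper's:

        import numpy as np

        rng = np.random.default_rng(3)

        def rss(d):
            """Log-distance path loss plus shadowing noise, in dB (hypothetical)."""
            return -40.0 - 20.0 * np.log10(max(d, 0.1)) + rng.normal(0, 2.0)

        def ema(prev, sample, alpha=0.3):
            """Exponential moving average used to smooth the RSS stream."""
            return alpha * sample + (1 - alpha) * prev

        server, client, p = 0.0, 10.0, 2.0         # relay starts near the server
        zs, zc = rss(p - server), rss(client - p)
        for _ in range(300):
            zs = ema(zs, rss(p - server))          # smoothed server-link RSS
            zc = ema(zc, rss(client - p))          # smoothed client-link RSS
            p += 0.02 * (zs - zc)                  # gradient-style balancing step
        print(round(p, 2))                         # settles near the midpoint, ~5.0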

  20. A Multi-Sensor RSS Spatial Sensing-Based Robust Stochastic Optimization Algorithm for Enhanced Wireless Tethering

    PubMed Central

    Parasuraman, Ramviyas; Fabry, Thomas; Molinari, Luca; Kershaw, Keith; Di Castro, Mario; Masi, Alessandro; Ferre, Manuel

    2014-01-01

    The reliability of wireless communication in a network of mobile wireless robot nodes depends on the received radio signal strength (RSS). When the robot nodes are deployed in hostile environments with ionizing radiations (such as in some scientific facilities), there is a possibility that some electronic components may fail randomly (due to radiation effects), which causes problems in wireless connectivity. The objective of this paper is to maximize robot mission capabilities by maximizing the wireless network capacity and to reduce the risk of communication failure. Thus, in this paper, we consider a multi-node wireless tethering structure called the “server-relay-client” framework that uses (multiple) relay nodes in between a server and a client node. We propose a robust stochastic optimization (RSO) algorithm using a multi-sensor-based RSS sampling method at the relay nodes to efficiently improve and balance the RSS between the source and client nodes to improve the network capacity and to provide redundant networking abilities. We use pre-processing techniques, such as exponential moving averaging and spatial averaging filters on the RSS data for smoothing. We apply a receiver spatial diversity concept and employ a position controller on the relay node using a stochastic gradient ascent method for self-positioning the relay node to achieve the RSS balancing task. The effectiveness of the proposed solution is validated by extensive simulations and field experiments in CERN facilities. For the field trials, we used a youBot mobile robot platform as the relay node, and two stand-alone Raspberry Pi computers as the client and server nodes. The algorithm has been proven to be robust to noise in the radio signals and to work effectively even under non-line-of-sight conditions. PMID:25615734
